
I installed Windows 10 the other week - it kinda blew my mind how poor the install experience was.

The ISO contained files greater than 4GB, which breaks FAT32 (its maximum file size is 4GB), and I'm sure many people are still using FAT32 on flash drives. So I had to use an MS command-line tool to split the WIM files manually and edit the install files. Why doesn't the installer just use smaller archive files?
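
(For anyone else who hits this: if I remember right, the splitting tool is DISM, and the invocation looks something like the following, where /FileSize is the maximum chunk size in MB and the paths are just examples. Setup is supposed to pick up the .swm pieces automatically if they're named install.swm, install2.swm, and so on in the sources directory:)

    Dism /Split-Image /ImageFile:D:\sources\install.wim ^
         /SWMFile:E:\sources\install.swm /FileSize:3800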




On my Win10 system, that directory is ~600MB, with ~300 files and 17 directories. Not advocating for this system, I just don't think it's that much of a problem.

So Windows has to actually render out the filenames and sizes somewhere to get the total amount of space used by the files?

That seems a bit ... non-optimal.
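
(It does; NTFS doesn't keep a precomputed recursive size per directory, so Explorer walks the whole tree. You can watch the same walk from a prompt — `dir /s` prints the grand total Explorer would show; the directory below is just an example:)

    dir /s /a "C:\Windows\Installer"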


I've rarely had this work successfully on Windows: normally, after several hours, it pops up an indecipherable error message, presumably because it couldn't make the filesystem as small as I asked for, for whatever reason.
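
(Assuming this is the Disk Management volume-shrink operation: diskpart will at least tell you up front how far it thinks it can shrink, which avoids the multi-hour surprise. The volume number and size are made up, and sizes are in MB:)

    diskpart
    DISKPART> select volume 2
    DISKPART> shrink querymax
    DISKPART> shrink desired=10240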


Do you really do your work on a computer where an 80MB install is a problem? No argument on the file size limit.

Seems like a strange choice to have a maximum file size several orders of magnitude larger than your maximum volume size.

"All I can say is, this article is the tip of the ice berg on Windows I/O weirdness."

Well, then, is there a more detailed summary than this one that's accessible?

This one looks very useful and I'll use it, but on the point of more info, it'd be nice to know how these limits differ across Windows versions.

For example, I've never been sure about the path length of 260 and the filename length of 255. I seem to recall these were a little different in earlier versions, the filename limit being 254 for instance. Can anyone clear that up for me?

Incidentally, I hit the 255/260 limit regularly; it's a damn nuisance when copying stops because the path is, say, 296 or 320, or more.
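
(As far as I know, both numbers are real and have been stable for a long time: 260 is Win32's MAX_PATH, which covers the drive letter, colon, backslash, and the terminating null, leaving 256 characters for the path itself, while 255 is NTFS's per-component filename limit. On Windows 10 1607+ you can also opt in to longer paths system-wide, for applications that declare long-path support in their manifest:)

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" ^
        /v LongPathsEnabled /t REG_DWORD /d 1 /f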


... wow. Instead of hashing the file contents, it just checked the file size? Who came up with this algorithm?! There are plenty of applications that use files of fixed large sizes initialized at install time, either for performance reasons or simply so that if you're going to run out of space, you do so during interactive installation rather than silently breaking later on.
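
(For anyone who wants to do the comparison the right way by hand, Windows ships a hashing tool, certutil; the path here is a placeholder:)

    certutil -hashfile C:\path\to\file.bin SHA256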

Just naively translating the registry into an NTFS directory structure would require 1KB per value, simply because that's the size of a file record (NTFS already has an optimization to store small files directly in the file record, if the data fits in alongside all the attributes and ACLs).

Also, the Windows filesystem driver stack is not very efficient at accessing many small files. It's built for flexibility and security, not speed.
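
(You can check the file record size on your own volume; fsutil reports it — look for the "Bytes Per FileRecord Segment" line, which reads 1024 on typical volumes:)

    fsutil fsinfo ntfsinfo C: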


It's probably not really 20GB. Most of what's in WinSxS is hard links to files in other directories, often multiple links to the same file, and the Windows shell "Properties" dialog counts every "copy" when calculating total size.
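
(Both halves of that are checkable: DISM reports the component store's actual size with hard links accounted for, and fsutil lists every name a given file is linked under — notepad.exe is a handy test since it's usually hardlinked into WinSxS:)

    Dism /Online /Cleanup-Image /AnalyzeComponentStore
    fsutil hardlink list C:\Windows\System32\notepad.exe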

Aren't the files too big?

If you're really that sensitive to size, you may want to try 7z. I can usually get archives a few percent smaller than xz, with faster decompression to boot. Of course, then you might need to install a 7z library, which could be an issue.
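
(For reference, maximum-compression 7-Zip from the command line looks roughly like this; the archive name and input directory are placeholders:)

    7z a -t7z -mx=9 archive.7z data\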

I edited (more than once) an assembly for AI that was almost 100MB in size, in Notepad. It took a few minutes to load, but then worked just fine. That was almost ten years back. I can't comprehend how something today can't open a 2MB file.

That's what gets me: why did the file get to 20GB? At that point, just ship a SQLite file.

This has very little to do with Windows itself. The issue is Explorer, which tries to estimate sizes and move the files to the Recycle Bin. It also likely checks all the permissions while counting.

Try the command line: "rmdir /s" and it's quick.
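
(Adding /q skips the "Are you sure?" prompt so it runs unattended; the path is an example:)

    rmdir /s /q C:\path\to\big\directory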


> Win32 on the other hand usually caps everything at 260 total (MAX_FILE_LIMIT).

Nitpick: It’s MAX_PATH :) Also, there is a way around the limitation: prefix all (absolute) file paths with \\?\

https://docs.microsoft.com/en-us/windows/win32/fileio/maximu...
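
(In my experience the prefix also works from cmd for the common case of deleting a too-long path that ordinary del/rd choke on. A made-up example:)

    rd /s /q "\\?\C:\some\very\deep\path"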


There are big files, and then there are Big files. Some files are just too big. At some point, it's easier to deal with smaller files.

You can generally get around Windows' 260-character absolute path limit by using UNC-style \\?\ paths. We had that problem at my last company, where the system would allow you to write files into a path but not delete them.

There isn't a modern reason for the limit either. NTFS supports longer path names, as does FAT if I remember correctly. Rather irritating.


> I mean the size column can be misleading sometimes while the File Manager figures itself out.

On Windows? How are you getting that column to not be permanently blank on directories?


Lots of small files (particularly in a single directory) is a known failure mode of many OS filesystems. I remember putting a million files into MogileFS and finding that filesystem operations basically did not complete in any reasonable length of time.
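
(This is easy to reproduce on Windows with no special tooling; the loop below drops 100,000 empty files into the current directory, after which enumeration crawls. Double the percent signs if you put it in a .bat file:)

    for /L %i in (1,1,100000) do @type nul > file%i.txt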
