You hit the nail on the head!

Like I've said elsewhere in these comments, I'm constantly surprised by which settings people change and which ones they don't, so differing opinions on exa's defaults are nothing new. exa shares my preference for seeing thousands separators and byte suffixes by default; ls, on the other hand, has no opinion, because back when file sizes were that small, it never really had to choose what the output should look like.

People have said that in scripts you'd want to output the file size in plain bytes, and they're right; otherwise the numbers won't sort correctly. If you're writing a script, though, you're going to be taking more care than if you just wanted to list some files. I've lost count of the number of times I've listed a directory, given up trying to count the digits of the file sizes, then re-run the command with `ls -h`, but I've never written a script without thinking about what the output should look like.
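For a script, something like this sketch gets raw byte counts with nothing to parse away (it assumes GNU stat; BSD/macOS spells it stat -f %z):

    #!/bin/sh
    # Print each regular file's size in plain bytes, safe for
    # sorting and arithmetic (GNU coreutils stat assumed).
    for f in *; do
        [ -f "$f" ] && printf '%s\t%s\n' "$(stat --format=%s "$f")" "$f"
    done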




> > For example, exa prints human-readable file sizes by default (when would you not want that?)

When it's being used as part of a script.


I almost never use the human-readable file sizes.

For this to get any traction, it is going to need to accept every argument ls does and play nicely. Most people who work with *nix systems work on many systems, and on some of them they don't control the installed software. If I have to remember that on *this* system I have to use these arguments because it has exa, versus that system with normal ls, that is going to be a deal-breaker for many.


Do you mean recursing into a directory and summing the file sizes of its contents? exa doesn't do this, sorry, but it's something I've been thinking of adding.

ls actually displays the 'file size' of the directory, which I've left out, as that number has never benefitted me, ever.
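In the meantime, a sketch of how du covers the recursive-sum use case (du -sb is a GNU extension):

    # Total size of a directory's contents, human-readable:
    du -sh some_directory/
    # The same total in plain bytes, for scripting (GNU du only):
    du -sb some_directory/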


Surely this won't actually affect you when you want to write a script. If you want the size of a file, ls (and most commands, I think) gives it to you in bytes, not KiB or KB or whatever...

I think what's really in discussion here is what the default should be. ls was originally conceived and implemented in a period when all the files you would be dealing with could easily be measured in bytes, and would almost never run over four digits. Many common files today (any MP3, most word-processing documents, almost every image except for thumbnails, etc.) run to seven or more digits, and at that length it's very easy to lose track of a digit or two, which means misjudging by an order of magnitude or two.

Human-readable units help quite a bit with this, and I think that makes them a sane default. As long as you can specify that you want to see bytes, and the sorting is done on the actual byte value, there's little lost, because the majority of the time it will help rather than hurt.
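For what it's worth, exa appears to have a flag for exactly this; if I'm reading its help right, something like:

    # Human-readable sizes by default:
    exa -l
    # Raw byte counts when you need them (per exa's --bytes
    # option; worth double-checking against its --help):
    exa -l --bytes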


This is cool, but it doesn't solve (in fact, exacerbates) my usual complaint with `ls` - I don't know what the arguments are. The example on the site is:

    exa -bghHliS
Argh! I want to be able to say `ls --size` to get the file sizes. I don't want to remember a million arguments.
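To be fair, exa does seem to take long options too; if its help text is anything like I remember, the site's example unpacks to roughly:

    # Long-option spelling of exa -bghHliS (flag names recalled
    # from exa's --help; double-check before relying on them):
    exa --binary --group --header --links --long --inode --blocks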

> I almost never use the human-readable file sizes.

Same here. I love the idea, and I keep trying to use human-readable sizes in every command that supports them. But it turns out they're much less scannable than numbers in a common unit. How long does it take you to see which of these files is biggest:

  13k  potatoes.txt
   7M  tomatoes.txt
  128  recipe_ideas.txt
   1G  hot_sauce_formula.txt
How about now:

       13093  potatoes.txt
     7182642  tomatoes.txt
         128  recipe_ideas.txt
  1023984672  hot_sauce_formula.txt
Human-readable numbers also break all sorts of useful things like sorting (unless you have some fancypants sort which understands them), calculating totals with awk, sedding them into an expression to evaluate with $(()), etc.
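(For the record, GNU sort's -h flag is exactly that fancypants sort, though you can't count on having it everywhere, and totals really do want raw bytes. Both sketches assume GNU coreutils:)

    # sort -h understands human-readable suffixes like K, M, G:
    ls -lh | sort -k5 -h
    # Totals are only straightforward with plain byte counts:
    ls -l | awk 'NR > 1 { total += $5 } END { print total }'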

"exa prints human-readable file sizes by default (when would you not want that?)"

When doing automated text-processing and wanting to easily do precise calculations, without having to deal with different units.


That only works when the files you want to compare are in the same folder and next to each other. If you just want to know how big a given file is, you have to count the digits. And IIRC ls doesn't even print commas like that to make it human-readable. It's just a blob of digits with more precision than is reasonably necessary for the vast majority of uses.
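(Apparently GNU ls can be talked into the commas, if I remember the trick right: a leading quote in the block size turns on thousands grouping. GNU coreutils only:)

    # Sizes like 1,023,984,672 in the size column:
    ls -l --block-size="'1"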

The main problem I can think of is that I'm so used to typing cd and then ls... But OTOH it's as simple to fix as alias ls=exa
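A slightly safer version of that alias, as a sketch, only fires when exa is actually installed:

    # Fall back to plain ls on machines without exa:
    if command -v exa >/dev/null 2>&1; then
        alias ls=exa
    fi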

EDIT:

"exa prints human-readable file sizes by default (when would you not want that?)"

I actually use bytes a lot for certain progress calculations.
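A toy sketch of what I mean (the numbers are made up); a human-readable size like "1.2G" would break the arithmetic:

    copied=314572800     # hypothetical bytes copied so far
    total=1073741824     # hypothetical total transfer size
    echo "$(( copied * 100 / total ))% done"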

Also I get an error "exa: error while loading shared libraries: libhttp_parser.so.2.1: cannot open shared object file: No such file or directory" (Ubuntu 17.04)


I'm curious why this is one of your most-used commands. If you want to know the sizes of all files matching 'foo', this is a horribly inefficient way of going about it.

I came here to make exactly this comment but you read the post much sooner than me.

At least I learned that ls had a -h option. I don't know why I should use it on files but I routinely do df -h to check the size of file systems. That's a place where it is useful.


I mean, the size column can be misleading sometimes while the file manager figures itself out; Properties seems to work a little better. It's usually due to larger file sizes. Unlike the OP of the article, it doesn't bother me enough. I've seen it happen across OSes and haven't had issues with it; if I need to know a file size, I have other means, like the command line. Of course, the command line is sometimes another culprit, not showing a directory size either, only showing 4 KiB for every directory, at least on Ubuntu.

Seems like it comes on by default in Ubuntu/Debian-based systems for bash. Also, if you want to see the file sizes in MB, KB, etc., then use the 'ls -lh' command.
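For example (the file and output line are illustrative, not from a real machine):

    $ ls -lh notes.txt
    -rw-r--r-- 1 user user 1.2M Jun  3 10:14 notes.txt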

Because I'm counting file size and not line numbers. That simple.

Good question. On OS X, it seems the size of a directory is typically a multiple of 102 bytes (often 102 or 306). I ran my script again with "find / -type f" to select only regular files, and sizes ending in 2 were just slightly more common than those ending in 0. Odd sizes are still less common than even ones, though, so the overall conclusion is unchanged regardless of whether you consider checksumming of directory entries to be a valid application.
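Not the exact script, but a rough sketch of the same tally (BSD stat as on OS X; GNU spells it stat -c %s):

    # Count how often each final digit appears among regular-file sizes:
    find / -type f -print0 2>/dev/null |
      xargs -0 stat -f '%z' 2>/dev/null |
      awk '{ d[$0 % 10]++ } END { for (i = 0; i < 10; i++) print i, d[i] }'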

In fact, this solution uses neither the number of files nor their filenames to encode additional information - it uses the sizes of those files.
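As a toy illustration of the idea (the filenames and payload here are hypothetical, and truncate is GNU coreutils):

    # Encode one payload byte per file, as that file's size:
    payload="hi"
    i=0
    for byte in $(printf '%s' "$payload" | od -An -tu1); do
        truncate -s "$byte" "chunk_$i"
        i=$((i + 1))
    done
    # Decoding just reads the sizes back:
    stat --format=%s chunk_*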

CephFS is the first counterexample that comes to mind. Recursive sizes and file counts are available through extended attributes (e.g. getfattr -d -m ceph.dir.* /path/to/dir) and can be made the default in e.g. ls -l output.
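For example (attribute names are from the CephFS documentation; the path and values here are illustrative):

    $ getfattr -d -m 'ceph.dir.*' /mnt/cephfs/projects
    # file: mnt/cephfs/projects
    ceph.dir.rbytes="104857600"
    ceph.dir.rfiles="1342"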

> I mean the size column can be misleading sometimes while the File Manager figures itself out.

On Windows? How are you getting that column to not be permanently blank on directories?
