Wow, thanks for the script. Surprising in its simplicity. I would have thought this use-case was popular enough to warrant specialized tools etc. Especially in the scientific community where they transfer large files.
The original Unix systems were really memory-constrained, and the standard utilities for processing text files often couldn't handle "long" lines (usually over 2048 bytes, sometimes less). So the `cut` utility divided an input file up into separate files each containing a section of each line. To put the file back together, the `paste` utility would merge the sections of each line from those files back together.
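For anyone curious, the workflow looked roughly like this (just a sketch; wide.txt is a made-up file name, cut writes to stdout so you redirect each slice to its own file, and paste's '\0' delimiter joins the pieces back with nothing in between):

    # slice a file of long lines into two narrower files
    cut -c1-1000  wide.txt > part1.txt
    cut -c1001-   wide.txt > part2.txt

    # ... run the line-length-limited tools on the narrow parts ...

    # stitch each line back together with no separator
    paste -d '\0' part1.txt part2.txt > rebuilt.txt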
Yep, and the shar command, which created a shell wrapper around sections of uuencoded data, so you could email a file in segments and conveniently recompose and run it to get the file back, without needing shar at the other end. Good times.
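If anyone wants to see the trick, here's a rough hand-rolled sketch of that kind of self-extracting archive (assumes uuencode/uudecode from sharutils are installed and a file called hello.txt exists; the real shar handled multiple files, splitting, and integrity checks):

    # wrap a uuencoded copy of hello.txt in a plain /bin/sh script
    { printf '#!/bin/sh\n'
      printf 'uudecode <<'\''END_OF_DATA'\''\n'
      uuencode hello.txt hello.txt
      printf 'END_OF_DATA\n'
    } > hello.shar

    # the recipient only needs sh and uudecode to get the file back
    sh hello.shar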
It could be useful for changes to CSV files or text files with table-like formatting, where you want to commit the change in just one specific column of a row but not the others. I could also see it for binary files, which aren't structured into lines.
Yeah, it seems like the kind of command that you only need because of a quirk in how the underlying system happens to work. Not something that should pollute the logic of the command, imo. I would expect a copy-on-write filesystem to be able to do this automatically for free.
Totally. C-x C-q on a dired buffer, and you can edit most of what's essentially ls output as if it were a text file. C-c C-c to commit changes. I find myself using it very frequently for file management.
Yes, I use q all the time for slicing and dicing delimited files. The only problem I have with it is that the name can make it a little harder to find if you don't remember the repo.
Since q will read stdin and write CSV to stdout, you can chain several queries on the command line, or use it in series with other commands such as cat, grep, sed, etc.
Highly recommended if you like SQL and deal with delimited files.
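A small example of that kind of chaining (a sketch; the file name and column names are made up, but the flags are q's usual ones: -d for the delimiter, -H to take column names from the header row, -O to emit a header, and '-' as the table name for stdin):

    # filter with grep first, then aggregate the piped CSV with q
    grep -v '^#' requests.csv \
      | q -d ',' -H "SELECT status, COUNT(*) AS n FROM - GROUP BY status ORDER BY n DESC"

    # feed one query's CSV output into another q invocation
    q -d ',' -H -O "SELECT user, bytes FROM requests.csv WHERE bytes > 100000" \
      | q -d ',' -H "SELECT user, SUM(bytes) FROM - GROUP BY user"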