
I haven't used it in a while, but it was very handy for copying fixed-record-length files into linefeed-delimited files, and vice-versa.
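One way to do that kind of conversion on a typical Unix box (not necessarily what the parent used) is dd with conv=block/unblock; a rough sketch, with an assumed 80-byte record size:

    # fixed-length 80-byte records -> newline-terminated lines
    dd if=records.dat of=lines.txt conv=unblock cbs=80
    # and back: pad or truncate each line to an 80-byte record
    dd if=lines.txt of=records.dat conv=block cbs=80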



To copy and paste files I use cat file | gzip -9c | base64 -w $COLUMNS. I've even had to use rdiff/zstd with a dictionary.
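On the receiving end the rough inverse is something like this (assuming GNU coreutils on both sides):

    # paste the base64 text into a file, then decode and decompress
    base64 -d < pasted.txt | gunzip > file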

Fun indeed.


Wow, thanks for the script. Surprising in its simplicity. I would have thought this use case was popular enough to warrant specialized tools, especially in the scientific community, where people transfer large files.

Sometimes you just want to quickly edit (not analyze) a large file.

The original Unix systems were really memory-constrained, and the standard utilities for processing text files often couldn't handle "long" lines (usually over 2048 bytes, sometimes less). So the `cut` utility divided an input file up into separate files, each containing a section of each line. To put the file back together, the `paste` utility would merge the sections of each line from those files back together.
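A minimal illustration of that split-and-rejoin workflow (the file names are made up):

    cut -f1  data.txt > col1      # first tab-delimited field of every line
    cut -f2- data.txt > col2      # the remaining fields
    paste col1 col2 > rejoined    # stitch the sections back together, tab-separated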

It also works on non-text files, similar to `strings`! I don't think it works quite as well, but it can still be useful for quick checks.

Yep, and the shar command, which created a shell-script wrapper around sections of uuencoded data, so you could email a file in segments and conveniently recompose and run it to get the file back, without needing shar at the other end. Good times.
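From memory, the workflow was roughly this (exact flags varied from system to system):

    shar bigfile > bigfile.shar        # wrap the file in a self-extracting shell script
    split -b 60k bigfile.shar part.    # cut it into mail-sized chunks, mail each one
    # at the other end, after stripping the mail headers:
    cat part.* | sh                    # reassemble and run; the script recreates bigfile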

It could be useful for changes in CSV files or text files with table-like formatting, where you want to commit the change in just one specific column of a row but not the others. I could also see it for binary files which aren't structured into lines.

Yeah, it seems like the kind of command that you only need because of a quirk in how the underlying system happens to work. Not something that should pollute the logic of the command, imo. I would expect a copy-on-write filesystem to be able to do this automatically for free.

This is a pretty cool utility. I never really did like just creating a blank file; I'd rather use an explicit tool.

I'll update the article.


that's why I wrote: be aware of the downside...

It's a mega convenient function if you want to slurp in a few files.


I can `tail -f` a file on Windows. That's enough use case for me :D

You don't have to stop to copy from a log-structured file

It most likely converted incoming files whenever necessary.

Totally. C-x C-q on a dired buffer, and you can edit most of what's essentially an ls output as if it were a text file. C-c C-c to commit changes. I find myself using it very frequently for file management.

That's good, but a single file could break on power loss. I use SQLite. It's quite easy to use, though not a single line.
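For anyone who hasn't tried it, a minimal sketch with the sqlite3 command-line shell (the file and table names are made up):

    sqlite3 state.db "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT);"
    sqlite3 state.db "INSERT OR REPLACE INTO kv VALUES ('last_run', datetime('now'));"
    sqlite3 state.db "SELECT v FROM kv WHERE k = 'last_run';"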

On a smaller scale, the "q" utility has been a boon for me in the handling of ad-hoc delimited data files.

http://harelba.github.io/q/

Really one of the best things I've discovered in the past 5 years. Saves so much work compared to doing stuff with sed, awk, and the like.
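For the curious, typical usage looks roughly like this (the column and file names are made up):

    # run SQL directly against a CSV file
    q -d , -H "SELECT name, COUNT(*) FROM ./orders.csv GROUP BY name"
    #   -d ,   use comma as the field delimiter
    #   -H     treat the first row as a header so columns can be referenced by name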


Thanks for that - I haven't tried it there...

Yup - single file operation was a design goal, but I bet there's a better way to get it working than the hack I used.


Yes, I use q all the time for slicing and dicing delimited files. The only problem I have with it is that the name can make it a little harder to find if you don't remember the repo.

Since q will read stdin and write CSV to stdout, you can chain several queries on the command line, or use it in series with other commands such as cat, grep, sed, etc.
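Something like this, for example (names made up; the "-" stdin table syntax is from memory):

    # filter out comment lines with grep, then query the rest with q
    grep -v '^#' raw.csv | q -d , -H "SELECT user, total FROM - WHERE total > 100"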

Highly recommended if you like SQL and deal with delimited files.


It works just fine. And iterating through files is not rocket science, anyway; it's not hard to follow what is going on.