Thank you for highlighting this part of the long paper. That completely wrong assertion (that the input to sort is infile, nothing more, nothing less) casts the rest of the content in doubt (at least it does for me).
When the input is large enough, GNU `sort` writes the input in chunks to multiple temporary files, sorts the individual files, and then merges the result for output.
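That chunk-and-merge technique is external merge sort, and it can be sketched in a few lines of Python. This is a toy illustration, not GNU sort's actual implementation; the tiny chunk_size is made up, just to force multiple temporary files, and the temp files are not cleaned up here:

```python
import heapq
import tempfile

def write_run(sorted_lines):
    # One sorted "run" written to its own temporary file,
    # like the chunk files GNU sort drops in $TMPDIR.
    run = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
    run.writelines(line + "\n" for line in sorted_lines)
    run.close()
    return run

def external_sort(lines, chunk_size=3):
    """Sort fixed-size chunks into temp files, then merge the runs."""
    runs = []
    chunk = []
    for line in lines:
        chunk.append(line)
        if len(chunk) == chunk_size:
            runs.append(write_run(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(write_run(sorted(chunk)))
    # heapq.merge lazily merges the already-sorted runs,
    # so the full input never has to fit in memory at once.
    merged = heapq.merge(*(open(r.name) for r in runs))
    return [line.rstrip("\n") for line in merged]

print(external_sort(["pear", "fig", "kiwi", "date", "plum", "apple", "lime"]))
```

The real tool exposes the same knobs: GNU sort's -S sets the in-memory buffer size and -T picks the temporary directory.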
>Or reverse sort till the end of the file with !Gsort -r^M etc.
Right. Or flip the case of a section of text, even of an arbitrary block of lines demarcated by marks set with vi(m) commands like ma and mb, by filtering it through tr (e.g. :'a,'b!tr a-zA-Z A-Za-z).
Or filter another section through sed or awk or even through a Unix pipeline, to do whatever you want.
And with just a little practice, you can become fluent and fast with all of this, and it improves your productivity a lot, apart from being fun and creative.
"The unix command line ('cat foo.txt | sort | uniq -c | sort -rn') is wonderfully concise and powerful". And yet, contrary to the author's assertion, it can be made even more concise without sacrificing the power: drop the useless cat and let sort read the file itself ('sort foo.txt | uniq -c | sort -rn').
grep + sort + awk = unbelievable single person task management effectiveness. And it was right under my nose all the while.
I've dumbed my version down even further: I just use numbers. Every line starting with 1 is highest priority, and so on.
Another important piece I needed was a pointer to where I last left off. I just use double underscores "__". Next time I open the file, I search for the two underscores and start right off.
I like these command line tools, but I think they can cripple someone actually learning a programming language. For example, here is a short program that does your last example:
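(The program itself didn't survive in this thread; a minimal Python sketch of that last pipeline, counting duplicate lines and printing them most-frequent first, might look like this. The sample foo.txt contents are made up for the demo:)

```python
from collections import Counter

# Demo input, standing in for foo.txt
with open("foo.txt", "w") as f:
    f.write("apple\nbanana\napple\ncherry\napple\nbanana\n")

# Equivalent of: sort foo.txt | uniq -c | sort -rn
with open("foo.txt") as f:
    counts = Counter(line.rstrip("\n") for line in f)

# most_common() already yields (line, count) pairs sorted
# by descending count, so no explicit sort step is needed
for line, n in counts.most_common():
    print(f"{n:7d} {line}")
```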
I guess the author had to add the sort/reduce to "prove" his point...