What do you mean by "unix -r custom"? It's not like "-r" is some kind of standard, and many commands or programs have other ideas.
For example, GNU cp uses -r or -R for recursive copy, but OpenBSD cp only uses -R. ls uses -R only (-r is listing in reverse order). scp uses -r only (-R is remote copy). rm allows both -r and -R.
But then, commands like mv or find don't take anything like -r or -R; they implicitly operate on the whole directory structure. Bash has ** (with shopt -s globstar) for globbing into subdirectories. Make doesn't support any concept of recursive subdirectory traversal at all.
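For example, once globstar is on, ** crosses directory levels (the *.md pattern here is just illustrative):

$ shopt -s globstar
$ ls **/*.md     # .md files in this directory and any subdirectory, at any depth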
The one that I did only a few months ago was something like
$ cp -r path/to/some/directory path/to/very/important/directory
$ (run some commands to verify copy did what I wanted)
$ rm -r path/to/some/directory path/to/very/important/directory
Of course, all I had meant to do was delete `path/to/some/directory`, but I had just pressed 'up' in my history and switched `cp` to `rm`. I hit Ctrl-C within an instant, but my FS was already hosed...
I've also found the Ctrl-R shortcut quite helpful in Bash. Most of my commonly used directories and commands are near the surface this way; it works almost everywhere without installing anything, and I'm usually in a directory where those commands would make sense.
Although, it doesn't really help if you need to jump to an obscure part of the filesystem.
Yes, but when running from within RStudio, for example, or when calling from other scripts, the two aren't the same. When calling from other scripts you can of course chdir() first, but my point is that you can't sensibly rely, inside your script, on the working directory and the script's path being the same.
No, that gets you the working directory, which isn't always the same (like, when running from RStudio, getwd() returns the RStudio installation path IIRC).
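The shell-side analogue of that distinction, for what it's worth: the working directory and the script's own location are independent, and a common bash-specific idiom for the latter is

$ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"   # directory the current script lives in, not where it was run from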
I was surprised to see `$()` missing from this (otherwise quite extensive) list. There are a few commands listed which employ it, but it absolutely deserves its own entry.
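For the record, it substitutes a command's output into another command line, e.g. (names invented):

$ mkdir "backup-$(date +%F)"    # the output of date +%F becomes part of the argument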
That and `readlink -f` to get the absolute path of a file. (It doesn't work on macOS; the only substitute I've found is to install `greadlink`.)
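A typical use, resolving any symlinks along the way (swap in greadlink on macOS):

$ cd "$(dirname "$(readlink -f "$0")")"    # from inside a script: cd to the directory the script actually lives in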
And `cp -a`, which is like `cp -r` but preserves ownership, permissions, and timestamps - meaning that you can prepend `sudo` without the hassle of changing the ownership back afterwards.
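For instance (paths made up):

$ sudo cp -a /etc/nginx /etc/nginx.bak    # the backup keeps root's ownership, modes, and timestamps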
I never see `lndir` on these lists either. It recreates a directory tree, but in the target every non-directory file is a symlink back to the corresponding file in the source, while the directory structure itself is made of real directories. Meaning that when you `cd` into it, you are actually landing in a copied structure of the source directory instead of the source directory itself, as would be the case if you just symlinked the source folder.
Once inside, any file you want to modify without affecting the original just needs its symlink turned back into a real file, which you can do with `sed -i '' $symlink` (an in-place edit writes out a new regular file, replacing the symlink). There you have it: effectively a copy of your original directory, with only the modified files actually taking up space (loosely speaking).
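Roughly, the workflow looks like this (paths made up; lndir populates the current directory when given only the source):

$ mkdir shadow && cd shadow
$ lndir /path/to/source        # real directories, but every file is a symlink into /path/to/source
$ sed -i '' some/file.c        # rewriting a file in place swaps its symlink for a private copy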
Except when you pass `-r` and give a directory in place of the filename. But maybe it should have been `-r <directory>` and still no positional file parameter. That ship sailed long ago, though.
cp most certainly does copy directory structures - just not filtered ones, where only certain entries are arbitrarily selected from the source tree and only those are replicated in the destination tree.
The issue in the question is that the person has expanded, into the cp command line, a bunch of full paths, effectively like "cp a/b/c/file1 a/b/d/file2 .... dest" and wants those relative paths to be re-created under dest as dest/a/b/c/file1 and so on. Indeed, cp does not do that; it simply puts the specified objects file1 file2 ... into dest.
An option to create each object's relative path under dest would be useful, but it would be a pretty awful default behavior.
GNU cp has this option:
`--parents`
    Form the name of each destination file by appending to the target
    directory a slash and the specified name of the source file. The
    last argument given to `cp` must be the name of an existing
    directory. For example, the command:

        cp --parents a/b/c existing_dir

    copies the file `a/b/c` to `existing_dir/a/b/c`, creating any
    missing intermediate directories.
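Tying that back to the example above (the a/b/... paths are the hypothetical ones from the question):

$ mkdir dest
$ cp --parents a/b/c/file1 a/b/d/file2 dest
$ find dest -type f
dest/a/b/c/file1
dest/a/b/d/file2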
GNU Coreutils is in active development, unlike the Windows command line, which is basically abandonware (as, to be even-handed, is the Unix(tm) command line).
Being able to type something like "z ab g f" to reach a fairly deeply nested directory is almost akin to magic. I absolutely hate cd'ing everywhere, and often I'm cd'ing between the same couple of directories for a number of projects, so I feel like it helps retain my sanity. I've also written scripts to take advantage of it, such as a cp clone that doesn't require an immediate target. So I can cp a file (or number of files, or a directory), z-jump to a different directory and paste it there instead of laboriously typing out all the directory paths. I love it.
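A minimal sketch of that copy/paste idea (the function and file names here are invented, not the commenter's actual script):

ycp() {                        # remember absolute paths of the things to "copy"
    realpath "$@" > /tmp/.ycp_paths
}
ypaste() {                     # later, after a z-jump elsewhere, paste them here
    while IFS= read -r src; do
        cp -r "$src" .
    done < /tmp/.ycp_paths
}

Usage: ycp a file (or several, or a directory), z to some other directory, then ypaste.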
Ah, you fell into a trap already! Using foo/* like that is subtly unreliable, because you're letting the shell expand it. It will, for example, miss files that start with a dot ("."). But not necessarily; it depends on how the shell is configured!
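Concretely, in bash (file names invented):

$ ls -A foo
.hidden  visible
$ echo foo/*
foo/visible
$ shopt -s dotglob
$ echo foo/*
foo/.hidden foo/visible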
As for the "trailing /" syntax, I know what you mean, but once you've internalized it, it's easy.
Without a trailing slash: copy the directory itself (and its contents). With a trailing slash: copy what's in the directory. This makes sense, because with the / you're saying you want to go "into the directory" and copy from there.
And the latter includes files starting with . and all of that; you don't have to worry about it, or about what the shell would do - it's simply everything in the directory.
So what you want for your example is:
rsync -a foo/ bar
rsync -r foo/* bar would do the same (wrong) thing as cp -r foo/* bar, for the same reasons; the syntax is the same in that regard.
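Spelled out:

$ rsync -a foo  bar    # makes bar/foo/... (the directory itself)
$ rsync -a foo/ bar    # makes bar/<contents of foo>, dotfiles included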
Windows even has mkdir. I'm not sure if this particular line would work though (-p is the default behavior on Windows; I don't think it would understand the switch).