> An easy way to cover that is to have the script create a file, which you check at the start of the script and if present you exit.
This does sound like a good solution on paper, but "checking at the start" and "creating a file" are two separate steps (i.e. not atomic) and will cause trouble eventually if your system has a tendency to run the script twice. A better solution is to use the `flock` command before creating the flag file. For example:
```bash
# Take an exclusive lock on fd 999; if another instance holds it, exit.
exec 999>/var/lock/my.lock
flock -n 999 || exit 1

if [ ! -f flag_file ]; then
    echo "script not run before, running"
    touch flag_file
else
    echo "already ran, exiting"
    exit 1
fi

# do stuff here
```
This looks like a cool technique, but what happens if the process gets stuck in a loop somewhere (maybe in third-party code, or waiting on I/O) where you can't put calls to the check method in? Maybe it'd be a good idea to check to see if the flag is already set in the handler, and immediately exit in that case. That way, hitting ctrl-C once would cause a graceful shutdown, while hitting it twice (which is a pretty typical reaction if the first doesn't cause a quick exit) would force-quit in emergencies.
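That two-stage behaviour can be sketched in bash with a `trap` handler; this is a hypothetical illustration (the self-sent `kill -INT` stands in for the user pressing Ctrl-C):

```bash
#!/usr/bin/env bash
# Sketch: the first SIGINT sets a flag so the work loop can finish its
# current step; a second SIGINT while the flag is set force-quits.
shutdown_requested=0

on_int() {
  if [ "$shutdown_requested" -eq 1 ]; then
    echo "second interrupt: force-quitting" >&2
    exit 130
  fi
  shutdown_requested=1
  echo "interrupt received: stopping after the current step" >&2
}
trap on_int INT

for step in 1 2 3; do
  echo "working on step $step"
  if [ "$step" -eq 2 ]; then
    kill -INT $$    # simulate the user pressing Ctrl-C once
  fi
  if [ "$shutdown_requested" -eq 1 ]; then
    break
  fi
done
echo "graceful shutdown complete"
```

The handler never does the cleanup itself; it only sets the flag, so the main loop exits at a safe point, while a second interrupt bypasses the graceful path entirely.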
Do you understand how the flag works? You can still condition on exit statuses; only an 'unhandled' non-zero exit status causes script termination.
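To illustrate that point (a minimal sketch, not from the thread): under `set -e`, a non-zero status that is tested by `if`, `while`, `&&`, or `||` counts as handled and does not terminate the script.

```bash
#!/usr/bin/env bash
set -e

# The non-zero status of `command -v` is consumed by the `if` test, so
# set -e does not abort; we simply take the other branch.
# ("definitely-no-such-tool" is a placeholder name.)
if command -v definitely-no-such-tool >/dev/null 2>&1; then
  echo "tool found"
else
  echo "tool missing: taking the fallback branch"
fi

# By contrast, a bare failing command with no handler would kill the
# script at this point:
#   definitely-no-such-tool    # set -e would exit here with status 127

echo "still running"
```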
I've read it, don't worry. So basically we create a problem with the use of the -e flag and then solve it with traps... And what if our logic depends on exit statuses, for example when we check whether some utility or file is present on the system? I don't want the script to exit, I want it to take another logic branch!
P.S. No, temporarily disabling the option is not a solution, it's another workaround for the problem created out of nothing.
Instead of handling non-zero exit statuses in a correct way, the article suggests interrupting the script right in the middle, probably leaving temporary files and processes hanging around that can't be cleaned up if something goes wrong.
The same BS goes through the entire article.
Has the author actually written anything bigger than echo "Hello world!" in Bash?
Also you can do it in a non-destructive, non-random way. Take `apt` as an example.
The easter egg is running it with an undocumented flag. The "error case" is the same as mistyping a command. Sure, you could argue the return code is 0 when it should be != 0, but I'd argue that a) if you're using the wrong flags it's your fault, and b) if you'd hit --help by chance it would also return 0.
> Only if the readability check expression is truthful/succeeds, we want to source the file, so we use the && operator.
Almost. The real way to do this is to check for the non-existence of the file as the “success” case and do the action via a || on failure.
Otherwise if you run in a strict mode with error on any unhandled non-zero command (set -e), you’ll exit the script with failure when the profile doesn’t exist:
```bash
[[ ! -r ~/.profile ]] || . ~/.profile
```
Note that an if statement does not have this issue, as its condition doesn't trigger the exit-on-error handling. Only the && approach does.
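A minimal repro of the difference (the file path here is just an example):

```bash
#!/usr/bin/env bash
set -e

# && form: when the file is missing, the test fails. set -e itself does
# not fire (the failure is on the left of &&), but the compound's status
# is 1, so a script or function whose last line this is reports failure:
#   [[ -r /no/such/profile ]] && . /no/such/profile

# || form: the negated test succeeds when the file is absent, the
# right-hand side never runs, and the compound's status is 0.
[[ ! -r /no/such/profile ]] || . /no/such/profile

echo "reached the end, exit status will be 0"
```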
> It would be pretty inconvenient if the shell exited any time any program returned non-zero, otherwise if statements and loops would be impossible.
In another life I worked as a Jenkins basher, and if I remember correctly I had this problem all the time with some Groovy DSL aborting on any non-zero shell command exit. It was so annoying.
> For instance, when copying a file to Drive, the execution of the command will take as long as the upload process itself. Once finished, an exit status of 0 will indicate precisely that the upload was successful and the file is certainly on Drive.
I set failglob: `shopt -s failglob`. Makes the whole command fail if a glob has no matches. That, combined with `set -e`, which aborts the script in the event of any command failing, makes me feel somewhat safe.
Indeed I add those two lines (`set -e` and `shopt -s failglob`) to every bash script I write.
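For anyone unfamiliar with `failglob`, here's a quick sketch of what it changes (the directory and pattern are illustrative):

```bash
#!/usr/bin/env bash
cd "$(mktemp -d)"   # empty directory, so no pattern can match

# Default behaviour: an unmatched glob is passed through as the literal
# string, so the command runs with the bogus argument '*.log'.
echo "default glob result: $(echo *.log)"

# With failglob: the unmatched pattern is an expansion error, the command
# never runs, and the non-zero status surfaces the bug immediately
# (fatally, if set -e is also in effect).
if ! ( shopt -s failglob; echo *.log ) 2>/dev/null; then
  echo "failglob: unmatched glob is a hard error"
fi
```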