
> An easy way to cover that is to have the script create a file, which you check at the start of the script and if present you exit.

This does sound like a good solution on paper, but "checking at the start" and "creating a file" are two separate steps (i.e. not atomic), which will cause trouble eventually if your system tends to run the script twice. A better solution is to take a lock with the `flock` command before creating the flag file. For example:

  exec 999>/var/lock/my.lock      # keep the lock file open on fd 999 for the script's lifetime
  flock -n 999 || exit 1          # another instance already holds the lock: bail out
  if [ ! -f flag_file ]; then
    echo "script has not run before, running"
    touch flag_file
  else
    echo "already ran, exiting"
    exit 1
  fi
  # do stuff here



This looks like a cool technique, but what happens if the process gets stuck in a loop somewhere (maybe in third-party code, or waiting on I/O) where you can't put calls to the check method in? Maybe it'd be a good idea to check to see if the flag is already set in the handler, and immediately exit in that case. That way, hitting ctrl-C once would cause a graceful shutdown, while hitting it twice (which is a pretty typical reaction if the first doesn't cause a quick exit) would force-quit in emergencies.
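
A rough shell sketch of that idea (handler and work-step names are hypothetical, untested; note that bash only runs the trap once the currently running foreground command returns):

    #!/bin/bash
    stop_requested=0

    on_sigint() {
      if [ "$stop_requested" -eq 1 ]; then
        echo "second Ctrl-C: forcing exit" >&2
        exit 130
      fi
      stop_requested=1
      echo "Ctrl-C received: stopping after the current step" >&2
    }
    trap on_sigint INT

    for step in 1 2 3; do
      do_one_step "$step"                     # hypothetical unit of work
      [ "$stop_requested" -eq 1 ] && break    # graceful shutdown point
    done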

Do you understand how the flag works? You can still condition on exit status. It's only an 'unhandled' non-zero exit status that causes script termination.
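
For example (a minimal sketch; the utility name is just a stand-in), a failing check inside an `if` simply selects the other branch even with errexit on:

    set -e
    if command -v some_utility >/dev/null 2>&1; then
        echo "some_utility is installed, using it"
    else
        echo "some_utility is missing, taking the fallback branch"   # errexit does not fire here
    fi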

I've read it, don't worry. So basically we create a problem with the use of the -e flag and then solve it with traps... And what if our logic depends on exit statuses, for example when we check whether some utility or file is present on the system? I don't want the script to exit, I want it to take another logic branch!

P.S. No, temporarily disabling the option is not a solution, it's another workaround for a problem created out of nothing.


What nonsense.

Instead of handling non-zero exit statuses in a correct way, the article suggests interrupting the script right in the middle, probably with temporary files and processes left hanging around that can't be cleaned up if something goes wrong.

The same BS goes through the entire article.

Has the author actually written anything bigger than echo "Hello world!" in Bash?


> "I have a backup job that is triggered by a timer. I want to know when that job fails so I can investigate and fix it."

This is really more in the realm of a shell script.

You could do this verbosely:

  #!/bin/sh

  /path/to/my/backup_job

  if [ $? -ne 0 ]; then
    /path/to/my/failure_alert
  fi

...or, you could do this tersely:

  #!/bin/sh

  /path/to/my/backup_job || /path/to/my/failure_alert

The wrapper script would go into your timer unit. I like dash.

Is that behaviour not the point here? The script tries the first command, then the second if that fails, etc.

> It sounds like exit_signals() is being called too early

Or zap_pid_ns too late, yeah.


Also you can do it in a non-destructive, non-random way. Take `apt` as an example.

The easter egg is running it with an undocumented flag. The "error case" is the same as mistyping a command. Sure, you could argue the return code is 0 when it should be != 0, but I'd argue a) if you're using the wrong flags it's your fault, and b) if you hit --help by chance it would also return 0.


The worst offender in this regard is bash’s `set` command.

For example, `set -e` enables the `e` option (exit script immediately upon seeing a nonzero exit code). Guess how to disable it? Yup, `set +e`
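
A tiny illustration of that toggle (the failing command here is hypothetical):

    set -e            # enable errexit: an unhandled non-zero status exits the script
    # ... commands that should abort the script on failure ...
    set +e            # disable it again (yes, '+' means off)
    flaky_command     # hypothetical; a failure here no longer aborts the script
    echo "flaky_command exited with $?"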


To avoid any bugs, you can add an `exit(0)` at the beginning of your program.

There might be no lines which contain this flag.

Again, untested:

        #! /bin/sh

        timeout=5
        fifo=/tmp/discover-vnc.fifo
        rm -f "$fifo"
        mkfifo "$fifo"

        # browse for VNC (_rfb._tcp) services, writing the results into the fifo
        dns-sd -B _rfb._tcp >"$fifo" &
        pid1=$!

        # consume the browse output line by line
        while read -r line; do
          case $line in
            *_rfb._tcp.*)
              # do something with the discovered service here,
              # then break out of the loop when done
              ;;
          esac
        done <"$fifo" &
        pid2=$!

        # watchdog: give up after $timeout seconds
        sleep "$timeout" && kill $pid1 $pid2 &
        pid3=$!

        wait $pid2
        kill $pid1 $pid3 2>/dev/null

It's also rather ugly to create a named pipe for this purpose.

Because it creates a file in /tmp? Who cares? The alternative is bash voodoo magic, the same magic that gave us shellshock.


Yes, it's enough to check the exit code. The parent poster's being overly paranoid.

The only thing that checking the exit code won't catch is bugs in mktemp/bash, or bad memory / solar flare bitflips.
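
For reference, checking the exit code of mktemp can be as simple as this sketch:

    tmpfile=$(mktemp) || { echo "mktemp failed" >&2; exit 1; }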


> We can simplify this using control operators:

    [ -r ~/.profile ] && . ~/.profile
> Only if the readability check expression is truthful/succeeds, we want to source the file, so we use the && operator.

Almost. The real way to do this is to check for the non-existence of the file as the “success” case and do the action via a || on failure.

Otherwise, if you run in a strict mode that errors on any unhandled non-zero command (set -e), you'll exit the script with a failure when the profile doesn't exist:

    [[ ! -r ~/.profile ]] || . ~/.profile

Note that an if expression does not have this issue, as it doesn't trigger the error-on-exit handling. Only the && approach does.
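
A minimal sketch of that `if` form:

    if [ -r ~/.profile ]; then
        . ~/.profile
    fi
    # a failed test just skips the body; the if statement itself returns 0,
    # so errexit never fires and the script's exit status is unaffected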

> It would be pretty inconvenient if the shell exited any time any program returned non-zero, otherwise if statements and loops would be impossible.

In another life I worked as a Jenkins basher, and if I remember correctly I had this problem all the time with some Groovy DSL aborting on any non-zero shell command exit. It was so annoying.


> For instance, when copying a file to Drive, the execution of the command will take as long as the upload process itself. Once finished, an exit status of 0 will indicate precisely that the upload was successful and the file is certainly on Drive.

That is awesome. Very clever!


Seriously.

"As a C developer, I never check exit codes of child processes. We can just enforce it by ensuring child processes don't have bugs"


> So there is no way to abort a bash script if something like <(sort nonexistent) fails.

The process ID of the last executed background command in Bash is available as $!.

  cat <(sort nonexistent)
  wait $! || echo fail
gives

  sort: cannot read: nonexistent: No such file or directory
  fail

Exit traps for cleanup are a good idea in any case. Scripts can be killed by signals and whatnot as well.
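
A minimal sketch of such a cleanup trap (the temp directory and workload are placeholders):

    #!/bin/bash
    tmpdir=$(mktemp -d) || exit 1
    cleanup() { rm -rf "$tmpdir"; }
    trap cleanup EXIT
    trap 'exit 130' INT TERM    # route fatal signals through the EXIT trap
    # ... do the real work in "$tmpdir" ...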

I'm not sure what scenario you're imagining with your other concern. This does what you would expect:

    set -o errexit
    if [ -f somefile ]
    then
        echo "File exists."
    else
        echo "File does not exist."
    fi

I set failglob: `shopt -s failglob`. It makes the whole command fail if there are no matches. That, combined with `set -e` (which aborts the script in the event of any command failing), makes me feel somewhat safe.
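
Roughly what that buys you (the processing command is hypothetical):

    shopt -s failglob
    # With no *.csv files present, failglob turns the unmatched glob into an
    # expansion error, so the loop body never runs with a literal '*.csv'
    # (which is what plain bash would otherwise do).
    for f in ./*.csv; do
      process_file "$f"
    done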

Indeed I add the following two lines to every bash script I write:

    set -exu
    shopt -s failglob
