In both of your examples I could see the model becoming the default, with humans double-checking at best, if at all. When things go wrong as you describe, humans get pulled back into the loop by the people who were wronged (unless those people give up first).
There is already a company making money automating auto insurance claims: https://tractable.ai/en/products