The lawyer was just trying to outsource his job of plausible bullshit generation. He would have gotten away with it, too, if it weren't for the fake cases.
I don't know how else to get this message across: ChatGPT does this all the time, in every subject.
It doesn't just occasionally hallucinate. The mechanism that produces its correct output is identical to the one that produces its fabrications, and it can't tell the difference between the two.
There is no profession where a) you shouldn't prefer an expert over ChatGPT and b) you won't find experts idiotically using ChatGPT to reduce their workloads.
This is why it's a grotesquely mispositioned and mismarketed set of products.
That's just the lawyer being clever -- it's obvious that claims made in a fictional context don't have the kind of weight he's ascribing to them. It's funny, though.
He probably accepted ChatGPT's claims at face value, assumed they'd be at least somewhat grounded in truth, and figured the subsequent filing would have some reasonable logic supporting those claims.
Their new problem is that no judge will ever give them an ounce of leeway again.
I don’t know why a lawyer would trade their professional reputation for a lawsuit they know will lose, but good on thedacare for finding that idiot.