
Dealing with an AI will be much better than dealing with a civil servant, in most cases. There is a certain kind of person who becomes a civil servant and many of them will not only have a hostile attitude, but also make it their life's mission to try to do as much damage as possible in the pettiest ways possible to the people they "serve". Especially if you're of a sex, ethnicity or age group that they hate. Sometimes the same as theirs.

Letting citizens deal with their bureaucratic errands with an online form or portal instead of with a civil servant in their office has been an enormous benefit, in the places that offer this. An AI will fuck things up, being an AI, but it will not necessarily treat people with a hostile attitude and lie to clients to spite them. Unless it's programmed by civil servants, that is.




This already happens. It's actually bureaucracy, but it's rarely called that.

Try dealing with YouTube after a fraudulent copyright strike, or getting money from Amazon or PayPal after your account has been suspended for an arbitrary reason.

AI has the potential to automate bureaucratic corporate hostility and indifference to weaponised levels.


While it's awesome that they're solving a real problem some people have and managing to make a business out of it, I think the need for such a service is a worrying civilisational warning sign. Same with the appearance of AI lawyers.

Eventually, these services will be countered by opposing services. We are getting ourselves into a bureaucratic arms race. Crumbling under the weight of its own bureaucracy is one of the failure modes of civilisation.

We, humanity, have all of the tools and resources to solve the issue at the root cause. We just do not have the right system of incentives.


It is getting clearer by the day that humans will use AI to apply laws written for humans. And this will cause a lot of grief for the people on whom these laws are imposed.

In outright totalitarian countries government access to your agent will be mandated.

In more democratic countries we'll see lots of suits and court cases over getting access to these agents and assigning intent to your IRL actions based on your 'thoughts' with the AI.


I feel it's really problematic in a bureaucratic environment. An AI won't have any compunction about making a false claim, but a bureaucrat has to stick their neck out to say, yeah, no, that's ridiculous. It's also easy for an AI to make an authoritative-sounding accusation, and then for the system to demand that the victim prove a negative.

You're assuming the AI lawyer would be worse than a person.

I am critical of AI replacing jobs that I believe are better done by humans, but I fear that economic pressure will push us toward the worse but immensely cheaper solution.

That said, imagine a justice system not barred behind immense costs. It is just as much a problem in free and open societies that legal rights cannot be enforced because enforcing them is simply too expensive. There is also legal abuse by the political or capital class, where the process is the punishment.

Sure, some might be shocked that our rights may some day be laid in the hands of an AI. But it is worth pondering whether the lowest form of sentient life is an AI or a lawyer.


Eventually, somebody will write bots to deal with this robotic bureaucracy, and it will be a matter of how many licenses and how much CPU power you have to negotiate a better deal. Essentially, it will be like how rich people solve issues between themselves nowadays: using lawyers.

I don't have empirical evidence, but I think people are overstating the harm of opaque AIs.

On the one hand, opaque bureaucratic processes already exist today. It's not like the bank that refuses you a loan will tell you "oh, but we would totally have granted it if you were 20% richer and also not a woman, so come back after you've achieved that and we'll give you the loan"!

On the other hand, even opaque models can be studied and systematized. You can't run a simulation of a judge on 100 sample cases, then on the same 100 sample cases with names changed to sound like immigrants, and numerically measure the judge's bias. You can do that with a ML model, and compensate in various ways.
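The auditing idea above can be made concrete: because a model can be re-run at will, you can score the same cases twice with only the names changed and measure the shift directly. Here is a minimal Python sketch, where `score_case` is a hypothetical stand-in for the opaque model under audit and the case texts are invented:

```python
def score_case(text: str) -> float:
    # Toy stand-in for the opaque model; a real audit would call the
    # actual model's prediction API here.
    return 0.8 if "John" in text else 0.7

# 100 identical sample cases, then the same cases with only the name swapped.
cases = [f"Defendant John, case #{i}: minor traffic offence." for i in range(100)]
swapped = [c.replace("John", "Jamal") for c in cases]

# Per-case score shift attributable solely to the name change.
deltas = [score_case(a) - score_case(b) for a, b in zip(cases, swapped)]
mean_bias = sum(deltas) / len(deltas)
print(f"mean score shift after name swap: {mean_bias:+.3f}")
```

A nonzero mean shift quantifies exactly the name-sensitivity described above, and once measured it can be compensated for (e.g. by stripping names before scoring). No human judge can be probed this way.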


I could see an AI lawyer being a great assistant for human lawyers. I dislike the idea of an AI lawyer being appointed to me if I cannot afford my own representation, especially as the county's NN isn't going to have as many resources as Kirkland & Ellis's NN that's been trained on all the cases they downloaded from LexisNexis.

I think a citizen making governing decisions with the help of an LLM would be more realistic to work, legally. An AI-enhanced mayor, if you will.

Yeah yeah AI will now become your public defender option...

I would be more worried about AI enforcing laws that don't actually exist.

> It’s more likely that the AI would free up public defendants from things like having to answer their clients’ questions, as they could just ask the AI, allowing them to be more effective

Public defenders would be more effective if they didn't have to talk to their clients?


Can't AI eventually bring better strategy and judgement? The face-to-face consultation process doesn't require lawyers either; it could be more of a social-worker role, someone who acts as a buffer between some people and the AI.

Give me the option between lawyers and AI and I'll choose the AI, as I've been burned pretty badly by lawyers before. Degrees don't guarantee competence.


I agree with the use of such tools in our personal lives, as liability is not as big a problem and the stakes are lower generally.

I only object to the use of AI in places where I have to put my signature and will be held liable for whatever output the AI decides to give me.

Like that lawyer who filed a whole deposition written with AI and got laughed out of the room; in many other countries he could've been held liable for the other party's court fees, and lawyers aren't cheap! I don't imagine his employers were very happy.


These AIs will emulate the behavior of our current system, so if such compassion and sympathy exist, they will be preserved. Also, consider that the alternative is a system where people are frequently bullied by those with deeper pockets to pay for good lawyers, which is an immense failure.

Being in the public eye is already miserable. The public eye appears to be a close relation to the Eye of Sauron. These people smile a lot, but they don't look like they're having much fun.

The existence of AI will make life a bit worse for some of them. But life being better or worse doesn't really reach my standard of 'more harm' here. I suppose what I was thinking is a bit like if someone cuts in front of me in a queue. I might be worse off, certainly could feel annoyance, but I wouldn't say I'm harmed. I just don't like it. For harm to have happened I would actually have to be ... I dunno, harmed.

Ms. Swift already has to deal with literal stalkers. The baseline of what public people are expected to put up with is already high. I'm not sure this thing is really moving the bar.

> I'd also be curious if you think...

Two-thirds of that (bullying, miscarriage of justice) are already major problems. The legal system is already a crapshoot; they do their best, but it makes a lot of mistakes. I could see the net effect of AI being helpful in both cases; I'm more excited about neutral AI judges than I am worried about evidence quality dropping off.

Con artists, we'll see what happens. The major protection I've relied on to date has been rollbacks and balances rather than preventing fraud up front. Social engineering is hard to stop.

You can feel concerned about whatever you like, of course.


It can help if there are simply not enough workers available and it's used to boost productivity of the existing workers.

though in most cases it probably still won't work out or help society.

E.g., from what I've heard, public defenders in the US are notoriously overworked; if an AI could help them auto-prepare all the simple, straightforward cases, they could do a better job. But this is also a good example of why it probably still won't work out well: either nobody will pay for that tool, or they will pay for it but then expect so many more cases to be handled that things get worse instead of better.

