
I suspect it’s going to be a discussion similar to the introduction of music sampling, followed by a lot of litigation, followed by a settling of law on the matter.

The interesting part is if AI will be considered a tooling mechanism much like the tooling used to record and manipulate a music sample into a new composition.




Is a definitive legal ruling even possible? Looking at music for example there are still many cases being fought over stolen melodies and that’s been a thing for generations.

I think it’s going to go a similar way with "AI" - lots of court cases, lots of different and nuanced rulings converging on a general idea of what the legal boundaries are, but with many grey areas in between.


I think what’s clear is that this is an unprecedented type of use. I’m really interested in seeing how the courts rule on this one as it has wide implications for the AI era.

The future will be AI lawyers battling for rights of AI generated music in the style of deceased artists on behalf of AI media corporations at the expense of robotic listeners.

Before all of this, we'll probably see improvised bands of deceased artists playing AI-generated music together in their own style, not to mention long-dead actors appearing in new movies, etc. AI technology is going to give law firms a lot of work in the future.


I think what's going to happen here is that almost every dispute will end in a court case, litigated using AI (because it's cheaper).

It's going to suck so hard.


Has the exact issue of a remixing AI been tested in court? No. But everything even remotely similar has been deemed legal. Considering the legal and financial backing on both sides of the issue I expect it to go Microsoft's way even if it does end up before a judge.

And next thing you know, some AI will want its say in court, and it will likely have a very convincing argument.

Arguments will be formulated by AI with another AI attempting to poke holes. You get a government appointed AI if you cannot afford one. This will kick off an arms race between plaintiffs and defendants. Legal companies then build moats around their bespoke AIs and it all boils down to a judge/jury voting based on a generated slideshow presentation (hopefully avoiding a miscarriage of justice /s).

What will happen is AI will be fed IP legal corpus as part of training. It will only show images that can't be linked (in a justice setting) to any particular work. Which will be interesting because it will produce media culturally alien but still appealing and probably addictive. It will literally be the engine of advancement of human culture. I'm not the conservative type, but I'm still slightly concerned as much as fascinated.

So ... is there going to be a follow up about the legality of such a conversation or is this just a cute prompt engineering instance found in the wild?

I am greatly interested in seeing the liability of mismanaged AI products


From the article, it sounds like this is already being handled through an existing legal framework. That it involves AI just reads like headline bait and maybe a dinner table discussion starter for how these could easily ramp up and become harder to trace.

Yup.

AI is gonna pour rocket fuel on this stuff. There's already a great deal of talk about replacing lawyers with AI.


It's the most analogous process we have; the courts may not end up treating them the same, but it's unlikely they'll say it has nothing to do with how AI gets regulated.

The idea of compelling an AI to testify in court is fascinating. I wonder if there could eventually be a civil case about this.

The more I think about it, I think it will (and should) turn on the extent to which "the law" considers AIs to be more like "people" or more like "machines." People can read and do research and then spit out something different.

But "feeding the data into a machine" seems like obvious infringement, even if the thing that comes out on the other end isn't exactly the same?


It can always get more meta.

For example, the AI tool that Microsoft's lawyers use ("Co-Counsel") will be filing the DMCA notices and subsequent lawsuits against Co-Pilot generated code.

This will result in a massive caseload for the courts, so naturally they'll turn to their AI tool ("DocketPlus Pro") to adjudicate all the cases.

The only thing left is to enter these AI-generated judgments into Ethereum smart contracts. Then it's just computers suing other computers, and being ordered to send the fruits of their hashing to one another.


In the case of AI, it seems to be a legal grey area that's actively er... evolving. ;)

Your first point is not really up to the AI, but a legal question, right?

And about the other points, just wait a bit. They don't sound actually so unrealistic to me, maybe in the next 20 years.

About the second point, I think there is already an industry specializing in this.


What's your feeling on where those types of lawsuits will land? Many people are probably also making product decisions today with an eye on what the precedents might be in the future.

I kind of feel that with AI regulation in general, the USA will have to go all-in somehow. The EU will fade into obscurity, but this technology is too valuable and China is breathing down the USA's neck, so I expect the law to continue to be lax around the IP of data used to train AI models.


I'm really looking forward to the EU framework around "AI". It's definitely a better approach than having individual artists sue and get dismissed on technicalities (that don't even apply in most of the EU - e.g. in France, if you release something by default you get copyright on it, so the judge's reasoning couldn't apply here) and judges deciding based on their interpretation of vague laws crafted in an age when "AI" was little more than niche science fiction if that.
