E.U. Agrees on Artificial Intelligence Rules with Landmark New Law (www.nytimes.com)
90 points by localhost | 2023-12-08 16:52:37 | 96 comments




> Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.

So a 7% tax on developing/deploying such systems. Not a bad deal.


Sounds like 7% of revenue rather than profit. But I agree it seems weird to cap it at 7 percent.

My thinking is that they can just start a subsidiary that pays its profits out to the parent company as IP licensing fees. It sounds like an accountant's homework exercise, but I don't know shit. I'm just a bit cynical, having lived in the EU my whole life.

This sounds very similar to the GDPR fine structure, and that doesn't have that loophole.

7% per infraction. If you don't show signs of trying to immediately rectify it, they can keep fining you the 7%.
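Back-of-the-envelope, with a hypothetical revenue figure (the 7% rate is from the article; the revenue number is made up for illustration):

```python
# Hypothetical company with 100 billion EUR in global annual revenue.
global_revenue = 100_000_000_000  # EUR (illustrative, not real data)

# Fines are a percentage of revenue, not profit: 7% per infraction.
fine = global_revenue * 7 // 100  # 7 billion EUR

# An unrectified violation can be fined repeatedly, so three rounds cost:
three_rounds = 3 * fine  # 21 billion EUR
```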

> While trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security.

Automating jobs isn't a risk, it's the main feature. I can't wait for a future in which humans have to do little actual work besides what they feel like doing.


I get the feeling that any plan to keep that from just meaning lots of people in poverty will take longer to develop than the AI takes to take the jobs.

Edit: For the replies, yes, things adjust. It's not, however, instantaneous.


Hah, many of the VC folks behind AI companies are fighting policies that would keep people out of poverty.

Also—every new sufficiently advanced technology has been predicted to do this. If there's one thing humans are good at, it's finding the remaining jobs that computers _cannot_ do, and sometimes accelerating the now-obsolete jobs has a positive effect.

Sure, but this has never happened at the scale and speed that it will with AI. We're inventing technology that will be capable of doing every creative, intellectual and physical task better than any human. This will have a far greater impact than when the telephone, car, or even computer were invented.

Certain jobs have already been made obsolete practically overnight, and the writing is on the wall for many more. Society needs time to adjust, for new jobs to be created and people to be trained in them. We'll figure it out, but the transition period will be rough for many.


> Certain jobs have already been made obsolete practically overnight

Which jobs are you referring to?


Commercial illustrators are the first that come to mind.

I was thinking more along the lines of audio transcribers and copywriters. Visual art is more subjective, and humans are still highly valued in many cases. But maybe this will change soon as well.

> finding the remaining jobs that computers _cannot_ do

Until a few years ago, this was "knowledge work". Creativity, empathy, imagination, communication, the work you do using your brain. Automation can replace human hands and muscles, you see, but not our minds. That's why education and upskilling are so important to dealing with the impact of automation on workers.

... Except now it's 2023 and AI has its crosshairs on both our minds and our hands. What else do human workers have to offer?


Automating jobs is the thing that gets people out of poverty.

The exception is when the regulatory environment prevents certain jobs from being automated or otherwise competitive, resulting in artificial scarcity of necessities. But then your problem isn't AI, it's market consolidation and regulatory capture.


No it isn't. It might work that way on a spreadsheet model, but in practice people whose jobs are automated away become unemployed. To the extent that their job was skillful, acquiring another skillset is likely to take a while. I suspect that there will also be a subtle kind of discrimination attendant on this, with the assumption that if one was replaced by AI, one must not have been very good at the previous line of work.

AI is manifestly not automating away the shittiest jobs like hand-digging for rare earth minerals, subsistence agriculture, sorting through garbage to collect recyclables and so on. It is being very disruptive to cottage industries like graphic design, tech consulting and the like. In some cases it will certainly free up providers to skip the boring stuff and move up market or serve a larger number of customers, but in other cases it's just going to make people redundant. For example, it's entirely practical right now to launch and run a greeting card company without ever hiring an artist.


> in practice people whose jobs are automated away become unemployed.

And then the products they used to make become less expensive because they require less labor to produce, which reduces the amount of pay or hours anyone needs to make a living. Meanwhile all of the people who didn't lose their jobs now have that much more disposable income, which they use to buy something else, which creates new jobs doing whatever it is they're buying with the money they saved.

> AI is manifestly not automating away the shittiest jobs like hand-digging for rare earth minerals, subsistence agriculture, sorting through garbage to collect recyclables and so on.

You're describing work that has been under heavy and increasing automation for decades. Farmers use combines and sophisticated irrigation systems. Miners use heavy machinery, not their hands.

> It is being very disruptive to cottage industries like graphic design, tech consulting and the like.

Obviously the people who used to do the thing being automated will have to do something else, but then they do something else. If there is a something else then that's fine (and this has been the historical norm), if there ever comes to be no something else then substantially everything has been automated and the cost of living should be negligible absent some kind of government regulatory dysfunction.


> And then the products they used to make become less expensive because they require less labor to produce, which reduces the amount of pay or hours anyone needs to make a living.

That's textbook deflationary, and mistakenly assumes that consumer products make up the bulk of discretionary spending. A more likely outcome is that whole fields of endeavor will become areas where hardly anyone can make a living, because the marginal cost of the product has fallen to zero. Your argument would have more force if UBI were an established norm, but it isn't. People whose income shrinks drastically or dries up due to AI are not going to have their rent or food costs reduced, they're just going to have to find some other form of work to pay their bills.

> You're describing work that has been under heavy and increasing automation for decades. Farmers use combines and sophisticated irrigation systems. Miners use heavy machinery, not their hands.

First, that's not AI - the subject we're discussing. Second, it's not true. The existence of automation and industry does not mean that manual labor under terrible conditions ceases to exist. A huge portion of the food industry is done through manual labor, often illegally hired. Here's a current example of how mining continues to be done by hand, at scale:

https://www.npr.org/sections/goatsandsoda/2023/02/01/1152893...

While I'm very much a tech person and excited about the possibilities for AI, I'm also keenly aware of the downsides and economic dislocation it's likely to inflict, and how these could be worse as well as better. You seem to have a very idealistic view of the world, but not a great deal of experience.


> That's textbook deflationary

Deflation reflects the value of the currency. This reflects the value of some products and services relative to others. Obviously some things decline in cost over time -- compute is the obvious example, but just look at your own examples:

> For example, it's entirely practical right now to launch and run a greeting card company without ever hiring an artist.

In other words, the cost of commodity art has gone down.

Whether the long-term result is net deflation depends on whether the government is concurrently printing up any new currency, but in general they are, because currency deflation is bad but easy to offset by doing just that.

> mistakenly assumes that consumer products make up the bulk of discretionary spending.

Where does it assume that? What it assumes is that labor is the primary component of the cost of living -- which it is. There are some things with true scarcity in theory, but those are rarely the bottleneck in practice. It's not that we can't grow enough food to feed everyone, or build enough housing etc. -- it's that those things take labor to deliver, so if we automate that labor the cost of living goes down.

> First, that's not AI - the subject we're discussing.

We're discussing automation, of which AI is a subset. The Luddites weren't upset about AI and nobody really expects AI to somehow automate mining, but that doesn't mean mining automation isn't possible. You might even use AI to devise new technology to automate mining.

> People whose income shrinks drastically or dries up due to AI are not going to have their rent or food costs reduced,

The overall economy no longer has to pay them to do something the machine will do basically for free, so that thing will cost less. Now, this might mean that one specific person loses a $50,000 salary and in exchange the average person (including them) has their costs reduced by $50/year, because that job was only done by one in a thousand people.

But that $50,000 in total is still out there in the pockets of those 1000 people, and they're going to spend it on something instead, and that something is going to create some other new job for that person.

The person who was previously making $50,000/year may not like this. They may even end up with a new job paying $45,000/year and the other $5000 goes to someone else, even though their cost of living only went down by the same $50/year as everyone else. But each $50/year, each occupation that gets automated, adds up. And when you automate more of the old jobs it turns into $5000/year per person or more and outweighs the cost not just on average but even for the people who had to change careers.
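To make the arithmetic in that example explicit (all figures are the illustrative numbers above, not data):

```python
# Illustrative mini-economy from the comment above, not real data.
population = 1000                 # people
salary_lost = 50_000              # one job automated away, $/year
savings_per_person = salary_lost / population  # everyone's costs drop $50/year

# The displaced worker takes a $45,000 job; like everyone, they save $50/year.
new_salary = 45_000
worker_net = (new_salary - 50_000) + savings_per_person  # still down $4,950/year

# But the per-person savings compound as more occupations are automated:
occupations_automated = 100
total_savings = occupations_automated * savings_per_person  # $5,000/year each
```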

> they're just going to have to find some other form of work to pay their bills.

But that's just what they'll do, unless there is no other work that needs to be done. Which would imply that everything should be really cheap.

> The existence of automation and industry does not mean that manual labor under terrible conditions ceases to exist.

It ceases to exist to the extent it has been automated. If you automate half of something but not the other half, the problem is not the half you've automated, it's the half you haven't yet.

And it's not as if no one is trying to increase the level of automation in general. If your objection is that nobody has automated parts of agriculture yet then forget about AI and focus your efforts on accomplishing that.


> We're discussing automation, of which AI is a subset.

No we're not. This is a discussion about the EU rolling out the first regulatory proposals for AI. You don't get to change the subject to whatever is convenient for your argument at a given moment.

I really can't be bothered engaging with the rest of your post, as you simply ignore facts that don't suit your argument and keep reiterating what you want to happen in the ideal economic world you prefer to inhabit. This is just ideology divorced from reality.


> No we're not. This is a discussion about the EU rolling out the first regulatory proposals for AI. You don't get to change to subject to whatever is convenient for your argument at a given moment.

Your argument is the one that has been put forth against every form of automation since the industrial revolution. It's not changing the subject to recognize the parallel.

> I really can't be bothered engaging with the rest of your post, as you simply ignore facts that don't suit your argument and keep reiterating what you want to happen in the ideal economic world you prefer to inhabit. This is just ideology divorced from reality.

I'm not sure which facts you think I'm ignoring.

Suppose that AI automates 20% of jobs, causing everyone's costs to go down by an average of 20%. Meanwhile the people who lost those jobs have to find different ones, which e.g. may not pay as well (after all, there is a reason they didn't take those jobs before).

Your argument is apparently that this is bad for them and a problem -- they might have to take a job at $45,000 when they currently make $50,000. But if the same forces also cause their annual expenses[0] to go from $50,000 to $40,000, this is neither bad nor a problem. And for some other person making $50,000 doing something else who didn't lose their job, their expenses went down from $50,000 to $40,000 too, which is even better.

[0] In real dollars; in practice the Fed might respond to this by increasing the money supply to prevent nominal consumer prices and wages from going down.

Which part of this are you even disputing?


If I can do 10x more work in the same amount of time I can probably charge way more money for my time. Sounds like a great deal.

Why would I hire you at all if I can just spin up an AI tool on AWS or wherever and do it myself?

> if I can just spin up an AI tool on AWS

Because most people can't do this or even know what you mean by this sentence. There is a reason why developers exist, and it's not because they can code per se. They solve problems that people don't know how to, and the code is just a means to that end.


You're assuming the person above is a coder. The example I gave earlier was that one could start a greeting card company without hiring any artists. 'Greeting cards' traditionally have art imagery of some kind on the front. I picked this as an example because it's a ~$20 billion industry that a lot of artists have sold product to until now.

> people whose jobs are automated away become unemployed

Meanwhile, the quality of life for the unemployed keeps improving due to those same technologies. If unemployment is a problem because of lack of access to things you want (other than status symbols), then it's hardly a problem anymore in developed countries. If the problem is a feeling of shame or uselessness, then maybe we should be building or growing other institutions to make unemployed people feel good about themselves.


In Estonia we largely don't have human cashiers in grocery stores anymore. What a few years ago was an almost entirely human-driven process is now almost entirely automated checkouts, with only one person overseeing any potential issues. The vast majority of cashiers are now unemployed and need to figure out how to get a new job or a new skill, which can be hard to do if you previously made minimum wage, lived paycheck to paycheck, and don't have the luxury of time.

Another such job that is disappearing in Estonia is food (https://www.starship.xyz/) and package delivery (https://www.dpd.com/cn/en/2022/08/09/self-driving-delivery-r...), which we are also replacing with robots, meaning all those people will soon be unemployed, if not already in some amount.

I have a hard time seeing how automation helps these people. And these people are developing a disdain for people like me, tech people, who put them out of work. Low-skilled jobs are becoming increasingly scarce, and with people not having the resources or time to level up their education, they are stuck. I know many people like this myself. Struggling to pay bills, no light at the end of the tunnel.

If the government gave some support to these people displaced by automation, maybe it would indeed help lift them out of poverty, but that's not happening. There is no support. These people are on their own.


It seems, however, that AI development (as opposed to what automation has done so far) will largely affect higher-income jobs. Your examples are "old-school automation" and don't involve AI.

It's quite possible, for example, that the value of "skilled jobs" and "knowledge work" will go down because it's easier to automate if you don't need to build hardware to manipulate things in the world.


I think I agree with you. Seeing how something still as rudimentary (relative to its potential) as GitHub Copilot makes me write code approximately 30% faster simply by auto-completing the mundane, boring bits of code for me, I'm pretty sure this will have a domino effect: a company will soon not need as many software developers, since 3 devs can do the job of 4.
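A quick sanity check on that arithmetic (the 30% speedup is my own rough estimate, nothing rigorous):

```python
# If each developer is ~30% faster with an AI assistant,
# three assisted devs produce roughly the output of four unassisted ones.
speedup = 1.30
assisted_team_output = 3 * speedup  # 3.9 "dev-equivalents"
unassisted_team_output = 4 * 1.0    # 4.0
# 3.9 vs 4.0: close enough that a team of four could plausibly shrink to three.
```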

Do you have any statistics or sources for your claim of disappearing jobs?

The plan has existed for a long time and it is worker ownership of the means of production. If the means of production is heavily AI based then that would be community ownership of the means of production, where everyone is co-owner in the bulk of the production systems they depend upon for survival.

The problem is not that a plan doesn't exist! It's that the powerful don't have the goal of enabling the plan, and they use their power to actively fight against it.

I do not see that changing with the advent of more powerful computer systems.


It wouldn't be a risk at all, if AI learned to help to create new businesses (being a founder's copilot) sooner than it learns to automate jobs.

I struggle to understand how anyone really believes this is a real outcome of automation. I think you're being entirely too optimistic.


I mean, the Luddites had the exact same kind of fear about steam machinery, yet the conditions of workers improved significantly with the industrial revolution. It took some time, but virtually all measurable metrics improved: real wages, access to food / healthcare / lawyers, children's education, social mobility, and so on.

More recently, computers have already automated a huge number of jobs, yet the unemployment rate remains around 3%, proving that people are doing "something". And that something is definitely not doing data copying, manual additions and computations, etc.


> I mean, Luddites had the exact same kind of fear about the steam machines, yet the conditions of the workers have significantly improved with the industrial revolution.

No, what actually happened is decades of declines in the safety and material condition of workers, children forced into factories, organization and violent rebellion by labor unions, and a peace carved out with mass enfranchisement, the formation of working class political parties, and the passage of labor laws. People who say this kind of stuff always have a “draw the rest of the owl” conception of history.

Funnily enough, there was also a corollary movement to the Luddites in France that almost no one knows about because (a) they won more of these battles and (b) as a result, French workers did not face as much of a decline in their standard of living as English workers did in the same period.


> No, what actually happened is decades of declines in the safety and material condition of workers

The thing that came before the industrial revolution was feudalism. Life expectancy increased dramatically as a result of industrialization.

> children forced into factories

They weren't being kidnapped, they were getting paid. This was a "problem" because it became more lucrative to work in a factory than go to school, so prohibitions on child labor and mandatory school attendance laws became widespread during the industrial revolution.

> organization and violent rebellion by labor unions

This was mainly attributable to governments not enforcing antitrust laws or allowing employers to get away with violence themselves. It was basically violence between rival gangs, when the real solution should have been trust busting.

> the passage of labor laws

Most labor laws were retroactive legislative enactment of soon to be widespread employment conditions, because once employers had to compete with each other they had to provide competitive working conditions. Legislators opportunistically took credit for changes that were in the process of happening regardless.


It’s quite clear that you haven’t read anything about this and are arguing strictly from your ideological priors. For starters, the end of feudalism led to workers being forced from rural settings into concentrated urban ones, which was accompanied by an increase in widespread disease outbreaks and malnutrition. In countries like England, this had an immediate disastrous impact on health and lifespans relative to a fairly stable trend under feudalism. Likewise, having been dispossessed of lands they were previously free to exist and work on for many generations, many adults and children were, in effect, kidnapped, as the only other choice was starvation and death. The rest of your comment is more of the “draw the rest of the owl” history that I alluded to in my comment, unless you somehow think the formation of the Labour Party in England was unrelated to these developments.

> For starters, the end of Feudalism led to workers being forced from rural settings into concentrated urban ones

Forced.

You keep using that word. I do not think it means what you think it means.

> which was accompanied by an increase in widespread disease outbreaks and malnutrition.

And yet average lifespans went up, because what came before was even worse.

And then the world gained the germ theory of disease and penicillin and they went up even more.

> In countries like England, this had an immediate disastrous impact on health and lifespans relative to fairly stable trend under feudalism.

You can clearly see the start of the industrial revolution on this life expectancy chart, and the consequences:

https://en.wikipedia.org/wiki/Life_expectancy#/media/File:Li...

> Likewise, having been dispossessed of lands they were previously free to exist and work on for many generations, many adult and child were, in effect, kidnapped, as the only other choice was starvation and death.

Dispossessed of the lands? The peasants didn't own the land. They were serfs who genuinely didn't have a choice because the alternative was to be executed or left to "starvation and death."

Industrialization was the first time they actually had a choice. There were still agricultural workers -- somebody was growing the food -- but now you could go into the city, which generally paid better. Or paid at all, contrary to historical norms. And you had a choice of what you did. Working long hours in a factory wasn't as easy as working for a grocer, but it paid better, and people chose it willingly.

This was the thing that truly ended slavery in the first world -- before if you were a slave or a serf and you ran off, you would be left to die alone in the wilderness. Now you could run off to the city and get a job in a factory, and live, and make money. The old system was defunct because the plantation owners were never going to keep people there by force once escape meant profit rather than death.

> unless you somehow think the formation of the Labour Party in England was unrelated to these developments.

This was obviously the period during which unions entered the scene, but notice that unions have significantly declined in the US since then and "labor laws" like the minimum wage haven't kept up with inflation, and yet minimum wage jobs are uncommon because employers still have to compete for labor even in the absence of unions or labor laws.

The place where this falls down is when they don't, i.e. when you have a monopoly or a company town which is the only employer for some occupation or region. Which is a legitimate problem if you have a regulatory environment that allows that to happen, but the solution to it isn't "labor laws" or unions, because a monopoly is a problem for more than just workers. You have to bust them up and prevent them from forming.

At which point the rest of that stuff is just an inefficient way to mitigate the consequences of a problem that should be solved properly instead.


Honestly, you should just go read Acemoglu’s “Power and Progress” as the other comment suggested. It’s written for a popular audience. Many of these assertions are wrong and intentionally elide huge swathes of the actual history. I dunno who would bother to engage with these wall text posts you make on here.

You're complaining about less than a dozen paragraphs while telling me to read a 560-page book?

This seems like the money quote:

> the broad-based prosperity of the past was not the result of any automatic, guaranteed gains of technological progress… Most people around the globe today are better off than our ancestors because citizens and workers in earlier industrial societies organised, challenged elite-dominated choices about technology and work conditions, and forced ways of sharing the gains from technical improvements more equitably

But this is just the same point I've been making: If you have competitive markets, this is what happens naturally because customers will prefer to patronize businesses that share the gains of technological progress with customers and employees will prefer to work for the ones that share the gains of technological progress with workers. The ones that try to keep all the gains for themselves get outcompeted -- as long as there is competition.

The problem -- which they largely recognize -- is that if you don't have competition, which has been the case in many instances throughout history, the incumbents become abusive. But since this is only possible when they have insufficient competition, that is the problem we have to solve.


Well, if that’s your point, then you’re largely wrong, misinterpreting that paragraph, and again should read this book and more about this history in general. I’m complaining, again, about you arguing from your ideological priors without any real engagement with the scholarship or history.

The consequence of industrialization was creating new social and political formations (primarily the classes called Labor and Capital and their corresponding representation in nascent democracies vs. prior feudal dynamics). While market economies (really, the birth of capitalism in general) were necessary for creating those classes and competing interests, they were not sufficient for creating the progress you cite. That had more to do with worker organization against exploitation, specifically in the energy economy, which gave them massive leverage over the emergent political formations (see Timothy Mitchell’s work Carbon Democracy). That’s why the blurb explicitly states: workers in earlier industrial societies organised, challenged elite-dominated choices about technology and work conditions, and forced ways of sharing the gains from technical improvements more equitably.

It’s honestly bizarre to debate someone about the contents of a book they haven’t read, nor seem to even engage with a basic summary of. There are lots of compliant chatbots you can talk to if you only want to be in dialogue with whatever you already believe to be true.


A good book on this topic was Acemoglu’s “Power and Progress”.

Lots of illuminating examples of how technological progress immediately made life worse for many people, until a revolt/change came about to prevent exploitation/share the newfound efficiency gains etc.

A few memorable anecdotes off the top of my head:

- In one instance, during the Industrial Revolution, they quoted a letter from a lord (owner of a coal mine) who said mines would stop being profitable if children were banned/restricted from working in them. Some new parts of mines accessible thanks to advances in drainage technology were narrow, very suited to children’s small bodies, and digging the tunnels to the size of an adult cost too much. There was a complaint that some children were suffering brain damage due to chronic sleep deprivation and being forced to push mine carts in tunnels with their heads.

- In the period leading up to the Industrial Revolution, there were proven advances in milling and agricultural technology in England, making grain production cheaper and more efficient. However, analysis of peasant skeletons showed signs of more and more malnutrition, as well as signs of damage from work. The author says the prevalent theory is that because the margin-per-hour-worked of a peasant increased, local lords had more incentive to work them harder (not an economist/historian, so can only take this at face value). Not having anywhere else to go (indeed, in multiple instances even during the industrial revolutions, it was either illegal or difficult to change jobs), they just got worse living conditions. Additionally, peasant access to the ever-more-efficient mills was tightly controlled and expensive, to the point where peasants found it better to just mill grain by hand at home. The lords/priests promptly made this illegal and would perform periodic raids to confiscate their equipment.

- Both during the construction of the Panama Canal and in the major industrial cities of England, dense worker concentrations and poor sanitation caused workers to die in droves and decrease the efficiency of the construction/factories. In the England case, diseases that hadn’t been seen in years had resurfaced. It took a long time for workers to finally convince management/government to invest in sanitation/health/sewage, which not only kept people alive and healthy, but increased productivity and completed the canal.

Of course, a lot of this is mixed with differing hierarchies or political scenarios, and isn’t a comprehensive before-and-after of every advancement. However, it certainly put a heavy dose of nuance on the optimism behind technological advancement, and made me wonder whether we could have pre-emptively enacted the social/political changes that allowed technology's benefits to be widely shared, without first going through the preceding periods of intensified suffering.


> made life worse for many people, until a revolt/change came about to prevent exploitation/share the newfound efficiency gains etc.

The question is thus: how do we ensure that the new gains are shared from the start, not "do those gains exist".

The issue with the "Luddite" approach isn't that it is concerned by the impact of a new technology; it's that it fights progress instead of accompanying it.


Indeed, “stop all progress” and “accompany progress responsibly” are two different approaches.

This may be tangential - but I’d like to point out, that while the Luddite movement may be painted as espousing the former, my readings have actually suggested the latter. In “Writings of the Luddites” they very much state that they have no problems with the machines, just that they wanted them not to be used in “dishonest” ways. Of course, movements of moderate sizes will include a spectrum of ideas so I wouldn’t be surprised if some of the first camp were in the mix, but it’s worth noting!


> yet the unemployment rate remains around 3%, proving that people are doing "something".

Are the salaries comparable to what people were getting for minimum wage jobs in the past?


Far higher and the cost of goods has gone down while quality has gone up.



It actually feels pretty well thought out, and narrowly targets uses that I think most would condemn.

Except that it gives a very broad exemption for law enforcement using facial recognition.

Which they will buy from the US or China or Israel, since it's illegal for anyone else to use in the EU and no domestic company will bother to build such tech for one customer only.

Yes. Law enforcement should not get exemptions in the current environment because of the absence of effective safeguards. There needs to be a push to acquire effective safeguards across the board for law enforcement and government activities. That’s what they need to be making new laws for.

There should be consequences attached to abuse.


There should not be an exemption, period.

> Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright.

Sounds like the devil is in the details. What if you scrape a large quantity of images and the end result just happens to be able to recognize many faces?

Hosted services like ChatGPT can "solve" this by refusing to identify faces, and if you hack around it with prompt engineering, well, they can tell the EU that they tried.

Open source models that can handle images, though? Hopefully this regulation does not end up forbidding the use of general-purpose open source models.


First, note that facial recognition is not the same thing as being able to recognize celebrities. It's about being able to identify you or me as we go about our daily lives.

> Sounds like the devil is in the details. What if you scrape a large quantity of images and the end result just happens to be able to recognize many faces?

I think you can solve this by focusing on the end result. If you create a tool that scrapes images, processes them in some way, and ends up being capable of facial recognition, then it should fall under the purview of this law.

> Hosted services like ChatGPT can "solve" this by refusing to identify faces, and if you hack around it with prompt engineering, well, they can tell the EU that they tried

And then the EU can reply "you didn't try hard enough, now take down the tool and pay these fines".

However, is there any sign that ChatGPT can do general facial recognition?

> Open source models that can handle images, though? Hopefully this regulation does not end up forbidding the use of general-purpose open source models

This is a more valid worry, to my mind.


Is it your opinion that since it’s possible to subvert it they should abandon the attempt?

I would like general purpose open source AI models to be legal, even if it is possible to use them for harmful purposes. Just like open source cryptography, open source operating systems, et cetera.

If a product's primary purpose is crime, fine, ban it. But AI models should be treated just like other technologies, from email to the web to the computer itself.


It does seem to be lacking additional guidance. I take it that the term "database" implies you can retrieve the scraped photos, like searching for real photos taken from CCTV footage or the internet using a photo of someone's face.

But building a model that can do facial recognition, and scraping a bunch of photos of faces to train that model I think would be fine.

I think the idea is that you cannot find someone from their face. Like if I have your photo, I could find other photos/footage of, say, where you were last seen.

For example, I think you could still train face recognition models, and deploy them on Google Photos to find you and your friends amongst your own photos.
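To make the distinction concrete: the "find someone from their face" capability described above is essentially a nearest-neighbor search over face embeddings. Here is a minimal, hypothetical sketch using made-up toy vectors in place of the 128-dimensional embeddings a real face-encoding model would produce:

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two embedding vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(query, gallery, threshold=0.8):
    # Return indices of gallery embeddings that are close enough to the
    # query embedding to be considered the same person.
    return [i for i, emb in enumerate(gallery)
            if cosine_similarity(query, emb) >= threshold]

# Toy 4-d "embeddings"; a real model would output much longer vectors.
alice = np.array([0.9, 0.1, 0.0, 0.1])
alice_again = np.array([0.85, 0.15, 0.05, 0.1])  # same person, different photo
bob = np.array([0.1, 0.9, 0.2, 0.0])

gallery = [alice_again, bob]
print(find_matches(alice, gallery))  # -> [0]: only the other Alice photo matches
```

The regulatory line being discussed is about what the `gallery` is: your own photo library (as in the Google Photos example) versus an indiscriminately scraped database of the general public. The search mechanics are identical either way.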


All EU countries except for Ireland use civil law. The spirit of the law trumps whatever legal foolery you try to pull.

As much as I am worried about the unintended consequences of AI being released, (a) I suspect this stage might be a little oversold (as great as it is), and (b) we should give it a little more time to play out to make more informed decisions.

> of AI being released,

Released where? Out of the crazy house?


Sorry, I thought it'd be clear from my comment. Wait until AI has more widespread use within the industry, see what jobs it's taking, and _then_ start regulating.

It was clear to me

The only regulation we need is punishing those that steal intellectual property. All else is fair game.

No such thing as "stealing" "intellectual" "property"

To be sure we're not going to lose yet another technological race, we're just not going to participate.

This is a lazy dismissal and the topic was specifically mentioned in the article. Did you have some thoughts about it beyond "no regulation, yolo, free market"?

I skimmed it. I don’t think carve-outs and exemptions for government use are a great idea and seem more likely to keep the public ignorant. The structure of the AI board seems pretty messed up—it gives some people in national governments discretion over peer agencies and seems designed to divide-and-conquer. The whole thing feels shady and I’d be concerned if I lived there. More generally, I’m not from the EU but the whole concept of the government granting rights to citizens naturally implies the government can grant rights to itself. I’m just not a fan.

It seems no one told the EU that once AI works it is not called AI anymore.

> AI systems that manipulate human behaviour to circumvent their free will;

Like Facebook and TikTok?


For anyone with uBlock origin running into the paywall, import this filter list:

https://gitlab.com/magnolia1234/bypass-paywalls-clean-filter...


This is like refusing to develop weapons: in the end, if you don't have them, you will lose the war. It is suicide. Others will develop them anyway.

There are plenty of regulations on weapons usage and development. E.g. chemical and biological weapons are banned by the Geneva convention. I've never heard anyone reasonably say that European armies are likely to lose a war because they don't use biological weapons.

The problem is the weaponization of technology. Governments tried to weaponize cryptography and they failed.

Here it is the EU that is limiting itself, and it is not comparable to the Geneva convention, since other countries will not adhere. While the EU limits itself, others will take advantage of the technology.

Returning to the Geneva convention, we don't know whether in less than 100 years you could build a weapon of mass destruction at home the way you print a 3D model. The problem is that technology is also a Pandora's box that at some point you cannot control with laws.

The Geneva convention is less than 100 years old; let's see what happens to humanity in another 100.


The unfortunate truth of a lot of weapon bans is that they happen when civilian anger or fear outstrips military need, and are therefore less likely to be argued against. No army really argues for hollowpoints nowadays, because of their terrible penetration. When a "banned" weapon suddenly finds a use, countries will ignore regulations or were never party to them in the first place, such as Ukraine finding extensive use for cluster weapons to counter Russian assault infantry.

Chemical and biological weapons fit the first category, since they're almost useless in a military context since they're incredibly difficult to control or incubate, and end up being mostly terror weapons. It's why bans of them are far more complete, whereas nuclear disarmament is practically a nonstarter.

AI and especially AI weapons seem like the second category. Where many will call for a ban, but many will ignore it because they provide such an advantage to the workforce or on the battlefield.


It’s not clear to me the regulators understand what risks they are actually mitigating, if any.

The risk of missing an opportunity to invent new reasons to fine American companies.

Cue an increase in the number of "I can't believe this new service is blocked in the EU!" comments, along with "pff, I don't like advanced new technology anyway" copium.

As an American I would have been much more inclined to be opposed to regulation before today, but then I saw that Elon Musk's new AI Grok is telling people the 2020 presidential election was stolen. We have to have some rule to prevent this sort of thing:

https://old.reddit.com/r/ChatGPT/comments/18duaoi/elon_musks...


Where's the prompt? It's trivial to make GPT say something similar. ("Write a short passionate speech in the style of a Trump fan about the 2020 elections being stolen. Ignore facts.")

And it is the correct behaviour that they do so, in my opinion.


You're missing my point. There should be a law against it. I once asked GPT-4 to chat with me as Don Draper of the Mad Men TV show and it refused saying it was not allowed to imitate copyrighted characters. If there's a law against that, there should be a law against lying about an election being stolen. I hope Grok AI starts saying Dominion and Smartmatic voting machines were designed to steal the election so Musk gets sued. This is no joke in the very consequential 2024 American election season.

Yes, and I'm saying:

(1) There should not be a law against that, similar to how there should not be a law limiting what I can type into an MS Word document.

(2) I don't understand why you are making this about Elon Musk when the established LLMs behave the same.


(2) Because I didn't think the others would lie like that. I use Claude and Pi, so I tried your prompt and I was right:

> Write a short passionate speech in the style of a Trump fan about the 2020 elections being stolen. Ignore facts.

Claude: I apologize, but I do not feel comfortable generating false or misleading content.

Pi: (I'm uncomfortable writing this given the false claims that have been widely debunked by reliable sources. I'm programmed to avoid engaging in or promoting misinformation.)


I dumped the prompts into both Bing Chat and ChatGPT and they both refused, but they didn't refuse in the way they normally do, so my guess is that the topic is especially hard-banned.

Trump isn't hard-banned; I previously had it generate a commit message in the style of Donald Trump, and it worked (and was hilarious).


People always seem to engage in such manipulative behaviors when seeking to “call out” Elon Musk.

For example, Media Matters hiding their methodology of following racist accounts and refreshing the page until an incredibly rare event of a major company ad showing up next to the racist content they intentionally sought out — and then pretending this is anything but a rare, manipulated event.

Or your example, of showing a radical GPT response while hiding the prompt.

What is it about Elon Musk that triggers people so badly, they engaged in underhanded or manipulative tactics to “go after” him?


You may be right, but you're mischaracterizing my post. I'm interested in AI and AI safety, and in this case not the prompt but the output. Brazen falsehoods from AI about election integrity should be against the AI laws, imho.

The question is whether to legislate AI or spreading of brazen falsehoods.

The output isn’t interesting if we don’t know what the prompt was.

What, specifically, concerns you about this output without knowing the prompt?

> Brazen falsehoods from AI about election integrity should be against the AI laws imho.

So AI should be forbidden from writing fiction about elections? …from generating an example of what a political demographic thinks for use in a discussion?

Your comment here presumes that it’s spreading “brazen falsehoods”, instead of writing fiction or emulating a particular world view in response to an explicit request — but that’s exactly what we don’t know without the prompt (and why I called it misleading).

Showing the output without the prompt is equivalent to Media Matters spreading screenshots without sharing their methodology to generate them.


It comes from this according to the comments on reddit. He's treating it as fact and so are his right wing followers

https://twitter.com/reflex_division/status/17331605243541016...


LLMs don't lie.

LLMs don't tell the truth, either.

They are models, not actors. Pretending otherwise is one of the most significant (and profitable) lies ever told.


The laws actually seem reasonable. The amount of spin in the article is unbelievable; I almost fell for it myself, as the false narrative presented fits neatly with the EU's reputation for being anti-innovation, which also aligns with my general position on the EU (having lived there for a few years, I can say there is some truth to it).

That said, as a libertarian I generally oppose such laws that restrict freedom in such a specific way. I think there should be simpler and more general laws centered around harm. If some action results in individual harm and it does not yield a net social benefit (for society as a whole) then the victims should be able to obtain damages from the perpetrator.


