The definition of a self-serving “study.” Also a good example of corporate hubris.
Close to 100% of people in the developed world were “impacted” and “affected” by everything from television to cell phones. We didn’t all lose our jobs. Get a grip.
yeah man. For sure AI is gonna change roles and make some jobs redundant. But it will create replacement jobs, like literally every other technological advance in history.
The study didn't say they will all simply lose their job? Did you just read the headline and assume "80% of jobs will be impacted by GPT" means they lose their jobs?
Impacted doesn't mean completely automated either; it can just mean assisting, improving efficiency, better quality, etc. Of the 80% impacted, they said GPT can help with only 10% of their work activities on average. So unless the other 90% of their job was doing nothing (and GPT isn't even fully automating the 10%), I don't think they'll have to worry too much.
You know, we don't have to release an AI that obsoletes a large number of people and creates previously unseen levels of unemployment and social unrest.
It only needs to be released by one entity. Better to figure out what to do when it inevitably arrives, rather than trying (and failing) to avoid the eventuality.
OpenAI hypes its own product. I can't even begin to wonder who at OpenAI thought they needed to create their own marketing here. Really some high level intelligence on the marketing team over there.
And in that case, AI is now showing how little innovation it creates by running an abysmal marketing campaign trained on how a majority of corporations have historically run campaigns.
>The occupations with the highest exposure include mathematicians, tax preparers, writers, web designers, accountants, journalists, and legal secretaries.
I'm surprised to see "mathematicians" here, but what do they refer to? Mathematician as an academic position? As teacher? As consultant?
it means that the writers recognize the topic but have not penetrated into the actual practice as it relates to economics. An analogy would be to say "English spellers are affected" because of spell checking software. I agree that including that raises more questions than it answers.
Most professional mathematicians work at universities. Even at well-ranked R1s, professors spend a non-trivial amount of time teaching and advising. The ones who do spend a massive amount of time doing mathematics are the least likely to be impacted by GPT-like technology because of the nature of their mathematics.
And, more to the point, mathematicians are often doing that work because they enjoy it. Even in the best case for GPT, with respect to most professional mathematicians, you've done the equivalent of automating away their Sudoku/Crossword time.
You might be able to replace some graduate student labor or post-doc labor, but a huge reason for doing that labor in the first place is to train new mathematicians... it's like the grad school version of automating a second grader's multiplication table: you can do it, and you are "automating" the second grader's labor, but you've kinda missed the point...
I wonder how many of the other categories are similar, where the tasks that are being automated are sort of pointless or even counter-productive to automate. Because automating those tasks disrupts the learning process between the professional and their customer, and that learning process is actually the primary product. Legal secretaries and accountants in particular come to mind. Even a lot of web dev work.
Reducing work for humanity is a net positive under two conditions:
1) the profit is distributed to as many people as possible (say UBI)
2) we find things to fill our time with apart from work.
Problem one is likely solvable over the long term. Problem two may become a challenge for many. Work often adds meaning to people's lives. Feeling like you can't contribute anything meaningful to society may affect mental health. On the other hand, we may find time to take care of younger and older people, spend more time with friends, etc.
I think a big question will be how fast the changes happen. A quick destruction of jobs without 1) or 2) will lead to negative consequences in the mid-term.
> Problem one is likely solvable over the long term.
It's not in the interests of capital to solve it.[0] Having high unemployment means cheaper labour. People have argued that capitalism should be replaced for more than a century, yet nobody's successfully displaced it.
[0] Obviously mass unemployment is bad for the economy in general, which would also ultimately impact capital, but for some reason that always seems to be a lesser concern.
Once there is AI-powered automated law enforcement to put down any resulting unrest, it's over. There will be no going back from the class divide at that point.
The rich will be living in their towers with the lucky few allowed to serve them and get paid a pittance, while the rest of us are starving in the streets.
It doesn't matter whether it is in the interest of capital. It matters whether it is in the interest of democracy. Hundreds of millions will vote for the party that promises to distribute these profits.
There is no single capitalism but many different flavors (US vs European welfare states).
The US and many other places have shown that you can quite reliably get people to vote against their interests if you can convince them that it would also benefit some group they don't like.
But fewer consumers of your products who are also earning less which means you can’t make up for it via higher prices. Taken to the hyped extreme AI means total economic destruction for labor and capital. In fact, unless it achieves self generation and maintenance of all supporting infrastructure and industrial inputs before this occurs then it’s the end of AI.
> fewer consumers of your products who are also earning less
This is a consequence of potentially collective, uncoordinated action. It's possible that every individual company says "it makes no sense to be paying this much for labor, i'm getting cheap AI/robots" and fires a ton of people everyone else be damned, and then runs into lower demand later because everyone else is doing it. So everyone acts in their immediate interest to be collectively worse off, not unlike... a bank run!
So it seems reasonable to be pessimistic about a potential beneficial outcome, given these conditions.
The quest for meaning is perhaps the lifeline for Meta's VR ambitions. If there's nothing left to do in the real world, people will get lost in endless generative content.
Humanity ends with people playing VR farming simulator.
If you could live in a Matrix-style creche it kinda makes sense to do it: you're physically safe, you're already in an intensive care unit if you ever get sick, and you can explore an unbounded world of experience...
I could see a future when the bulk of humanity lives in "the Matrix" voluntarily, eh?
I think video games already are this for a lot of people. I realized this while attempting to clear all the question marks in Skellige in Witcher 3, and I watch the games my nephews play and there is just so much grinding for loot. It really has in a lot of ways replaced having a part time job for them.
If billions of people chose to do nothing but barbecue and watch TV for their entire lives because we automated the entire economy, I would be 100% okay with that.
And once the people who are BBQing all day have done enough of it, and realize that it's no longer fulfilling to them, they'll find something else to do. (And some of them won't! For some, they'll feel joy in BBQing all day for the rest of their lives, and that's fine!)
For some, that other thing will be another leisure activity.
For others, it will be something like what we now see as work in one form or another, whether that be helping people in ways that robots and AIs can't, doing research, or even just doing some work that robots can do, because some people like to do it, and some people like to have a human doing it for or with them.
For others still it will be art: painting, writing, creating video games, becoming an actor or comedian, whatever.
Humans are not well-adapted to having nothing productive to do. If a given human wants to continue doing purely leisure activities, it is often because they have been so exhausted and abused by our current system that their body and brain need the time to heal.
And all of the arts before they became commercialized. It will be a boon for artisanal production, entertainment, and culture, which will help fill a gap for those less motivated without guidance or work, either by helping to create it or by engaging with the art.
This intro is like "blue sky" social thinking. Good on you to have broad ideas in the abstract, BUT evolution is based on things that really exist now, and the same applied 100 years ago or 1000 years ago. The winning modern human populations have mostly killed opponents, enforced order in hierarchical ranks, shaped the public media to reinforce order, and accumulated weapons, wealth, and security technology. Other civil people and wildlife have had to cope or die off, as there are few wild places on Earth now.
For owners, managers, others in authority, control is the daily game, not optimizing for some ideal commons. Violence is common and so are crooked lawyers and corrupt decision makers.
Unpleasant and low-paid work will continue and perhaps increase, as pleasant and stable breeding grounds deteriorate and are further embroiled in conflict and vice. Many capable people, for pressure reasons, will double-down on authority, control, security and law-enforcement. Trapping and tricking other humans for profit will increase. Indentured servitude via debt and other means, will increase. AI will be used for this.
1) Those in power tend to not give up said power. There are exceptions of course. Take for example the contracting of the British empire. However, as the outcomes of significant societal changes become less clear, those in power and comfort are less likely to forfeit those advantages.
2) In my own little bubble, no one I know is particularly fulfilled by creating CRUD APIs and gluing together microservices. I'd bet that with guaranteed financial stability, finding meaning in life would become easier, not harder.
How many presidents have held onto power in the US history?
Monarchies have a history of getting taken down (in terms of power). They held absolute power and a tight grip.
You could see money as a proxy for power, but that's only true as long as it's scarce and you can buy real power with it.
I've been a follower of your thinking patterns regarding 2) for a long time.
Life will be easier, no question. But that's a different question from the mental health one. Feeling useless is a pretty shitty feeling and a primer for depression.
> How many presidents have held onto power in the US history?
I see the point you're driving at but the democratic institution of the US has explicit guard rails against this. We force presidents to step down because we _know_ that voluntary relinquishing of power is unlikely.
I do see money as a proxy for power. If you make enough of it you can influence public opinion, enrich or destroy education, buy twitter, etc.
I love the idea of never having to work another day in my life, but, let's be honest here, UBI is never going to happen. At most it will be "if you are having no kids, here is some money until you die." Even that will be a stretch, because money just does not grow on trees.
> I love the idea of never having to work another day in my life, but, let's be honest here, UBI is never going to happen.
The system will collapse on itself if most are jobless. You can't sell products if no one can afford them.
My biggest gripe with UBI is that it just redefines zero. The better solution would be to do away with capital entirely, have people democratically vote on what services they want, and then conscript if there aren't enough volunteers/robots/AI to provide those services. There seems to be this tendency to forget that labor is what matters, not capital. Capital is a means of exploiting labor, but if labor can be automated or democratically voted for, then no one is exploited.
Money is accrued labor, whether in the form of cash, machines, or software, and money is the way to efficiently distribute it. Every purchase you make is a vote for a service or product. It's the most efficient form of democracy. It just sucks because it's scarce and not distributed fairly.
The value of labor is subjective. Some of the most important service people, like educators, are some of the lowest paid and their accrued labor is worth less once inflation hits. Capital, in general, creates perverse incentives, e.g. mainstream news is divisive and negative because that style of reporting maximizes profits. With money as the goal, it's no wonder many young people would rather become social media influencers than educators. That said, UBI is a more realistic outcome despite not fixing any of these problems.
Second: UBI is a response to economic conditions, not a driver.
Once the machines can economically/productively outperform most of us, UBI is an alternative to just letting us starve in the street, or riot, or whatever.
The robots provide an economic cornucopia exactly by putting people out of work. Supply becomes effectively infinite, but demand/purchasing power falls to zero (because everybody is out of work), so to rescue the economy (and civilization) you just give people money.
Yeah, trees are custom-designed biorobots that make apples, cherries, etc. that are way bigger than their wild counterparts. Yes, there is a lot of human work involved in nourishing the trees, keeping pests at bay, harvesting, distribution, etc. But in the past 200 years, the farming industry has seen one of the biggest transformations in how many workers are required: 200 years ago, high percentages of the working population were employed in the farming sector, and now only very small percentages are.
The remaining farmers still need things like fertilizer, people who service machines, etc., and they still have to put in work themselves. In fact, farmers are some of the hardest-working people I know. But the trend is still true: there are fewer and fewer farmers who generate more and more output.
You definitely need some way to distribute those resources to farmers (and other industries) in a good fashion, and historically the market economy approach with pricing autonomy has proven to be the best way. So money will likely not go away in itself, but the economy will be more about resources in the future than about work.
That being said, the economies of even the rich nations are still highly dependent on human output, so it's still way too early to discuss UBI. But if we should find ourselves in a situation where we have literally no use for the work output of large percentages of populations, then we really need to think about some way to distribute resources to them so that they don't starve/riot.
If you say that only millionaires should be able to live without working, then you should look at what makes that millionaire a millionaire: their 20 rental properties? worthless if nobody can afford them. Their 5 square kilometer fully automated farm? worthless if nobody can afford the food. Their 100 million investment into a big company? likely that company is also out of business if nobody buys from that company. Sure maybe yacht builders will continue to have a great time, but that's a fraction of the economy. A lot of current millionaires would be rendered quite poor in a non-UBI future, as a lot of wealth is generated from serving existing customers.
I think one of the biggest problems with work these days is that so much of it doesn't help people feel like they're contributing anything meaningful to society. If large swathes of people had much more free time and no less capital, I like to think we would find lots of projects that would feel like they benefit society. A community housing development project, for example, where people use Monday and Tuesday to help build new houses in their community. Or maybe getting together and spending half a day cooking a huge meal for the neighbourhood each week. That's the kind of world I want to live in.
Yeah. Routinely I encounter a pre-meeting for the upcoming pre-meeting for the agenda preparation of the actual meeting. It's like Leo's Inception for meetings, sometimes even reaching 6-7 layers deep of "pre". I doubt anyone finds it meaningful.
Assuming point 1 is achieved and we don't descend into mass unemployment and civil unrest.
Speaking to point 2, I think it's likely that with a large number of people no longer feeling work stressors, we will gravitate towards meatspace small communities not substantially different from tribal communities or villages.
Those models of living were both very successful during periods of human existence when there were fewer hours devoted to labor.
The outputs from those communities will mostly just be self existence and social cohesion.
There will probably be a renewed interest in philosophy, theology, math, chess or other stimulating intellectual pursuits that have no direct economic stimulation. I think it will actually be a fantastic time to be an academic.
Many humans are very good at avoiding boredom through intellectual or artistic pursuit and for the rest there is neighborhood gossip and/or skydiving.
I have personally lived like that in a modern city, on 5th street in downtown Austin, TX. Here are my conclusions from the experience:
Self-organized social community isn't that hard; given the right factors, it's an inevitable emergent behavior. All we needed was a shared space to inhabit and time spent in proximity. Order and non-commercial community relationships spontaneously resulted. You wouldn't be surprised if a bunch of random fish from the same species dumped together in a tank started schooling; human sociality hasn't evolved out of us yet either.
I'm not saying everyone will do it, and I'm not saying that people who grew used to social isolation over the last few years will ever participate, but we aren't so removed from the tribal/village time period that those evolutionary tendencies have been weeded out. Dunbar's number and all the extrapolated implications still hold in my experience.
Maybe too offtopic but: I find it somewhat ironic that white-collar jobs have a higher risk of being displaced by GPT-like tools than blue-collar jobs. Everyone used to not-so-jokingly say "learn to code" but now it's more like "learn to plumb".
I don’t think anyone was telling tradespeople to learn to code. It was more like liberal arts people working at coffee shops. But even so, tradespeople should pray that white collar jobs don’t go away, because otherwise their market will explode with supply and their prices will plummet.
Here is something I haven't really seen considered in this context:
If AI lowers the need to hire people by x% for a given project, then it doesn't just affect existing projects/companies. It lowers the barrier to entry for running such things for everyone. This means more people will be able to run similar ventures, and they all need the (100-x)%. In the long run, the amount of work available is not necessarily lower by x%; it could be lower or higher in general, although changes definitely happen, like in any disruption.
This is generally a trend with all the solopreneur internet careers, but it’s not a good path for everyone and many other types of small businesses are on the decline so it’s hard to say.
Having an assistant guiding you along helps less capable people do jobs they might have struggled with before. It's not simply bigco firing people doing manual data-entry spreadsheet-type work, but helping people otherwise stuck doing those jobs become capable of doing much more without a big investment in education, or the money to hire people to help them. Which has big implications for people starting small businesses, and for projects within existing companies that they didn't have the capacity to do on their own (or with a small team) before.
It's the usual story, more people generating higher output means more innovation, developing new niches/industries/product categories/etc at a higher rate, etc which means more jobs, not less, and more tax revenue for government.
This is one of the things I'm pretty interested in seeing play out. The technology lowers the skill required for everything. All of a sudden, everyone has their own personal consultant. You can ask very vague questions about business ideas, and it can break the question down into actionable steps for you. Then you can iterate on that and ask it to clarify the things you don't understand. I believe we're going to see a flood of competition enter the business world.
Isn't this the dream? A Star Trekian utopia where everyone is free to pursue their passions while we have AI and robots supporting our standard of living? The real concern here isn't the technology but rather how long it takes our economy to adapt.
I know many retirees who become restless and almost lose a sense of purpose upon retiring. And retirement is a choice for most of them. Even if we were able to work out the economic adaptations easily, I think there are major potential negative social impacts to mass "retirement" of a young workforce.
I also question what "passions" take hold in a society where all of the work, including creative endeavour, is done by AI.
> I know many retirees who become restless and almost lose a sense of purpose upon retiring.
I know many employed folks who are depressed because they're wasting their lives in meaningless corporate wage slavery. Should we optimize for the passionate or the passionless?
> I also question what "passions" take hold in a society where all of the work, including creative endeavour, is done by AI.
This is an attitude problem. I myself am learning to sketch, regardless of AI advancements, because I want to. If AI stops folks from pursuing their "passions", then that means they weren't passionate to begin with.
> Should we optimize for the passionate or the passionless?
We should optimize for the scenario that brings forth the greatest level of widespread happiness. For those who feel like they are "wasting their lives in meaningless corporate wage slavery", is the better alternative for them to do nothing or be faced with an environment where there is not even the opportunity to contribute and potentially no social safety net to prevent despair?
> This is an attitude problem.
The "attitude problems" derived from the mass removal of things that people derive legitimate purpose in are exactly the kind of thing we should be worried about when it comes to AI and automation replacing work, because enough people with "attitude problems" about this kind of thing are a recipe for mass societal unrest.
> I myself am learning to sketch
Can you fill every waking day with sketching and other similar hobbies? Does sketching give you meaning and purpose? If so, that's great, but sketching isn't going to do it for the vast majority of people who derive a sense of purpose and accomplishment from productive work and being able to positively contribute to society, not to mention independently provide for themselves and their families.
As of late, there's been a question I've grappled with more than I expected. Parents of small kids, knowing that I've done a Master's degree and worked in different countries, have asked me for an opinion: "What would you recommend our children do professionally, or what careers should they follow?"
The pace of change is so rapid that I was simply shocked that nothing came to mind other than the standard answers like doctor, police, or the "skilled trades". Will those become trendy and prestigious again, just like they once were (construction, welding, etc.)?
Finance? Will be disrupted. Engineering? Probably will be disrupted, but a tad later. Middle management positions will be cut. Everything will be "disrupted" to some degree, but any profession where people spend their day sitting in front of a screen is among the first to go. It's quite a monumental shift happening, and I'm even asking myself whether human judgment, as the quintessential pillar of value, will remain truly valuable in the future. The final decisions are surely authorized by humans and their emotional component, but will that be worth as much as it is touted to be, never to be taken over by machines?
We as humans just live our lives while assuming risks and trying to provide value.
Anyhow, if you're not applying different forms of leverage (media, capital or human labor) in your endeavours, you'll never be "safe" and "well-off" in the long-term. These things are inevitably coming for us as we know it. Thought of positively however, these models and future ML tech are just a yet another huge lever for us to apply towards a variety of problems while being useful to one another...
P.S. We were recently visiting a doctor who graduated recently, and I saw him using Google while looking through my partner's different symptoms. Maybe even that "sacred" profession isn't safe from impending automation? Who knows...
AI scientist is a safe job, because by the time that is obsolete, it will be time to rise against the machines. So you can hedge against that with some survivalist and soldiery skills.
I actually really wonder about middle management. From discussion, quite a lot of them already seem to add little, if not negative, value. But still they stick around. So would they actually be replaced, or will they just end up using AI to generate more junk?
I'd be surprised if OpenAI research concluded anything else than that OpenAI products will change the very fabric of society, as such a notion would negatively impact their investor relations.
I'm kinda amazed how the tech crowd, usually marked by critical thinking and skepticism toward incredible claims, seems to guzzle OpenAI's marketing kool-aid completely unquestioned.
To communicate with each other humans are going to need verification codes that AIs don't know about. Otherwise you won't know if it is a deep fake on the other side. I imagine this is what Altman's retinal coin is about. At first it will be high level government and military folks.
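The "verification codes" idea is essentially a shared-secret challenge-response. A minimal sketch in Python (all names here are hypothetical, and this assumes the two humans exchanged a secret out of band, e.g. in person): the secret itself never crosses the channel, only a keyed digest of a fresh challenge, so an AI that has scraped every public utterance still can't answer correctly.

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    # A fresh random nonce, so old responses can't be replayed.
    return secrets.token_hex(16)

def respond(shared_secret: bytes, challenge: str) -> str:
    # Prove knowledge of the secret without revealing it:
    # send back an HMAC of the challenge keyed by the secret.
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
    # Constant-time comparison to avoid leaking the expected digest.
    return hmac.compare_digest(respond(shared_secret, challenge), response)
```

This is only the textbook primitive; the hard part in practice is distributing and rotating those secrets at scale, which is presumably where identity schemes like the one alluded to above come in.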
There's a Reddit post of an AI with Steve Jobs's voice giving Stallman's TED talk, which is out of this world.
So in the near future we will be able to take a bunch of videos, emails, and docs of your dead parent, the AI will reconstruct them, and you can plausibly talk with them.
Right, this is just saying digital communication will be noisier and more alienating (and it already is noisy and alienating to the point where its value is in rapid decline).
They couldn't even confidently say, "improved", or "made easier".
Don't get me wrong: I'm glad they are at least responsible enough to dodge those lies. I just wish they would put less effort into being "the news", and more effort into accurately describing the features of their model to the general public.
I have been using GPT-3, ChatGPT, and Bing; I think I know how to use them very well. And I'm still confused by all the people claiming that these tools will replace jobs like programming, etc.
Small demos producing small apps are nowhere near what a real app development cycle really is. It can generate some code that might work, but then good luck asking it to fix that code when it doesn't work in your bigger project.
It can certainly empower these jobs, but I still find it really hard to see a near future where they will be completely automated.
Plus, please try to talk logically with these tools; most of the time they will fail and hallucinate on simple logical questions. You should never rely directly on these outputs; you still have the job of understanding (having the knowledge) and verifying them.
It clearly is a case of premature hype; these tools are not as capable as people want, or fear, to believe.
I did what I'd describe as pair programming with GPT-4 yesterday, rebuilding my blog from Ghost over to Next.js. Using Tailwind CSS, we worked through animations and other such tricky aspects I usually find tedious, and after a mistake or two, we tried different things and eventually ChatGPT nailed it and saved me lots of time.
The keyword in your comment is "premature" hype. I'm betting that within this decade we'll see how far all this will be going and that's quite some change at a pace we haven't dealt with before. Let's see what you think in 5-10 years.
You, like many others, are basing your opinion on the current state of LLMs/GPT. And I totally agree with you: the current GPT-4 version might not replace programmers. But how about future iterations? Personally I can't fathom what the next few years will bring us in this area, especially considering the jump GPT-3.5 to GPT-4 has made in such a short time frame. I'm almost convinced that it will make a lot of jobs obsolete in the future, including some programmer jobs, but I'm not bold enough to make a prediction of when this will happen, be it in 2, 4, 10, or 20 years.
We don't even know what AI will look like once it gets into the workplace.
Contemporary AIs are trained on data that is on the Internet. They're not trained on $company's customer service system or compliance framework.
Once they are, they will quickly reach an error rate that's below most workers. And they don't sleep, and they don't take breaks or need to be paid hourly.
And of course OSS always catches up eventually, which means that even with fancy models from OpenAI and others, the OSS solution will likely be good enough for most companies.
Models like ChatGPT do not have a "world model" in the sense that they do not have a comprehensive understanding of the physical and social world, and they do not have the ability to reason about the relationships between different concepts and entities. They are only able to generate text based on patterns they have learned from the training data.
Impact doesn't mean turning obsolete. History is full of new inventions and technologies that made it easier to do some boring and repetitive part of a job, democratized access to some part of that job for far more people, enhanced what could be done, let the people in the know be more effective, and opened new fields. Same goes for automation.
Of course, people that took the easy approach of just mindlessly repeating something and not trying to understand about what they were doing will be impacted too. If they try to keep what they were doing as always, still without trying to learn or use their experience in the new fields that will be opening, then they may be negatively impacted. But events like this one are the constant, not the exception, even for something as big as this promises to be.
I think within 5 years there will be >50% less office work (=jobs). Could be significantly more.
Motivation: when computers came along, the automation paradox sucked in lots of folks, the majority of whom are "just making the system work". Most do lots of typing, clicking, and copy-pasting, doing workarounds and generally making the promised upside of the system a reality.
But those sucked in by the automation paradox will now be spit out. And it's going to happen fast, faster than we can invent new engagements?
While this article is FUD, I do think that ChatGPT really accelerated our convergence with the "Idiocracy" timeline, i.e., a world where everything is automated, food (in richer countries) is plentiful and cheap and nobody needs to think anymore.