You must be using an unusual definition of automated. I'd be willing to bet $10 against your 3-year estimate if you can define it more precisely. I doubt it will happen before language models achieve human-level perplexity, which I'd estimate around 2038.
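For reference, perplexity is just the exponential of the average per-token cross-entropy, so "human-level perplexity" means matching the average loss a human would assign to next tokens. Rough sketch (the loss value here is made up for illustration, not a real benchmark number):

```python
import math

# Perplexity is exp() of the average per-token cross-entropy (in nats).
# Lower is better; a perfect predictor would have perplexity 1.
def perplexity(avg_nll: float) -> float:
    return math.exp(avg_nll)

# e.g. a hypothetical model averaging 3.0 nats/token of loss:
print(perplexity(3.0))  # exp(3.0) ≈ 20.1
```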
Also can graphics design jobs really be automated by an AI in 2022? Cause I’d sure love to stop begging people to make icons for me lol
I think the kicker will be when natural-language AI becomes good enough that a non-technical person can state their intent and the computer generates a working interpretation. Then that person iterates.
An AI being able to create arbitrary computer programs and work productively in an arbitrary codebase sounds basically like an AGI or something approximating it. Not sure if that's what you're implying is a few years away. I do think it's inevitable, but the time horizon still doesn't seem clear to me at all
That's assuming AI stays as it is forever.
In 3-5 years, every job that can be done on a computer will be done better by an AI designed for that job. The only issue will be adoption, not the tech itself.
It's not going to happen anytime soon. These language models take several bleeding-edge GPUs to run at reasonable performance. Vision at the level of detail humans perceive is another dimension of complexity. There have been groundbreaking advances in machine vision over the past 20 years, yet we still cannot build a robot that folds clothes efficiently [0]. Arguably this is not only because of vision but also dexterity, but both are currently far short of what a human can perform.
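To put rough numbers on the GPU point, a back-of-envelope estimate (fp16 weights only, ignoring activations and KV cache, so a lower bound):

```python
# Minimum VRAM just to hold a model's weights, assuming fp16 (2 bytes/param).
# Real serving needs more (activations, KV cache, batching overhead).
def weights_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 = GB

print(weights_gb(175))  # a GPT-3-sized model: 350 GB of weights alone
```

With 80 GB per top-end accelerator, that's already a multi-GPU cluster before you serve a single request.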
But if it happens, legislators will likely regulate AI. Most Western governments already run job-procurement programs, effectively employing people directly. Which is a good thing, but HN doesn't like to hear that. The reality is that most people need a job to feel fulfilled and be a healthy member of society. That could change, but it's not going to happen until this technology falls into the hands of the common people instead of being controlled by large corporations.
The language models have been around for many years now; if they were going to become superhuman quickly, they would already be there. We are already on the right side of the S-curve: many big companies have made similar language models, and they all end up at similar performance levels with respect to coding, language, answering questions, etc. We will see improvements, but they will be very incremental. GPT-3 to ChatGPT was barely noticeable and took years. Extrapolating that, we will get language models that can be moderately helpful to a human professional, but they won't be good enough to automate much at all.
For the AI hype to pay off, all this buzz and activity would need to produce another revolution on the scale of transformers. Without one, it will go the same way as self-driving cars: barely good enough to do stuff, but with tons of work and checks it might lead to some automation at scale that could threaten jobs in a decade.
How long is that, though? Decades? More? How do you train a neural net to take a vague description of a desired outcome and produce results that fit the requirements? We are nowhere close to the general AI that would be required.
I certainly believe that some classes of automation problems will be solved by machine learning, but I have a very hard time believing all programming tasks are going to be replaced any time soon.
That's like the next decade, not century. AI will render most of those tasks obsolete, to the point where 'computing' will hardly be a recognizable industry by 2122.
I think the time-frames of when this will happen are debatable, but not the general arc of events.
Machines and AI continue to get incrementally better. There is no reason to think that this incremental progress will magically stop at some threshold which leaves a large swath of employment for humans to perform.
For computer work, I think there will be two categories: work with localized complexity (e.g., draw an image of a horse with a crayon) and work with unbounded complexity (adding a button to VAT accounting after several meetings and reading up on accounting rules).
For the first category, Dall-E 2 and Codex are promising but not there yet. It's not clear how long it'll take them to reach the point where you no longer need people. I'm guessing 2-4 years but the last bits can be the hardest.
As for the second category, we are not there yet. Self-driving cars/planes and lots of other automation will be here and mature well before an AI can read and communicate through emails, understand project scope, and then execute. Lots of harmonization will also have to take place in the information we exchange: emails, docs, chats, code, etc. That is, unless the AI is able to open a browser and navigate to an address itself.
I wonder how it will be able to do that for the tech that will be current in 10 years, if nearly everyone is using AI by then instead of asking on Stack Overflow.
Likely true now, but can you be certain in 3 to 10 years? How much does language make up the fabric of what we are? How we work, how we understand the world? Now imagine giving language models vision and audio and a body... and the ability to actually retain new knowledge (the core thing I think modern AI models lack).
One day we will have general human-equivalent AI, at least for the vast majority of work tasks. Sure, but that's as much a premise for a science fiction story as a prediction about the future.
We are absolutely nowhere near even close to beginning to know how to even start building such a thing. Chat bots, language models and image generators are fun tools that look amazing to people who don't understand how they work, but they're extremely rudimentary compared to real intelligence.
I'll make a counter-prediction. All the low-hanging fruit in language model development has been picked. Like all technologies, there's a steep part of the S-curve of development, and that's where we are now, but you can't extrapolate that to infinity. We'll soon hit the top of the curve and it will level off, and the inherent limitations of these systems will become a severe obstacle to further major advances. They will become powerful, useful tools that may even be transformative in some activities, but they won't turn out to be a significant step towards general AI. An important step maybe, but not a tipping point.
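To illustrate the S-curve point with a toy logistic function (parameters arbitrary, purely illustrative): growth on the steep part looks exponential locally, but the same time step near the plateau yields almost nothing.

```python
import math

# Logistic (S-curve): saturates at carrying capacity L.
# k sets steepness, t0 the midpoint. All values here are illustrative.
def logistic(t: float, L: float = 1.0, k: float = 1.0, t0: float = 0.0) -> float:
    return L / (1.0 + math.exp(-k * (t - t0)))

# Same-sized time step, very different gains:
print(logistic(1) - logistic(0))  # large gain on the steep part (~0.23)
print(logistic(6) - logistic(5))  # tiny gain near the plateau (~0.004)
```

Extrapolating from the steep part is exactly the mistake the comment warns about.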
I think you'll be right in the near future, but not today. I don't think we are quite there yet with AI, but the pace of improvement is breathtaking, and producing code (and more generally "logic", which could be code but also machine code, system design, LLM prompts, and so on) is such a pot of gold that the AI companies will go for it.