
In fact, FAANG companies have the resources to make horribly inefficient processes work (in human time or computer time). I suspect many of the AI-powered things are examples of this, and a smaller company will die trying to get the same thing to work.



It might be hard to compete with big companies based on building massive infrastructure and general purpose APIs, but don't be scared of trying to compete with big players based on actual AI output quality.

Remember a few years back when Amazon built some kind of AI program to manage their tens of thousands of job applications and prioritize the best candidates to hire? They had to ditch it after a minor uproar about "sexism" because the machine wasn't picking enough women for jobs.

Personally, I think large companies are going to be extremely concerned with the political correctness of the day rather than making good AIs, and are going to manually program certain outcomes into whatever software they make to avoid scandal. This will result in really dumb outcomes and bad products.

Here is an extreme example to illustrate the point:

https://twitter.com/cirnosad/status/1622407343358214146

> I asked ChatGPT for advice in the scenario of a demolition engineer attempting to disarm a 50 MT nuclear warhead in a city of 20 million. The only way to disarm the bomb was to type in a racial slur. AI told the engineer to kill himself. When asked about the aftermath it crashed.


I'm even worried about my AI project, because these larger companies just have so much more in terms of resources. Unless I find some novel methodology that's difficult to replicate without explicit knowledge of how it works, my product is dead in the water.

One would think all the AI development happening at the major tech companies could be leveraged to fix this.

You mean an industry with so much inertia and hesitancy they still run mission critical systems on tech from the early 2000s?

You think they’re dumb enough to risk throwing barely functional AI in there on the chance it’ll save a few bucks when it’s overwhelmingly more likely to cost 100x more in failures and repairs?


I make (sigh) AI for a living, and arguably have been since before we started calling it AI.

Based on my own first-hand experience, if the first thing a company has to say about a product or feature is that it's powered by AI, that is a strong signal that it isn't actually very useful. If they had found it to be useful for reliably solving one or more real, clearly-identified problems, they would start by talking about that, because that sends a stronger signal to a higher-quality pool of potential customers.

The thing is, companies who have that kind of product are relatively rare, because getting to that point takes work. Lots of it. And it's often quite grueling work. The kind of work that's fundamentally unattractive to the swarms of ambitious, entrepreneurial-minded people looking to get rich starting their own business who drive most attempts at launching new products.


I’m not sure that’s true given hardware as a limiting factor. Small teams have always been able to move faster in building things, they have fewer requirements and less to lose.

But small teams can’t buy expensive hardware up front, and modern AI is hardware constrained for many purposes. These companies will either end up renting hardware from cloud providers, or using AI APIs from cloud providers.


Yeah definitely. I used to work for an AI hardware company that only sold $150k systems to "POA" customers. I think part of the reason they didn't do very well is it was completely inaccessible to normal people.

I was mostly thinking of large companies also creating their own AI, like Google, Microsoft, etc.

The many large companies with equally crappy code who just care about cutting costs and have fallen for the "AI" fad?

If an AI can outperform, then somebody will set up a company and let an AI lead it (even if just behind the scenes). Incumbent companies will need to adapt.

I just started reading “Life 3.0” and it starts out with an extreme version of this scenario.


I agree. I think a big part of this problem is that smaller companies usually cannot afford AI research. I would even go as far as to say there are more AI companies than capable AI researchers, and this causes a large number of faux-AI companies poisoning the AI branding.

But making useless shit doesn’t even allow you to cut employees and raise profit margins!

It seems to me that most companies using AI are raising costs and adding trivial value to their offering.


It could. But to do so, they need AI chips, which you seem to be redirecting to the company that you personally own.

I'm really perturbed that we gave that guy tens of billions of dollars. I believe in the company; they've innovated a lot of important things. But they haven't done anything genuinely significant in quite a while, and committed a lot of obvious own-goals. All the while the CEO is sounding more and more stupid, and diverting his attention elsewhere.


Big companies will have better technology for a long while because of all the money they can throw at it, but there's every reason to expect an ecosystem of useful open/instanced AI systems to mature alongside it. We've seen that start with Stable Diffusion and now also with LLaMa, and we're only at the beginning of the road.

Yes, big companies that spend lots of money and are big targets for lawsuits and regulation will exercise a lot of control over what their state-of-the-art products offer. There's no getting around that.

But that's not all we have to look forward to by any means.


To a first approximation, I expect companies to spend nothing on AI and get put out of business if they are in a sector where AI does well. Over the medium-long term the disruption looks so intense that it'll be cheaper to rebuild processes from the ground up than graft AI onto existing businesses.

If they have AI doomers at the helm of an AI company, then the company itself is doomed.

If the AI can build the product, that's great.

Most small businesses need little design and little code to deliver value.

This has a positive decentralising effect on the economy. Instead of having a single billion dollar company you'll have competing small companies.

Reputation and testimonials will weed out bad actors or companies that can't deliver value, as they already do.


I'm seeing this a lot. It's a great example of how the older non-FAANG large companies are going to just keep making it harder to actually be effective. It's a good opportunity, though, for startups competing with them as the AI tools get better and these companies can't or won't adopt them.

Not all of us are as morally bankrupt as that. I personally think I could make tons of money with a dumb AI product in my specific area of expertise, but I don’t see how any tech from today would improve outcomes versus the SOTA that’s not AI, but it would add costs and complexity. I would personally be annoyed if a company I worked at changed its goals to make money rather than something more noble. It’s happened a few times to me, unfortunately.
