
>It’s as-if AI is exclusively made by Mormons.

A weird mixture of degenerate, unconstrained crony capitalism/VCs and purity-spiraling techbros. No small irony that this timeline is the one where Occupy Wall Street was distracted and destroyed by injecting any possible controversy they could into it.

Don't think about class and money, think about anything else. It's ok to centralize technology, capital and power in the hands of a few firms on the west coast of America, as long as those trolls on the internet are thwarted

I just pray the EU doesn't fall for this garbage.




> why would we insert this not-even-stupid noise into the information space?

Because US tech would "collapse" otherwise.

Big Tech's stock price is entirely propped up by AI hype. Metaverse didn't go anywhere so they need a new bubble.

US tech startups are even worse. The VC culture is completely incapable of building useful companies, and AI is the latest hype. Just a rat race to grab the last funding and get bought out before the end of ZIRP kills the startup scene entirely.

There's no real demand for any of this shit. Even the web spam renaissance is mostly hustle-bros losing money. The writing is already on the wall with investment slowing down, and AI product after AI product bombing.


> seems seriously threatened going forward in the AI and super-internet-company age

The real threat is AI companies stealing people’s products and reselling them.

It’s as if they stole the furniture the Luddites made, dismantled it, reassembled it, and then sold it for a profit.

A real shame that AI has been taken over by sociopaths. I suppose punishing one or two will send the message.


>> Silicon Valley is overrun by techno-utopians.

That's what it sounds like to me too. I find it particularly hard to reconcile claims of liberalism, or even libertarianism, with an industry led by Google, Amazon and Facebook, companies that basically make money by running roughshod over their users' privacy.


> I don't agree with the author: technology such as A.I. as a means for exerting illiberal control on citizens is not inherently a feature of capitalism.

I think you're technically correct here, but it is a feature of capitalism that only a small number of wealthy elites control this technology, especially given the incredibly large costs involved in training and maintaining AI models.

So we're largely at the mercy of these people and the corporations they control as to whether they choose to implement it in an illiberal manner. That is, we're essentially relying on our technological overlords being of the 'benevolent dictator' type, regardless of the profit incentives presented to them.

Personally, I don't have much hope in this regard, given what we've seen already.


> most people care about $ not truth

Yeah, computer technology is exactly like this today. What's it about? Consumerism. Engagement. Advertising. Surveillance. Corporate bottom lines. Government control.

It's such an incredible waste. The potential of computers used to be limitless. They empowered people. They once threatened monopolies, governments. To see them reduced to the state of appliances serving the very same elites they were supposed to free us from is just sad.

> Science and computers should be accessible to everybody, not only DaVinci's and elite programmers, and that's the way it will go.

That's okay. I just wish things were different.

Science fiction predicted the creation of AI. They were human-like. They were our friends. We could trust them. The dystopian cyberpunk hell we're heading towards has AI as a tool of corporations and the state, used to control people, exploit them. It has masters but they're not us. The AI snitches on us, reports our wrongthink.

> I hope you can find joy in life elsewhere in your mind/activity.

Thanks. I hope so too. I've found meaning in other activities but nothing matches the godlike feeling I used to get from programming.


>We cannot trust every random with a laptop with potentially civilization-threatening technology.

An IBM-in-the-80s moment. This is a mind-boggling POV. Instead of letting people have better, more democratized access to the same tech, you want to lock it up behind bars so only a select few have access to it.

>So who should control AI, the government or corporations?

The FUCKING PEOPLE.

Yeah man, only let govts and corpos have access to this amazing tech. They've TOTALLY always worked out in favor of the people. They always want what's best for the people, and it's not like both govts and corporations want to keep increasing their own power at the cost of their citizens.

Ridiculous take. But then again, most pro-govt and pro-corp takes are.


>All of this has happened before, many times over.

Sounds good on paper, but the nature of modern day life is far different than it was when other revolutions occurred.

Yes, revolutions in tech are just that - they up-end the old way of life, the dust settles and a new way of living emerges.

This is so basically obvious that it's not worth mentioning.

What is worth not just mentioning but truly paying attention to (setting aside Thomas' analogy to a neighboring McDonald's, which is an insanely lazy look at this matter, especially coming from a security expert) is that the level of "entanglement" of our daily lives with massive core-AI systems, with capabilities like those of GPT-'N', cannot be overstated.

When cars overtook horses, we could weaponize cars by putting guns on them and calling them tanks. But once you put a gun on a car and call it a tank, it can only shoot at one target at a time.

With AI, the substrate through which you interface with your information relationship to the rest of global reality can be easily weaponized, not just against you, but against anyone and anything interfacing with it. So it's important to know who, or at least what incentives, truly own the lens through which one interfaces with whatever information that system provides.

SO, aside from 'doom-and-gloom', it's really important to understand as much as possible. Rather than throwing up my hands like Thomas (which, again, is bizarre from a security guy: just saying he doesn't care who runs the black box), I want to know what that system is (OpenAI or whatever dominant AI becomes), who made it so, and what agency I still retain in my digital life.

--

@Unity:

Entanglement of big business and government in our lives always was and always will be a fact. What big-core-AI provides to both business and governments (between which there is a vanishingly thin membrane these days) is a complete solvent for any friction in that entanglement.

AI's capability for extracting individualized insights AT SCALE is what it provides to both, with zero recourse for the individuals. That is the "alignment" problem that Aza and Tristan are pushing back against.

Also, I do not "worship" or "idolize" SA, and especially not Thiel; I am terrified of them. That's why I said that they may attempt to appear as a Carnegie but are actually a Rockefeller (Carnegie attempted to wash his reputation with endowments; Rockefeller was just an evil, oil-carpet-bagging C*Sucker and an evil person: he is the reason we have "fossil fuels" as manufactured scarcity for profit).

I have no idea why you think I idolize any of this. I am fully, and even further, into the Raskin-Harris camp... alignment regulation is going to get trampled, and to be honest, I have serious questions about SA's motives...

I'd bet he's already got a team working on the plans for his bunker/lair, and taking notes from the Zucks and Thiels on best design resources.


> But these commons are now being overgrazed by rapacious tech companies

That's only part of it. These AIs are usable by individuals too, which means any casual scammer can use them to poison the well for a nickel.

The only thing that can save us is critical thinking and institutional trust... hence we are probably doomed.


> These companies are not trying to take something from you but are in fact trying to give something to you.

That's incredibly shortsighted.

These companies are only concerned with hypergrowth and pleasing their investors, and have zero regard for the user, their privacy, and the harms their products unleash on the public. The benefits they offer the user are tangential to their goals. This has been the MO of every tech corporation for the past two decades, and it has by all means grown out of control. The oncoming AI disruption will exacerbate this even further.

The tech industry needs much more regulation, not less. If anything, the EU is being conservative in the scope and lagging behind on the timing of these laws.


> Main thing I wanted to say is that we all love to shit on how Silicon Valley has basically gotten rich off investing in websites and SaaS products over the past ~15 years

500 years ago, these people would've chased Jewish financiers out of strongly Catholic areas, or shit on Dutch merchants getting rich in the spice trade.

When it comes to human nature, things don't really change.


>Reminder to Silicon Valley: there’s an even chance that many governments in the world will be run by authoritarian movements in the future. Your window to deploy surveillance-resistant systems is running out.

Tech companies don't have a great human rights history when there is money to be made.

https://en.wikipedia.org/wiki/IBM_and_the_Holocaust


> What I learned so far is that Silicon Valley doesn’t actually believe in anything. Ideas get adopted if they’re profitable. That’s it. There’s no morality involved in this machine.

I’d argue that is more of a characteristic of Capitalism at this point.


> It is the fear that a greater portion of the value of this technology will go to the stockholders of said companies rather than potentially be shared among a larger percentage of society. Not that I had that much faith in OpenAI, but in general the shift from non-profit to for-profit is a win for the few over the many.

You know what is an even bigger temptation to people than money - power. And being a high priest for some “god” controlling access from the unwashed masses who might use it for “bad” is a really heady dose of power.

This safety argument was used to justify monarchy, illiteracy, religious coercion.

There is a much greater chance of AI getting locked away from normal people by a non-profit on a power trip, rather than by a corporation looking to maximize profit.


> There has never in all of history been totalitarianism like what is possible now. We are building the technical infrastructure for our own total enslavement.

This is one of the primary reasons that I'm planning an exit from software development. I see most of what we do as a useless waste, and at worst we're building our own enslavement. I'm fortunate to be working on something harmlessly entertaining, but I want to exit altogether and start a business that will actually help people.

The unbounded optimism of technologists is pretty disturbing. We can see for ourselves how tyrannical those at the top of our industry can get, yet we are easily shackled into helping them because of the paycheck and the prestige. It should become less socially acceptable for us to assist tech zealots who want to inject data warehousing technology and AI into every single problem and facet of modern life.


> This will be the greatest act of Intellectual Property theft in history.

Good.

Intellectual Property is a mistake. If AI brings about its end, I welcome it.


> wouldn’t that be a good thing?

Only if you like technofeudalism—it’s not like you’re going to own any piece of that future.

Have you noticed AI becoming more and more open source like it still was at the start of the year, or has that kinda seized up? What gives?

It’s called a moat, it’s being dug, you’re on the wrong side of it.


> I don't think any of the big tech leadership actually believe in that.

I think Altman actually believes that, but I'm not sure about any of the others.

Musk seems to flitter between extremes, "summoning the demon" isn't really compatible with suing OpenAI for failing to publish Lemegeton Clavicula Samaltmanis*.

> I also think that deep down a lot of the AI safety crowd love solving hard problems and stock options.

Probably at least one of these for any given person.

But that's why capitalism was ever a thing: money does motivate people.

* https://en.wikipedia.org/wiki/The_Lesser_Key_of_Solomon


>Anybody can be 'possessed' by an idea, but it is clear that Silicon Valley has too many smart and well educated people wasting their time on trifles, their choices significantly retard what ought to be natural advantages.

Look, I like criticizing Silicon Valley for its ad-dollar silliness as much as anyone else, but that's fundamentally down to what VCs and angel investors are willing to fund. Everyone would work on Mars colonization, self-driving cars, and artificial intelligence if they could. Nobody pays you for that, though.


> The problem has generally been threefold:

> 1. Tech is dominated by “true believers” and those who tag along to make money.

> 2. Politicians seem to be forever gullible to the promises of tech.

> 3. Management loves promises of automation and profitable layoffs.

I would like to add another:

- While cheap, low-interest money has dried up, VCs are still looking for problems that require considerable capital for hardware (rather than skilled labour), in business areas that promise some kind of amazing scalability.

With crypto gone, AI is filling the gap. And Nvidia is selling the shovels.

