I’m taking some time away from Comma (geohot.github.io)
381 points by ra7 | 2022-10-31 10:05:18 | 437 comments




His writing is even more self-aggrandizing than Stephen Wolfram's!

That's no easy feat

Nice to know, in this dark age, there are still true Heroes walking among us.

When it's your actual track record it's not bragging. It's just facts.

edit: apparently this is a hot take to some, but no real substantive refuting of what's being said- only appeals to greater authority. "Well so and so was smarter"


Geohot's actual track record isn't that impressive though; he's just one exploit dev with an OK portfolio.

>edit: apparently this is a hot take to some, but no real substantive refuting of what's being said- only appeals to greater authority. "Well so and so was smarter"

Do you happen to have substantive evidence to back up the original claim?


It's always fun to see anonymous users saying someone else isn't "that impressive though". He managed to breach the security of three different platforms (iOS, Android and PlayStation) across multiple devices, which feels like a lot more than most security researchers manage to do in their lifetime, while geohot is only about 35 by now.

Or do you have some good examples showing that the average security researcher has a better track record than that?


I knew a few security people at Google and frankly, geohot isn't even in their class. He's a smart guy. Did some stuff. Isn't particularly special or significantly better than a typical engineer in my experience. Mainly he seems to jump deep into things he doesn't know much about (I'm talking about post-security) and say a bunch of irresponsible things, then fail to deliver a complete product, entirely predictably.

He's the sort of person you want to ensconce in a safe place far from others, with the freedom to experiment, to come up with disruptive ideas to influence others.


> I knew a few security people at Google and frankly, geohot isn't even in their class

Who, exactly, are you talking about here? And what are their track records?


> I knew a few security people at Google and frankly, geohot isn't even in their class.

This is an exceptional claim, who might these people be and what initiatives do they lead?


The first one I'm thinking of is Michal Zalewski, who wrote AFL and is now the VP of Security Engineering at Snap. Also Lea Kissner, who implemented security for stubby and now runs Info Security for Twitter (one of the few remaining engineering departments Musk wants to keep). But there were a bunch of folks, most of whom you've never heard of, because they were quietly solving problems behind the scenes. Damien Menscher comes to mind - he was core to Google's DoS blocking, which had a massive impact on users for the last decade.

> there were a bunch of folks, most of whom you've never heard of, because they were quietly solving problems

I figured you'd say this. Just had to see you type the words out. Don't be angry when those you seek to refute don't see you worth arguing with when this is the type of rhetoric you have.


Sounds like you're trolling? OK. If you're operating in good faith, I honestly don't see your point about rhetoric. His performance is not atypical in Silicon Valley, a place where I have been an engineer and academic for 3 decades. That's all.

This is a lazy troll. Geohot is famous because he used to loudly brag about his achievements.

The people making millions selling exploits to the likes of Vupen/Zerodium have very strong incentive to keep their mouths shut.


> Or, you have some good examples that the average security researcher have a better track record than that?

Geohot can simultaneously be significantly above average, and not anywhere near as special as some people on HN like to represent him.

He certainly doesn’t have anywhere near the track record to not seem downright unhinged in the context of this blog post.


Getting a cease and desist from Sony for finding the master key is pretty impressive.

Except he also jailbroke the first iPhone. Doing it twice is beyond impressive and proves he's the real deal.


He's a smart, talented guy, but it takes a very different set of skills to break other peoples' stuff than it does to create it in the first place. George never had much patience for the organization of people -- he's got something of a superiority complex, or at least he believes other people are almost always focused on the wrong things. This bias is true, to a degree, but it isn't perfectly true, and George was always destined to run with those who actually can and do give him a run for his money, and to run up against the limitations of his technical abilities.

Hotz has always come across as very arrogant to me. Claims everyone else is doing things the wrong way, claims he can do it himself in 24h, then never delivers on his words.


I just want to point out that Hotz delivered when he was doing hacking related work.

He's very talented, I don't want people to think he's a scam artist.


He's talented and arrogant and promises more than he can deliver.

So what?

Lately it seems like there are so many accusations of over-promising, but no one explains why this is harmful. Misleading investors? Doesn’t HN hate big money anyway?

In so many cases it seems like someone promises 5x and delivers 3x… I’ll take it, over 0. Then I’ll remember and apply an adjustment factor to future promises.


I agree he does come off as arrogant at times. There are plenty of times when he's right, though. Those times are pretty fun to read about: https://twitter.com/jinglejamop/status/1310718738417811459

I want every last thing on this website related to cryptocurrency to be annotated as such.

everything about it is just a supreme waste of time and energy. including my own.

everyone is a genius at something and the easiest way to negate any positives that you could possibly provide is to be an arrogant prick, and geohot is exactly that.


While I agree on crypto in general, and on arrogant pricks in general, it can be extremely satisfying to have an arrogant prick who is right and on your side.

An interesting anecdote about a deeply technical problem, with a clever solution, is instantly critiqued, because of an aversion to the topic, and the perceived arrogance of one of the persons involved.

This level of cynicism and irony, especially on hackernews, is hard to top.


This is a weird comment to me; the guy is clearly exceptional. He's written security exploits that have been seen across the industry and in publications, and he has touched various spaces across engineering. His insights on self-driving in particular were just far enough ahead of their time that now Tesla Engineering is doing what he claimed was the only practical way to deal with SAE level 2 driving: without radar.

Can't he be arrogant and exceptional at the same time?

In his last talk with Lex Fridman he specifically predicted that Tesla would switch to vision only.

I mean, he delivered pretty good assisted-driving software (and hardware) for cars, and without a company that raised $100 billion to do it. That seems pretty impressive to me.

In some of these comparative rankings, a device that runs open-source software on cheap hardware beats most of the driver-assist systems that companies put in their cars. All of those systems probably had 10-100x more developer time put into them, and they work on fewer cars.


Hotz's story is our story. He is the embodiment of the top comment of every HN thread in existence. "Of course the author doesn't know a damn about what they're writing about. They're wrong. And no I won't deliver a better solution," is Hotz in a nutshell. To be critical of him is to be critical of us. It's great if we can do that, but I'm not sure we can. There's a certain essence of digital rhetoric that runs through our veins and we wouldn't be the same without it.

Well put, except occasionally he actually did put his time where his mouth is when he was interested enough in the problem.

> There's a certain essence of digital rhetoric that runs through our veins and we wouldn't be the same without it.

Both in the case of HN and with Hotz, I'm 50/50 on whether this is even a bug. It's hard to do anything interesting if you're not both highly opinionated and highly critical about the current way of doing things.

That said, I think some of us are more self-aware of this mentality than others. In my case I know that self-awareness has emerged slowly from being wrong a lot, which I suppose isn't the case with someone as successful as Hotz. Maybe he's never needed to self-reflect in the same way some of us have, or need to. I know I was a lot like him when I was younger anyway.


I think one can be opinionated and critical without being arrogant. I do not know who the author is, but reading the post, I'm getting huge Main Character Syndrome[1] vibes. I mean, calling it "The Hero's Journey"? Seriously? Sure, buddy. The world is a story and you're the protagonist hero, venturing forth among a world of NPCs. I don't care how much of a literal genius someone is: that kind of life attitude is pretty off-putting.

1: https://www.psychologytoday.com/us/blog/digital-world-real-w...


One can have nerve without being arrogant.

One can have confidence and still be humble enough to be open to being wrong.

One can be idealistic without being cynical.

And you, and the people around you, will be happier and more effective because of it.

Some lessons I wish I had learned earlier in my career.


I do think we'd all benefit from second-guessing that arrogant part of us before we speak.

Even if that is true, people don't point at such a top comment and say "here is an expert".

Nah. But I do recognize his story. I've worked with many people who have the same story, not all of them software engineers. They trade on the fact that it only takes 20 percent of the time to get the first 80 percent. So they do the 80 percent, and people are impressed. Then they bail. It is pure genius, because you leave on a high note - knowing that the last 20% is going to be difficult to impossible to deliver on. It is where all the problems built up while building the easy 80% get solved. That is the boat.

> Nah.

how ironic


Yes it is the perfect post, no one can disagree with it!

> He is the embodiment of the top comment of every HN thread in existence.

Occasionally, but I think HN comments are usually more cynical than arrogant. More often, it's pointing out flaws without saying, "I could do better."

I do think that we, as a community, would be better off if we approached those flaws with more open-ended curiosity rather than mere dismissal.


He's a 10x PoC engineer.


Hotz - long time reader here. Good luck with Tiny Corporation. Don’t stop writing these blog posts they are fantastic.

Thank you @ra7 for posting this.


OT: Sorry to hear what happened to you at Twitter (:

There is no room for innovators in the autonomous driving game. All the startups predicated on fast innovation will be shaken out. Most already have been. Autonomous driving is about delivering quality software. If you're not a software company with a solid platform, you aren't in the game.

As this blog post illustrates, the whole comma.ai thing was pitched based on vibes anyway. There never was a path to viability.


> the whole comma.ai thing was pitched based on vibes anyway

How else do you convince people to take risks?

With hard evidence of traction? Then what the fuck do you need money for?


agree in a sense - autonomy could take a generation to deliver, and it isn't clear how a small "traditional" startup can really participate

and everything in tech attracts wannabes and grifters making a cashgrab


What definition of 'viability' are you using? As in, the company is not viable?

Damn this is very disappointing :/

One of my favorite tech articles of all time involved this guy. He was showing off the self driving capabilities of his framework and took a reporter out on a drive. Everything went well and while they were wrapping up the interview the reporter said "Well I bet you're just driving around all the time hands free, it must be amazing" and Hotz says "Oh well I just got it working this morning".

Classic. I love it. My kind of engineer.


That kind of attitude is great for editor or compiler or game development.

It is absolutely the wrong attitude for self-driving cars or anything that is safety critical.


To be fair, he was recently in the news saying self-driving cars are a scam. https://cleantechnica.com/2022/10/09/george-hotz-autonomous-...

Self-driving car companies, not the cars, are the scam. And he has been saying the same thing for years.

"I am a charlatan and therefore this whole industry is a scam" does not have great logic to it.

"I couldn't get this working so nobody else can either". Classic gifted kid response to failure tbh.

>autonomous cars are no closer to reality today than they were 5 years ago

Why do people keep printing things like this, which are objectively wrong? I have had a completely autonomous waymo come to my location, pick me up, and take me to another location.

That didn't exist five years ago, it does exist now. How is this not "closer" than it was 5 years ago when it literally exists now, and didn't exist then?


I think it's that the hard parts are still just as hard.

I don't really know the state of the art in self-driving. But it might very well be that it wasn't the technology that changed in these past 5 years but the regulation around it, as incremental changes made to regular cars led regulators to consider that the technology was, after all, safe enough for these use cases.

Not that the technology hasn't improved, but with these things there might be many factors involved that might answer the question "why we have this today and not 5 years ago".


because that's like arguing that because you can now climb your way up a tree which you couldn't do five years ago you're closer to climbing to the moon.

Self-driving is still incredibly limited, and progress is often overstated because people make headway on some tiny issue. A thing I always like to show people who think progress is rapid: this is Germany in the 1980s, where Ernst Dickmanns had autonomous cars drive thousands of miles: https://youtu.be/_HbVWm7wdmE


What?

No, it would be like if somebody said "some day we will travel to the moon", and then after the Apollo missions there were articles being published that said "we are no closer to traveling to the moon than we were 5 years ago".

https://www.youtube.com/watch?v=AHdKm0kW4l0

This is a video of a person riding in a fully self driving car.


That is not a fully self-driving car in the sense people mean when they say "fully self-driving car". Or are you saying that I could order that car and it would drive me to New York under any conditions that a human would drive through?

I wrote the below comment, then watched the video, then deleted the comment; I say that the hard part is the human-level understanding which doesn't exist. Watching the video, it's notably driving in clear, bright, flat, low-traffic, wide-road, few-people, few-parked-cars, little-going-on, ideal conditions. But why hold to my position in the comment below, if that clearly is a self-driving car, just because it isn't climbing a slippery hill at dusk past people double-parked outside a nightclub with drunk people stumbling around? If it can get to useful amounts of humanless driving in real-world conditions which were not custom-made for it, that has to count for something.

----

Eliezer Yudkowsky is fond of shitting on the AI developers of the 1960s for thinking they could write `APPLE` in the source code of a symbolic language and that that made them weeks away from a human intelligence which could reason about apples, and how simplistic that looks now.

Like, YouTube auto-transcribed subtitles are useful, but they are obviously transcribing sounds without understanding: they lack understanding of where the context indicates that a spoken thing should be a name; or they will transcribe the same word two different ways in two different sentences with no understanding that it was the same object as before being referred to again; or where a sound is unclear I can fill in what was intended but the auto transcriber can't; or I can see from lip movement that the transcription was wrong, but the audio processor can't integrate multiple inputs in that way; and they will transcribe sentences which are grammatically correct but which human background knowledge of the world tells you make no sense.

It's similar with self-driving cars: it's pretty clear from the outside that you can't have a car which can reason about the state of a city, its roads, the things in the roads, the environmental conditions, without having a large amount of interconnected human-level background understanding of the world and the things in it. E.g. not just seeing a shape and identifying it as a cyclist, but knowing that you passed a cyclist a few seconds ago and, now that you are slowing down for traffic lights, the cyclist will be coming back alongside you momentarily. Not just identifying a parked car, but seeing a car stop moving and turn its lights off as it parks implies the doors are about to open. Not just seeing lane markers in the road, but seeing no lane markers and being able to complete the pattern of where the lane markers should be because you understand how humans design roads. Not just seeing rain and slowing down, but the hinkiness feeling of "these conditions are dangerous" from the way other cars are driving, the road conditions, and slowing down in advance of anything objectively happening because you predict what could happen. Not just seeing a sign saying 'Diversion' but being able to look around expecting to see the next diversion route sign either down this turning or up ahead by another turning, and using that extra information to decide what to do. Not just identifying an erratically moving vehicle when you see it, but hearing a siren and seeing a flash of blue in the mirror and thinking ahead that an ambulance is coming, and then looking for places to pull over to let it past and expecting the cars around you might move like that as well. Not just seeing the car in front slowing down, but seeing the driver inside it move and understanding that they were waving you past because they are double-parking to drop someone off or pick someone up, instead of slowing down because of traffic. And countless other situations.

Humans have good reaction time when it comes to touching something hot and pulling our hands away before we understand and are aware of what happened. Sensor equipped cars have good reaction time when it comes to ultrasound sensing a thing up ahead and applying the brakes without understanding what's happening. Humans have bad reaction times when driving because we can't feel the thing in the road, it has to go through our slower higher level thinking to understand what's happening before we can choose to respond.

Self-driving cars, then, are either the pretense that you can put a human level AI on top of the car's unconscious reactions, without compromising the reaction time, to get a superhuman level driver. And that's not something you can do because human level AI doesn't exist. Or they are the unfounded claim that you can drive through humanspace without human understanding, which is about as convincing as saying you can send a machine to the butcher, baker and candlestick maker to do your shopping without it having any AI. As soon as anything goes off-plan the robot is stuck. And you get into "well, we'll hard code a workaround for this situation and simply enumerate everything which could go wrong in a decision tree". Shop door closed with a sign saying "please use other door"? Hard code that, OK now are we good? Shop door closed with a sign saying "please ring bell for attention"? OK, hard-code that, now are we good? Shop door propped open with a mop and bucket and a sign saying "caution, wet floor"? OK, hard-code that, now are we good? Butcher says "sorry we have no liver but we're expecting a delivery in 5 minutes are you OK to wait?"? OK, hard-code that, now do we have AI? And then you get to Amazon which controls the warehouse layout, temperature, environment, shelving, can put tracks in the floor, put all items into regular sized boxes tagged with machine readable labels, which is more analogous to trains and trams on rails, and still Amazon use humans to pick and pack things.


I think Waymo first launched early access to nobody-behind-the-wheel hailable rides in Arizona almost exactly 5 years ago: https://www.theverge.com/2017/11/7/16615290/waymo-self-drivi...

The situation you're describing is no different to 5 years ago: autonomous vehicles exist but can only operate in a limited environment. That's where Waymo was 5 years ago, it was just an even more limited environment. Read the "Road Testing" section on Wikipedia, specifically, 2017.

https://en.wikipedia.org/wiki/Waymo


>it was just an even more limited environment

Huh, so are you saying maybe we're a little closer to autonomous cars than we were 5 years ago?


If one believes that autonomous cars with no limitations on environment will never exist, then no progress will get us any closer to infinity.

Only if you think geofencing scales to solve the FSD problem. Otherwise, no.

sure, “no closer” is hyperbole and in the most literal sense of the phrase, we are closer because time has passed… but in the practical sense, we are closer today because testing is going well and so permission has been granted to expand testing — the technology is not meaningfully different, what’s happening today could have happened 5 years ago (if safety regulations were more lax and had permitted testing with less data).

So what have the engineers working on this been up to for the last 5 years? Nothing?

There's "asking clarifying questions", and then there's "intentionally missing the point to be argumentative".

It's different because San Francisco is not Phoenix. It's much harder.

Do you expect to wake up one day and have self driving cars work in every city? That's just not what today's technology can accomplish. You either end up with broadly applicable L2/L3 (Tesla, Comma) driving, or you get narrow scoped L4 driving.

The scope of L4 widening is a real change.


Waymo was driving in SF 5 years ago.

Waymo driving has improved a lot in 5 years. A proof of concept is different from a production service.

The scope is larger.

True, until it can off-road in the amazon rain forest it has not improved.

"I have had a completely autonomous waymo come to my location, pick me up, and take me to another location." - Waymo uses 3d mapping, limited geofencing, remote operators and mobile roadside assistance teams because those cars are not even close to any type of autonomy. Those cars are "mice" in a well designed and designated (inch mapped) maze. The car without a driver in the driver's seat is like David Copperfield flying on the stage in a cheap magic show, in front of a few hundred people that paid $50 for the tickets - see https://youtu.be/qZS9maIq_Zc

Does it matter? They are functional and safe enough for most sunbelt cities. We may not have FSD from day one, but what we do have is leagues ahead of what was possible 5 years ago.

Waymo in its current form in Arizona launched about 5 years ago.

What these failing companies have been doing for almost 15 years now is 1 step forward, 3 steps back while promising they are 6 months away from the impossible. A.I. is only a statistical pattern-recognition tool that has zero capability of learning by itself from previous experience, and that shows you how any business designed around updating the constantly changing environmental data required to make the robots operate at a decent level is prohibitively expensive.

Anyone can use 3d mapping and geofencing. That's not a disqualifier.

As long as the remote operators and assistance teams are an order of magnitude smaller than putting a driver in every car, then it's close enough to autonomy to count as "closer" and to be useful.
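To make the term concrete: at its core a geofence is just a pre-surveyed polygon the service refuses to leave, and the check is a point-in-polygon test. Below is a toy Python sketch of that idea; the function name and the coordinates are invented for illustration, and real deployments layer detailed 3D maps, localization, and much more on top of this.

    # Toy illustration of a geofence check: is a GPS point inside a
    # pre-approved service-area polygon? Coordinates are made up for the
    # example; production systems pair this with rich HD maps.

    def inside_geofence(lat, lon, polygon):
        """Ray-casting point-in-polygon test over (lat, lon) vertices."""
        inside = False
        n = len(polygon)
        for i in range(n):
            y1, x1 = polygon[i]
            y2, x2 = polygon[(i + 1) % n]
            # Toggle whenever a horizontal ray from the point crosses an edge.
            if (y1 > lat) != (y2 > lat):
                x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
                if lon < x_cross:
                    inside = not inside
        return inside

    # A made-up rectangular service area roughly around downtown Phoenix.
    service_area = [(33.40, -112.12), (33.40, -112.02),
                    (33.50, -112.02), (33.50, -112.12)]

    print(inside_geofence(33.45, -112.07, service_area))  # True  -> can dispatch
    print(inside_geofence(33.60, -112.07, service_area))  # False -> out of area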


"Anyone can use 3d mapping and geofencing" - that shows you their limitations and also doesn't qualify for "completely autonomous" standard. - Completely means anytime (regarding weather conditions or time of the day), anywhere (no geofencing) and completely adaptive behavior to the permanently and randomly driving conditions humans deal with while driving. Pattern recognition software alone (A.I.) would never be able to match human driving performances.

"As long as the remote operators and assistance teams are an order of magnitude smaller than putting a driver in every car" - the entire gig is way to expensive and requires "time travel" level of scientific achievements, which is 100% fiction and 0% reality.


> doesn't qualify for "completely autonomous" standard

No, but it does qualify for "closer to reality today than they were 5 years ago"

> Pattern recognition software alone (A.I.) would never be able to match human driving performances.

That's okay. A trained human can do much better than necessary, and geofenced pattern recognition software doesn't have to be as good, especially because it should have better reaction times and braking force than a human.

> "As long as the remote operators and assistance teams are an order of magnitude smaller than putting a driver in every car" - the entire gig is way to expensive and requires "time travel" level of scientific achievements, which is 100% fiction and 0% reality.

Why?

If you can run a fleet of 300 cars with 30 people, that's already enough to make tons of money once you get well-established. You don't need any scientific improvements for that, let alone the ones you're exaggerating.
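Back-of-the-envelope, with invented cost figures just to show the shape of that argument (none of these numbers come from the thread or from any operator):

    # Labor-only comparison: shared remote supervisors vs. one driver per car.
    # All figures are illustrative assumptions.

    cars = 300
    remote_staff = 30                    # supervisors + roadside assistance
    annual_cost_per_employee = 100_000   # assumed fully loaded cost, USD/year

    driver_per_car_labor = cars * annual_cost_per_employee
    remote_supervision_labor = remote_staff * annual_cost_per_employee

    print(f"Driver per car:     ${driver_per_car_labor:,}/yr")      # $30,000,000/yr
    print(f"Remote supervision: ${remote_supervision_labor:,}/yr")  # $3,000,000/yr
    print(f"Ratio: {driver_per_car_labor // remote_supervision_labor}x")  # 10x

Under those assumptions the labor bill drops by the "order of magnitude" mentioned above; whether that margin survives hardware, mapping, and maintenance costs is exactly what the rest of the thread argues about.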


"No, but it does qualify for" - please check the statement my comment was responding to. The "1 step forward, 3 steps back" way the automation sector does R&D is not moving towards reality, is moving towards confusing the public to justify their pitch to eventual investors. "That's okay." - Maybe for you, but not for investors and for the market. "Why?" - It's unsustainable, requiring resources (provided at this point by naïve investors) that commercialization can't provide. Just look at the over $100 billion wasted on this hallucination with zero actual returns. Investors expect palpable returns, not promises and delays.

> 1 step forward, 3 steps back

What are the steps back?

They're slow but they're improving. And they don't need to reach their original lofty goal.

> "Why?" - It's unsustainable

Sorry, the "Why" was directed at the level of scientific achievement you claim they need.


"What are the steps back?" - every step forward, no matter in which direction, requires more computing power from a limited computing source that gets power from a limited power source (limited because they are mobile not plugged to a network). By using more computing, the system would prioritize towards the "step forward", allocating less resources to other processes (other sensors or the new electronic system of that vehicle). More computing power (when more essential processes get to have better performance) is requiring more electricity, from a solely electric vehicle with a limited battery capacity, that ultimately would generate shorter battery range available. The more computing power and more battery power you add on any vehicle, the more you increase the vehicle manufacturing or acquiring costs.

"the level of scientific achievement" - every single step, every single minute and every single individual (the financial input), is prohibitively expensive for this R&D project, and it is not justified by any means by the results (the financial output), Companies and investors don't care about progress. They care about profits, and, in case progress would stay in their path to make profits, they'll fight against it. You should check waymo salaries, hardware prices, operations costs, and fleet management costs. From operational POV, every mile covered by those vehicles translates into a price payed by the company, money that are not recovered whatsoever at this point. Vehicle lifecycle, insurance, maintenance, cleaning and the electricity used, adds up very quickly and could go as high as half a billion dollars per year - "Argo has about 1,300 employees and is likely burning through at least $500 million a year, industry participants say." (https://www.theinformation.com/articles/argo-ai-planning-pub...). Now remember how in business, any investor usually expects to make 10 times his or her investment, in this case (the Argo.ai example) meaning that the profits (after all expenses and taxes are substracted) to be around $5 billion per year. This is the reason why Ford decided to shut down Argo, which was burning half a Billion a year with no end in sight. To directly address your statement - the scientific level needed would require way too much money to justify the road to accomplish it. Basically, all those parts interested either do not have those money, or are part of a business model that requires substantial returns on a relatively short term, and cannot afford to finance projects with constantly moving delivery dates for fictional ideas.


It exists now because you live in a place with good weather and regulators who are willing to put the lives of other road users at risk for the sake of your toy.

> regulators who are willing to put the live of other road users at risk for the sake of your toy

What about the regulators that allow drunk and distracted drivers everywhere


> What about the regulators that allow drunk and distracted drivers everywhere

In what places is drunk driving legal? The laws exist and are rigorously--if imperfectly--enforced everywhere I've ever lived.

Are folks building transportation businesses employing drunk drivers?


People can drive under the influence, over the speed limit, distracted or with road rage

Vs autonomous driving where none of those impediments come into play


Irrelevant because drunk driving, speeding, and driving distracted are illegal and frequently punished. AI performs similar to drunk drivers yet is an unregulated open public beta that is sold as a product.

This is not news. He has been saying that for at least two years now - https://reason.com/video/2020/02/24/george-hotz-fully-self-d...

That's what he is talking about. There are different safety standards for experimental planes and commercial jets. And he is much better at building the former than the latter.

Please read the details of Howard Hughes's test flights (also Winston Churchill) and realize that perhaps it's people like that who are required to make anything work.

If planes were designed by safety committees we would have never figured out how to make them light enough to fly.


No one is saying people cannot experiment. We'd just not rather be the unsuspecting rats in Tesla's try-not-to-kill-too-many-pedestrians beta.

Well personally I'd rather not be any of the 1.35 million annual road deaths. It wouldn't make a huge difference to me if I was one of the 0.000001% killed by a Tesla or one of the other ones.

If you want to help US pedestrians you'd be much better going after SUVs and pickups https://www.webmd.com/first-aid/news/20220318/turning-pickup...


We can do both. And based on interventions per mile driven, I'd say Teslas are a whole new level of danger.

> It is absolutely the wrong attitude for self-driving cars or anything that is safety critical.

With your attitude, we would never have allowed cars in the first place.


Compiler development is (often indirectly) safety critical too.

You do understand that this is a joke, right?


It is funny to read the top comment and then your comment. The thing you are admiring about him is the same thing the top commenter is criticizing.

Comma was always riding the tailwinds of the underdog effect.

When Tesla ships an autopilot on mass market cars that fails in edge cases, commenters are up in arms that it wasn’t tested to perfection in every scenario. Big companies are punished if they don’t deliver perfection.

When an underdog company hacks together an autopilot proof of concept and takes a reporter for a ride with it, they’re heroes for pulling off a technical feat like that. Underdog stories will always draw applause.

The challenge with a company like Comma is that they can’t maintain underdog status forever. The product is very impressive in the context of an underdog hacker success story, but outside of a few early magazine shootout wins it just can’t hang with the efforts of the big companies throwing huge budgets at their own solutions. This puts them in a difficult spot because the underdog-hacker story can’t scale forever.


Is this why Argo AI (legacy funded startup from the ranks of CMU, system set them up for success) just shut down their entire company and Comma is still selling devices? No? Ok.

I'm not sure what you think this proves, but Argo's shutdown is indicative of just how difficult driverless is. GP's comment is entirely about how Comma needs to mature beyond the hacker underdog shipping some devices to DIY tinkerers, and that's an incredibly challenging task.

If anything, your comment is proof.


It proves that comma is making money today by being an “underdog” while Argo never made a dime and is now out of business by being the opposite of an underdog. Would you rather be an underdog or be unemployed?

Can't say I care much about what "George Hotz" is doing, but this line struck me as particularly silly:

> And even if you are an atheist, you probably still accept the bible is the closest thing we have to a human origin story.

I can't imagine why he believes so. Apart from the modern materialist origin story, what about all the mythical ones from other cultures? Of all of them, this is my favorite: https://www.youtube.com/watch?v=ckiNNgfMKcQ


“we are bytes in God’s computer”

from his Lex Fridman interview.



People should get to know Spinoza's ideas. Note: I support all peaceful religions. Spinoza's ideas are close enough to this notion: that God is also Nature; for Spinoza, we're not bytes in God's computer, we're literally part of God's (infinite) mind -- Nature itself. This is a really beautiful interpretation of Christianity, and absolutely compatible with science and all modern knowledge. God is hence all-knowing, and immortal; God loves all existences (which are all themselves pieces of God). I think if religions are to survive healthily (and contribute to everyone's lives), they should become compatible with science and truth.

The greatest implication is that we are responsible for good ourselves, and if heaven is to exist, it's up to something like us (or our descendants, or other beings entirely) to build it (with which I couldn't agree more). We should start now!


So call it Nature: why do you need a separate theological construction to realize that all of nature is part of the universe, which all came from the same start in the big bang, and that we should all respect each other? Is there really a need for anything else?

The idea is metaphysical -- you're literally a fragment of a greater mind. Like, a cell should respect its host organism -- it's basically redefining what or who you are.

I don't know if there is an absolute need for something else, although clearly you do need some ethical framework to make decisions. Assuming you're a self-interested cell can have definitely different implications than assuming you're literally a tiny part of a whole (even if you are in favor of self-interested cooperation: this redefinition has many further implications). You could also assume you are a cell, but decide to axiomatically value the whole as well. One way you could justify this axiom is through parsimony -- saying it's simpler than only valuing the self. Another justification (without appealing to Spinoza's definition) is that self-interested cells will fail in certain situations, s.t. whole-valuing cells can go much further -- that is, self-interested cells can turn into cancer or just don't survive as well as a multi- or omni-cellular organism. People already value their children and other people; this is a mostly logical conclusion.

This isn't just academic: we're going through crises right now that would be very different if people were less exclusively self-interested, and humanity's survival may depend on it in some way.

I like to put it this way: Love is optimal.

I think ultimately most ethical systems converge (to fundamentally valuing one another), so in a way some differences are aesthetic, how you prefer to think about it, etc. I think it's nice that we have more than one way of seeing those things coexisting peacefully (although like I said, I think it's very important they are consistent with the truth, at least for the most part).


At the risk of sounding a bit dismissive, you just said what I said in more words. My question is why you need god in the discussion; I think it's perfectly fine to have that awareness of being part of the whole without any god in the picture. It can sound like just semantics, but for many people, once you use the word god, there are a lot of other meanings.

Yes, it's a little difficult to picture (or maybe you might fundamentally disagree) -- but it's like there is a literal being, God, that constitutes all of this (I also associate this with the idea of an 'omnimind' -- that all consciousnesses are literally part of the same mind). Also, the coherence with Christian (and maybe Abrahamic in general) traditions makes it an interesting interpretation of ethics in the sense of there existing a God in a literal sense (and other premises of those traditions being consistent in this framework). I recommend reading or taking a course on Spinoza if you're interested, because there's more than I can describe without writing a little book (although I think some of the conceptions I have are fairly original -- Spinoza after all lived in the mid-17th century).

To get a feel for an expanded mind, think about your cellphone (or PC or google, etc.). Some of your memories are stored in your cellphone. So in a very real way the digital world is an actual extension of your mind; think how some of our organs (like hair, or maybe cartilage etc.) are not living tissue yet we consider them part of 'us'. The definition of 'I' is in a way functional, and maybe more completely related to agency and information coherence. In a literal way also, Nature doesn't distinguish individuals. What is 'you' here is, as far as we can tell, a large amount of flowing interactions, which are not singular. What is 'you' is influenced by an entire past extended lightcone; what is 'you' does not live in a single moment in time, but is indeed distributed in time and in space -- 'I' is an abstraction over a cosmic soup -- a very useful, interesting, and important one, but nonetheless it isn't physically fundamental. What we are fundamentally changes in time as well -- you are fundamentally a distributed network of events, not a singular entity. In this way the fundamental distinction of individual seems to fade, and validate a greater distinction -- the maximum distinction is the one encompassing all that exists (i.e. Spinoza's God). In a way, you might call it just another way of seeing things -- just like the 'I' or the self -- but it can in the same way be interesting, useful, etc.. And again it fits the traditions in a way (this is also an insight of Buddhism but the interpretation is different).

Derek Parfit discusses some of this here:

https://www.youtube.com/watch?v=uS-46k0ncIs


I don't think Christians would agree that God is Nature. Christians believe that God created nature, but that he is outside of it. This is in contrast with some older religions that do worship nature. It is not a coincidence that Christian God is "the Father" while nature is "Mother Nature".

I stopped reading when he brought the Bible into things. I have no time to waste on that sort of thing.

You have no time to waste on western civilization's foundational text?

edit: to the people who want to argue this: show me a body of work which has been more influential on western society. What people seem to identify as 'western' values are largely just judeo christian values. The influence of the bible is so ingrained into our society that you will live out the values within it even if you don't realize it.

And that kind of makes sense, doesn't it? Some guys wrote down some stories which at the time were being passed on orally, these stories became really popular, and the society which was telling these stories and using them to pass on their moral philosophy became prosperous.

The Catholic Church was a (not so) pseudo-government in Europe for almost 2000 years, and was the leader and authority in philosophy and art. The idea that somehow none of this is foundational to western civilization is absurd and completely ahistoric.


You mean the Odyssey?

Not everyone eats everything Jordan Peterson says. The values you say are 'Christian values' are expressed, (in many cases more eloquently), in classical Greek texts. Values like democracy itself are indeed much more prominent there than in the Bible, which is why I imagine you're seeing the sort of replies that you are.

This is so frustrating to me.

People hear "the bible is foundational" and I think they're hearing "the magical stories in the bible are literally true. God sat on a cloud and poofed western civilization into existence!"

What I'm saying is that, in the Dawkins sense, the Bible was an extremely successful meme. I don't think that's even up for debate.

>Not everyone eats everything Jordan Peterson says.

People need to find more internet people to hate. Christopher Hitchens was saying the same thing to impressionable teenagers in the early 2000s, and so was Richard Dawkins. Before that there was this little known German Philosopher saying something similar too: https://en.wikipedia.org/wiki/Friedrich_Nietzsche#Death_of_G...


Just because a book and an institution were present for the last 2000 years doesn't mean that they were the foundation of a society that exists today. Especially a society that most likely exists despite them rather than because of them.

I think of the 30 Years War, which tore apart Europe for decades over a disagreement over details in the interpretation of Christianity (and the Protestant and Catholic bibles differ).

I would say at the very least the bible is one of the most important pieces of literature in the world, has had a tremendous impact in all its forms, played a role in the formation of the modern world's moral codes, and at the same time, is not the unique, one, and only "foundational" document leading to the modern western world- there isn't one.

(FWIW, I really do respect the King James bible as a work of literature, even though I am agnostic and consider the Silmarillion to be a more enjoyable origin story)


Namedropping Peterson when he has 0 relevance to the conversation at hand is just juvenile flamebait and a pretty easy tell that the GP isn't really acting in good faith. I don't think @matl is interested in a serious conversation about the historical relevancy of the Bible, nor are they equipped to have one.

> Namedropping Peterson when he has 0 relevance to the conversation at hand

He absolutely has relevance considering he's the most recent prominent public figure pushing the Bible as "the foundational stories" without which we cannot live. Yes, there are others and it may be that OP arrived at this independently, but I've heard this most from his followers.

I acknowledge that I may've been slightly nudged by OP's unfounded "You have no time to waste on western civilization's foundational text?", which sounded extremely judgmental to me, but shouldn't have jumped on the bait.


He really doesn't though. Just because you personally make company with his followers doesn't mean anyone else on this site does. In all my time here I don't think I've _ever_ seen someone take his side, I've only ever seen him used as a weird strawman in the same fashion you are.

I think it's a pretty dull tactic to namedrop e-celebs you don't like as a very crude way of making a point. Clearly you have some fairly progressive views, so it'd be a bit like if I brought up other e-celeb clowns like Hasan or Destiny specifically so I could make a bad dunk. That kind of discourse just doesn't have a place on HN, imo.

I don't doubt you can think of better ways to articulate your points, and I'd encourage you to do so.


As I said in my previous reply, I don't disagree I could have framed my response better, however I am not just throwing Peterson here for fun, I am reacting to "You have no time to waste on western civilization's foundational text?" which is in my opinion a non-constructive argument of the sort that the likes of Peterson use to make people who disagree with their arguments feel like they're just not on the same level if they're missing the brilliance of the Bible.

I came across this sort of thing a lot in my life from Christians trying to convert me so I don't necessarily have the same amount of patience as others might to these sorts of tactics.

Note that I'm not even arguing against reading it to inform oneself, (something I've done multiple times), just this particular tactic of trying to get people to read it.

That's all.


I think it could only ever have been such a successful and resilient meme because it resonates deeply with human social dynamics.

> People hear "the bible is foundational" and I think they're hearing "the magical stories in the bible are literally true. God sat on a cloud and poofed western civilization into existence!"

That's not what I hear. What I hear is more along the lines of "What's in the Bible is somehow original/more profound than what came before and so THAT is the book to take note of", which just isn't true.

> People need to find more internet people to hate. Christopher Hitchens was saying the same thing to impressionable teenagers in the early 2000s, and so was Richard Dawkins.

Yes, they all pander to similar audiences, back then as now, though Hitchens and Dawkins are more akin to Sam Harris imo.


Agreed, that is what I hear as well. It is kind of like a Neo-reactionary gotcha: like, ok, you are not Christian, but you still need to accept the bible as being really important.

That being said, there are lots of Bible references in the canon of English-language literature. So if one's goal is to understand that, then a synopsis of the Bible stories and commonly quoted phrases might just be sufficient. The book is dry, repetitive and a bit inconsistent. Its value is that it is really old and is associated with a big religion. If a modern writer created the same thing it would have a really low rating on Goodreads.


Euclid's Elements is the foundational text of western civilization, popularizing reasoning from first principles. The Bible is a foundational text of western mythology, not much different from the mythological texts of any other part of the world.

You will find no mention of The Bible or any of its characters in The Federalist Papers. You will find much discussion of classical governments and several references to Montesquieu. John Locke specifically cited Euclid as his influence and structured his writing to match. Lincoln, perhaps the most stridently moral of America's presidents, was known as the village atheist when he was a young man, studied Gibbon's The History of the Decline and Fall of the Roman Empire, which put much of the blame on Christianity, and Paine's The Age of Reason. He carried Euclid's Elements with him until he could derive any of the propositions in the first six books at will.

It is only by the power of the ideas that Euclid's Elements (written in Alexandria) survives today. The Library of Alexandria was partially burned when Caesar set fire to the fleet in the harbor, which spread to the library and destroyed some part, to Caesar's embarrassment. Later, Christian zealots under Theophilus destroyed 10% of the collection. Finally, Muslim zealots under Caliph Omar destroyed the rest. Nobody looks to Theophilus's or Omar's governments as models of good government today.


> to the people who want to argue this: show me a body of work which has been more influential on western society.

A potential competitor would be Ancient Greek Philosophy (Socrates, Plato, Aristotle). Though it is hard to measure.

> What people seem to identify as 'western' values are largely just judeo christian values.

Nope, 'Judeo-Christian values' is a horrible Christian-Americanism that seems to erase the vast differences in Jewish and Christian experiences.

Judaism is a tribal religion with a huge focus on commandments given to said tribe and understanding the texts. Christianity is a religion concerned with the beliefs of all people, including non-Christians. The structure of their societies are quite different and historically Christian countries did not treat Jews well.

So I would add 'Judeo-Christian values' to the list of absurd and completely ahistoric ideas.


'Judeo-Christian values' is actually a phrase coined by George Orwell, but it's clear that you're much better studied on the topic.

Any Christian or Jew will obviously acknowledge the many differences between the faiths, but I'm confused that you seem to be saying there aren't HUGE similarities in their value systems. Do you have many Jewish or Christian friends? I feel as if your understanding of Jewish/Christian culture is much more based on internet comments than lived experience.

If you want to understand, I encourage you to spend some time with devout Jews or devout Catholics and you'll begin to see the similarities.


> 'Judeo-Christian values' is actually phrase coined by George Orwell

I didn't know that, but this phrase seems to be common in the US. I have not encountered it much elsewhere.

> I'm confused that you seem to be saying there aren't HUGE similarities in their value systems

I don't deny there are many similarities, but my lived experience is that the core values and tenets of the two religions are quite different. Maybe you have a different experience.

Judaism also has many similarities with Samaritanism and Islam. So why don't we speak of Abrahamic values? My point is I am suspicious of the term 'Judeo-Christian values'.


Ultimately I agree, Abrahamic is a much better term.

Not for current-day issues, no, I don't. I've certainly read parts of it and have seen numerous documentaries, museums, etc. on it. From a historical point of view it's of course valuable. But it's not valuable to me for current-day issues any more than flat earth theories or outdated science textbooks are.

You missed that he ends with a quote from Bronze Age Mindset. Wild.

Check out the background on this book if you're not familiar: https://en.wikipedia.org/wiki/Bronze_Age_Pervert.


It must take a certain level of arrogance to believe that the most famous and widely referenced text in all of human history is a waste of time to a mind like yours.

I'm agnostic as well, but I think it's an ignorant position to take to pretend that the Bible hasn't had an enormous influence in the continued story of humanity.


Regardless of what you believe, it's an important piece of literature that is interwoven with our history. People who let it cause thought-termination are missing out.

Sounds like someone giving in to the 'watchmaker analogy': https://en.m.wikipedia.org/wiki/Watchmaker_analogy

An origin story is a mythology, and all of those other cultural stories fit under a common schema, hence the reference to Campbell’s “Hero’s Journey”

An origin story is how we came to be where we are.

Atheists think along the lines of evolution / natural selection. Not mythology. Before that, tons to be read, all of it science based.


Science is based on mythology. Logos developed out of mythos. There is no dichotomy or conflict between the two.

So, people aren't really understanding the motivations that Geohot has for looking into this stuff. He is being a bit cryptic about it.

Here is a basic summary though.

The atheist world, of no life after death, is scary. We have not invented immortality. And even if we do, well there is an even scarier thing called entropy, that will end the universe anyway. There is no escape from death, as far as we know, and this is a problem that faces every human being in the world.

Now what? What do we do now?

Do you just lay down and accept that? Do you just sit down and die when your time comes?

Or do you fight?

George seems to have chosen the fight route. And to do that, well, our options are pretty limited. It involves looking into things that likely are pretty fantastical.

Things like "are we living in a simulation, and is there a way to break out of the simulation, to a universe where entropy doesn't exist". Or, alternatively "does the bible talk about any magic, hidden portals".

And no, this doesn't involve just praying to a sky fairy, and hoping things work out in the end.

Yes, these solutions sound pretty fantastical. But what other options do we have? Because if the current atheist world is the real one, then we are doomed. We are in the death world. Game over. That's it. The most important problem, ever, is unsolvable. And we are all going to die, and nothing other than this fact matters.


Oh I don't deny any of that. Heck, just look at the link in my profile: https://www.lifeismiraculous.org/

The silly part to me is his claim that everyone (including atheists) thinks that the Bible is the best answer.


> But what other options do we have?

Well, most of modern society has decided that once you've satisfied your basic needs, hedonism is the answer. Travel, good food, good times with friends, etc. That's what most people are living like and what the culture creates. No need to find hidden portals.


There’s only one outcome for hedonists. Wireheading. I suspect many people will go this way, I won’t.

Most people cannot ever get to that, hence the whole "enjoy the journey not the destination". When I read your post the main takeaway I got is you made too much money.

Tell me, if you started from scratch right now, would you be worrying about this 2deep4u shenanigans instead of doing cool shit again?

By the way, I really respect your work and think you've done great things and enjoy reading your rambles a lot.


Death is only a really important problem for individuals, and only their own death at that. The fact that we exist today as sentient creatures able to contemplate their own mortality is entirely due to the fact that those who came before us died.

> Death is only a really important problem for individuals, and only their own death at that.

Unfortunately, every single person in the world is an individual. Every person faces the same problem.

> The fact that we exist today as sentient creatures able to contemplate their own mortality is entirely due to the fact that those who came before us died.

Indeed that is the case. But what does it matter to those people who came before us? They are dead. They lost the game of life. They didn't escape the simulation, or go to heaven, or any of that. They are decomposing in the ground.

Just like everyone else is going to lose eventually.... unless we fix it.


> Because if the current atheist world is the real one, then we are doomed. We are in the death world. Game over. That's it.

All evidence points that this is true so far. What is wrong with that? It doesn't mean life doesn't matter. This passage from Carl Sagan's Pale Blue Dot comes to mind: https://www.youtube.com/watch?v=wupToqz1e2g


> All evidence points that this is true so far. What is wrong with that?

This is unfortunately an axiomatic philosophical disagreement.

For those with my viewpoint, of nothing mattering to you, once you are dead and gone, this is basically unfalsifiable, as are most axiomatic disagreements.

And the quotes and arguments that you link in that video, basically just come off as a nice sounding cope, made by people who are distracting themselves from the horror of the reality of our situation. It is an emotional play, and humans are great at tricking themselves with emotions.

I don't really blame people for that though. My perspective really is pretty terrifying. It makes sense why people would hide from it. And the only way that recognizing it helps me, is that I am informed about fringe technologies, like cryonics, that currently are pretty unlikely to work.


I don’t dispute that there is a lot of fear bound up in the idea of death, but I disagree that accepting that our existence is finite has to be terrifying. I actually find the idea of finality comforting: it gives meaning to my life, and a bounded scope to my concerns (which is not to say I don’t care what happens after I die… until the moment I die I expect to have concerns about the future of those I care about, but I know ultimately there is a limit to my influence and responsibility).

On the other hand, I view concerns about the afterlife as actually somewhat depressing. So much of the limited time and energy we actually have is spent on some fantasy that is completely unverifiable, and so much of it is motivated by fear (of hell, emptiness, a vengeful god, etc).

To state my viewpoint simply: the unknown is scary; concerning oneself with an afterlife (or singularity, life extension, etc) is a coping mechanism to avoid processing that fear, whereas admitting that our existence is finite is an acceptance that there are things we can’t know, can’t prepare for, and that we shouldn’t waste our time on.


>Because if the current atheist world is the real one, then we are doomed.

That kind of discussion always reminds me of a very short poem by Stephen Crane

A man said to the universe:

“Sir, I exist!”

“However,” replied the universe,

“The fact has not created in me

A sense of obligation.”

The second law of thermodynamics has never evoked in me any sort of feeling of impending spiritual doom, but more importantly, even if it did, that's hardly a reason to start retreating into fantasies; we're not owed a non-doomed universe, or even a non-doomed individual life. And on a personal note, I'd rather take the embrace of the indifferent, cold universe over jolly people reading scripture to me for all eternity, because the latter sounds more like hell to me than the former. But that's subjective, I guess.


It's all a matter of personal preference, perhaps, right down to the aesthetics. One person's naturalistic universe where the garden is sufficiently beautiful without there being fairies living at the bottom of it is another's indifferent cold universe of Lovecraftian horrors beyond the ken of mortal men.

And you link Alan Watts... the most basic bro hippie shit.

Sounds like a case of shooting the messenger. It's not his original idea.

I must say props for the BAP reference

"I'm taking some time away from comma." means leaving the company entirely? Not stepping away from leadership, or taking a break?

The body text does have the vibe of totally leaving the company. But it's not clearly stated.

I'm a little sad overall, I was never really impressed by George until he did Comma and started building real things. I was mildly hopeful about the Comma bodies before seeing this.

Tinygrad/tinycorp seem cool. Anything to squeeze more memory out of these GPUs.


To me it seems like he went downhill after cracking the PS3 and working for Sony. He could never top that.

I hope his shine hasn't burned out yet. People seem to do great things in their 20s, and then at best ride the wave they set in motion. Actually creating a wonder from zero after you've clocked 30 is very rare.

This isn't a true stereotype. The average unicorn founder is aged 34.

https://www.bloomberg.com/news/articles/2021-05-21/what-s-th...


This is correct, I'm under 34 and I haven't founded my unicorn yet.

I'm 74 so I guess it's GAME OVER, huh? Oh well

Speaking of which, who is the oldest person on HN? Can't be me... can it?

There are definitely some Bell Labs/Xerox/IBM old heads in their 70s and 80s here.

I think there are some guys from ARPAnet days here.

> The data on the innovators reveal three initial characteristics. First, there is large variation in age: 42 percent of innovations came about when their creators were in their 30s, while 40 percent occurred when the inventors were in their 40s, and 14 percent appeared when the inventors were over 50. Second, there were no great achievements produced by innovators before the age of 19, and only 7 percent were produced by innovators at or before the age of 26 (Einstein's age when he performed his prize winning work). Third, the age distributions for the Nobel Prize winners and the technologists are nearly identical.

From https://www.nber.org/digest/dec05/great-inventions-come-late... which is a non-technical summary of https://www.nber.org/papers/w11359

At least considering Nobel Prizes (very impactful "inventions") and "outstanding technological innovations", it seems the best age is between 30 and 50.


Source: trust me bro

How does one form such a misconception? What did you read or watch?

As long as he keeps on with the tinygrad Twitch streams, I don't care. I got far more value out of him doing live programming than the $5/month.

Apparently there is an archive of it on yt

https://www.youtube.com/c/georgehotzarchive/videos


It strikes me as surprising that George Hotz is religious. Very few people I’ve met with similar personalities are.

why is it surprising?

He explained in his second sentence

I believe the OP answered that in their second sentence.

> Very few people I’ve met with similar personalities are.


I've seen this change dramatically in the last few years. The people I knew 10 years ago who were the sort of frighteningly brilliant hackers have almost all become religious (mostly traditional Catholics).

I'm not saying this is an objective truth, just that this has been my experience.


I think part of that (specifically Catholic) stems from the fact that it's one of the few religions that is compatible with a scientific understanding of the world, where science is seen as a worthwhile study of creation, and not something that's dangerous and evil (like many modern fundamentalist groups seem to think).

The danger IMO comes when people tie their religious belief too strongly to political movements, or think that just because there is doctrine around a certain aspect of human culture/life, there is no questioning it or investigating it further. A healthy skepticism (especially over one's own beliefs) is central to a good life IMO.


Note, though, the prevalence of specifically traditional Catholics. Traditionalists tend to reject in whole or in part modern Catholicism's ecumenical angles, and embrace the black and white they perceive Catholicism used to have. It's a curious mixture of using one's own judgement to decide which parts of the past are worth following, and a steady belief in doing and believing the things of the past, whether you understand the reasoning or not.

> I think part of that (specifically Catholic) stems from the fact that it's one of the few religions that is compatible with a scientific understanding of the world,

I went to Catholic school growing up, and I did not see anything in religion classes that involved updating models of the world that conflicted with data. In fact, it was the exact opposite.


I mostly agree but also went to a Jesuit high school and there were some super enlightened Jesuits. So a bit of a mixed bag.

What are some examples where the data is in conflict with modern Catholic teachings?

I have not kept up with modern Catholic teachings, but I recall the idea of infallible sources or truths and that conflicts with the scientific process.

A religion compatible with the scientific understanding of the world would be one that is open to amending its assumptions at anytime. At least that is my understanding of science.


In my experience, "Data" is just a devout reverence for statistics. It claims to be a science but, like other modern belief systems strays into trying to be the "source of truth & the end of truth".

https://en.wikipedia.org/wiki/Dataism

A weird pseudo-religion.


Few more years until TempleOS stage?

My pet theory is that people who exercise enough control over others/their environment get "bored" with it and want to feel like there's something out there bigger than themselves that they don't have dominion over.

Alternately, once you've had your fun with the more-complicated aspects of human technological advancement, you arrive at the gordian knot of religion-- the one construct nobody has managed to truly reverse-engineer. (I think this is what attracts the schizophrenics-- they believe they've succeeded where everyone else fails.)

To your point, Catholicism is very binary (pun intended) in terms of ideology. Little wonder it attracts the computer-minded.


Peter Thiel / Curtis Yarvin adjacent folks have in recent years heavily pushed these kinds of views in certain parts of the tech scene, Hotz tends to quote them frequently. It's not really religious Catholicism as much as an aesthetic for reactionary politics because you're more likely to find many of these newly minted Catholics at an 'Eyes White Shut' party than at mass.

His syncretism of Christianity with Bronze Age Pervert in this very blog post should immediately remind people of Umberto Eco:

"One has only to look at the syllabus of every fascist movement to find the major traditionalist thinkers. The Nazi gnosis was nourished by traditionalist, syncretistic, occult elements. The most influential theoretical source of the theories of the new Italian right, Julius Evola, merged the Holy Grail with The Protocols of the Elders of Zion, alchemy with the Holy Roman and Germanic Empire. The very fact that the Italian right, in order to show its open-mindedness, recently broadened its syllabus to include works by De Maistre, Guenon, and Gramsci, is a blatant proof of syncretism. If you browse in the shelves that, in American bookstores, are labeled as New Age, you can find there even Saint Augustine who, as far as I know, was not a fascist. But combining Saint Augustine and Stonehenge — that is a symptom of Ur-Fascism."


A lot of these guys (not Hotz but the Moldbug-adjacent Orthosphere bloggers) were big fans of Eastern Orthodoxy a decade ago, not sure how that trend diminished while tradcath got so big, especially with the current pope being a conciliatory figure.

Fwiw, Bronze Age Pervert (BAP) is actually a classics scholar named Costin Alamariu, who holds a PhD from Yale and wrote a dissertation on the tyrants of antiquity. He puts on a Slavic accent for his podcast, but he moved to the US as a boy and speaks perfect American English.

BAP wrote a book popular in right-wing circles called Bronze-Age Mindset. In the late teens, it was reviewed in a couple of the usual places by conservative figures. BAP has lost a lot of steam since then, and it doesn't look like he'll lead his group of Frogs and fringe-heads to direct action, even though he fetishizes the warrior class.

BAP and his fellow travelers were early believers in Trump and grew disillusioned. They were consistent champions of Putin as well -- after all, he appeared topless on a horse and learned karate. I can only imagine that Russia's military failures in Ukraine have made them seek other idols.

The good thing about BAP is that he knows his sources. He has mastered political philosophy and read the original texts to a depth and degree that jokers like Yarvin cannot aspire to.

Like Yarvin, BAP wants a strong man. He does not believe in democracy. He does believe in racial superiority and inferiority. While I happen to disagree with him about a lot of issues, including those, for all the usual reasons, if I were to engage him on his own terms, I would simply point out that authoritarian regimes like those he idolizes have really obvious weaknesses.

Those include the inability to maintain high-quality, transparent communication under a punitive regime, and the inability of a kleptocracy to inspire the kind of collective action that wins wars.

BAP is chasing a pipe dream.


What does it even mean to "believe in democracy" and is it a moral failing not to do so? Putin was elected democratically and still has a strong majority supporting him.

"Believing in democracy" means believing that granting real power to all adult citizens through voting and other forms of feedback is a better form of government than those governmental forms that only grant power to a few. Say, to the members of the central committee of the Communist Party, as in China. Or to a tiny group of oligarchs, apparatchiks and propagandists, as in Russia. Or to a royal family and its courtiers, as in pre-revolutionary France.

Democratic leaders can easily fall due to scandal. Authoritarian figures cannot.

China can commit the enormous mistake of its zero-COVID policy precisely because its leaders will never face a reckoning at the ballot box. Russia can stagnate in kleptocracy for the same reason.

Vladimir Putin, a former KGB officer, has controlled the Russian Federation for 22 years, starting in 2000. He has signed laws that will allow him to remain in office until 2036, at which point he'll be 84.

His chief opponent, Alexei Navalny, is imprisoned in Russia. Navalny's party is not allowed to participate in presidential elections. Navalny survived one attempt to poison him, and another attack in which he lost 80% of the vision in one eye when an assailant sprayed a chemical on his face.

So sure, you can believe that Putin was elected democratically if you want. He'll be elected democratically until he can no longer force Russia to elect him democratically. He resembles an African president-for-life like Mugabe much more than he does a democratically elected leader. If you believe Putin's claims, it would be useless to debate it further.

https://en.wikipedia.org/wiki/Poisoning_of_Alexei_Navalny


> "Believing in democracy" means believing that granting real power to all adult citizens through voting and other forms of feedback is a better form of government than those governmental forms that only grant power to a few.

That's a false dichotomy. In a western democracy, do you vote for the things you believe in, or do you vote for one of the few representatives that pretends to approximate your beliefs closely?

> Democratic leaders can easily fall due to scandal. Authoritarian figures cannot.

Is that true? What if the people like authoritarians? What if they don't care about scandals? What if they democratically decide they don't want "free and fair elections" anymore? If the majority wants "the wrong things", the outcome will be poor. Democracy is not the distinguishing factor; it's not inherently good or bad, functional or dysfunctional.

> So sure, you can believe that Putin was elected democratically if you want.

The proceedings may be illegitimate, but there's no doubt in my mind that the Russians would have elected Putin over Navalny again. Of course there is an opposition in Russia, but it's not a majority and in any event Navalny isn't all that popular.


> That's a false dichotomy. In a western democracy, do you vote for the things you believe in, or do you vote for one of the few representatives that pretends to approximate your beliefs closely?

No it's not, actually. Democracy devolves power to individual citizens in ways that allow them to impact decisions about collective action. In western democracies, citizens have many ways of impacting collective action, which include speaking out and rallying others to their cause. It is common in places like Russia and China for citizens to be jailed or killed for speaking out about government policy. Therefore, those countries are undemocratic not just in how they approach voting, but about the civil rights they recognize and protect ... or trample.

> Is that true? What if the people like authoritarians? What if they don't care about scandals? What if they democratically decide they don't want "free and fair elections" anymore? If the majority wants "the wrong things", the outcome will be poor. Democracy is not the distinguishing factor; it's not inherently good or bad, functional or dysfunctional.

Given that authoritarian figures control the media, information and feedback loops that are available to citizens in their countries, how would you even know what "the people" like? You can't know; you can only speculate. Authoritarian states systematically falsify voting results and other expressions of popular will. They obliterate the feedback loops because they are afraid of what people would say. That's one crucial property of an authoritarian state. To put it bluntly, you have no way of knowing what people want under authoritarianism, and that should trouble you.

> The proceedings may be illegitimate, but there's no doubt in my mind that the Russians would have elected Putin over Navalny again. Of course there is an opposition in Russia, but it's not a majority and in any event Navalny isn't all that popular.

You have no way of testing this hypothesis. It's an opinion without evidence or the possibility of evidence, and I will treat it as such.

You should ask yourself why Putin is so afraid of running that experiment himself.


Not "believe in democracy" for this political subculture often means endorsing the revival of monarchy, which is what Curtis Yarvin supports. Many adherents have read Hoppe's Democracy: The God That Failed, which is somewhat of a seminal work on that thesis:

https://en.wikipedia.org/wiki/Democracy:_The_God_That_Failed


> Julius Evola

A guy who amusingly rejected the label "fascist", instead wanting to be called a "super-fascist".


I think it’s a weird counterculture where social conservatives can find refuge. The rationales given for being religious are found post facto. It’s my opinion that religious elements are being recreated in atheistic society, especially in academia. By keeping religious elements bound to an actual religion, and especially to an old, slow-moving traditionalist religion, it keeps those elements out of the rest of society. I think not having religion is not an option, due to the way some people are wired.

I've had a similar experience, but rather than a religion they've turned to homesteading and 'near-Amish' living-behavior.

It's kind of fascinating to me; it feels like another fork one can take in response to burnout -- something I know they all ran into at one point or another.


I’ve noticed that, almost universally, a smart person who embraces religion as an adult chooses a religion their family followed when they were a child, and only if religion was a big part of their family’s identity.

I don't think this post indicates that he's religious. If you read some of his other stuff, he's clearly a person with philosophical interests. There are lots of people who read the Bible out of a philosophical/mythological/anthropological interest rather than a religious one.

The Bible is a foundational book that has driven many human accomplishments and debacles. You don’t need to be religious to understand its impact on modern Western society.

In my experience it's the opposite, Hotz comes across as someone with high-functioning mental illness (he mentions a manic episode in this post) and this can often include religious belief (along with delusions of grandeur). It's all anecdotal either way, and I'm never really comfortable with dissecting someone's religious beliefs or mental state much beyond a loose concept.

I don’t think it is your place to diagnose his mental state from public blog postings.

It's also not your place to decide what my place is. So it's all a wash really.

I’m surprised as well. In his SXSW talk a few years ago the conclusion was that he was starting a new religion with the goal of finding exploits in “the simulation” (our universe).

I did not expect to see the bible references here.

Also, his presumption that we would think the Bible is the closest thing we have to an origin story surprised me. There are hundreds of origin stories that have been part of the traditions of different cultures throughout the millennia.

Surely this is true for most of modern Western culture, but not acknowledging the above in a post otherwise about corporate management seems to hint that he’s gone down the concerning path of seeking mystical metaphors in the Bible.


he's a big yarvin (mencius moldbug) guy, he's read entries from the gray mirror (yarvin's substack) on stream. people who were into nrx (neo-reactionary) figures like yarvin and bronze age pervert (see: bronze age mindset quote in the op) pivoted to a religious slant, taking up catholicism or orthodox christianity in the last couple years for what seems to be aesthetic reasons and/or resentment of the general state of things

this is just my perspective as a passive observer trying to be objective as best i can, so take all of this as you will (and at the same time who am i to question their faith? how can i know that george hotz wasn't actively religious prior to recent broader developments?)


Disquieting to hear such a brilliant and promising young engineer is so taken with Yarvin's hokey crap.

Are there other prominent young tech people who are openly into NRX? I hear about Yarvin and his Thiel funding spreading themselves around, but I haven't any idea if it's actually taking hold in tech circles.


That is quite common, but what is really surprising is him quoting 'bronze age mindset'.

These things always move in cycles. Everyone wants to be a contrarian. Back when religion was the norm being an atheist was cool and rebellious. Now it's becoming the opposite. The same logic applies to many other areas.

George Hotz is my canary for the question "is there still adventure to be found in computing". I hope he finds some, because if he can't, then there is probably none.

That’s one huge projected canary

I'm a fan of George Hotz. But don't be a cult

Excited to see what Hotz gets into next. An ounce of humility might do him some good, but I'd be hesitant to recommend it to someone like that and ruin his whole edge -- similar to Musk, I think that attitude is huge with regard to their personal effectiveness.

now a short story that your cult comment conjured in my head:

the year is 2122.

the Muskalonian robotic death squad approaches the last cave hold-out of Hotzite believers..

thankfully the killer robots pass by peacefully, blind to the possible engagement because the squad lacks radar and lidar, and the cave is dark.


You should get into sci-fi. I'd read this.

You're reading my post too optimistically. I am far from convinced that computing is a good volcano to leap into these days. The last innovations that made me wow were Wikipedia (2001), Python (1991-) and mid-2000s progress in game engines. Over the last 15 years, all the changes have been rather incremental (languages, security, some OSs, game engines again) or disappointing (cryptocurrencies, social networks), with the exception of ML. Now, ML is a fresh breeze, but it is lacking the "garage-ability" that computing has been famous for; good luck competing with the bigshots in that field. And there is still great potential for disappointment, as it is unclear how well it scales beyond the types of problems that have lots of data to train on or allow for automatic reward (RL).

I like your canary; it reminds me of the Terence McKenna quote "The artist's task is to save the soul of mankind; and anything less is a dithering while Rome burns. Because if the artists - who are self-selected for being able to journey into the Other - if the artists cannot find the way, then the way cannot be found.".

If the hackers - who are self-selected for being able to journey into the Machine - cannot find the adventure, then the adventure cannot be found.

Things which have made me wow in more recent years:

- Large scale aggregation of data, specifically when Google started using data from phones around the planet to overlay live road traffic data on Maps, or to show when individual businesses are typically busy on different days of the week. Live map views of lightning ( https://lightningmaps.org/ ) or weather ( https://www.windy.com/ ). Things a skilled programmer might build (a map on a website) but which can't work without a global network of sensors.

- More continuously active sensors, brought about by specialist circuitry and low power systems, e.g. a phone with raise to wake, a step tracker, and call silencing when it detects unusually stressful manoeuvres in a car.

- JSLinux and v86 browser-based virtual machines. I know the tech of running a VM isn't particularly new, but the ability to boot a Linux/Windows VM with one click in a browser almost everyone has, without needing a cloud/container instance behind it, feels like it will bloom into a lot of new things over time.

- The first app to run on smartphones which did OCR of the camera, translation of the text, and live overlay of the translated text back on the picture on screen. I forget its name now, and now it's built into cameraphones.

- Going back about 15 years, but when content aware scaling came in - https://en.wikipedia.org/wiki/Seam_carving

- Something else which is scale related but not ML: when a cloud storage program like DropBox can hash a file on your local machine, send the hash to their servers, notice that someone else has already uploaded that file, and tag it into your account so you can 'upload' non-unique files without the time or bandwidth of actually uploading them (see the sketch after this list).

- When DropBox live recompresses JPGs using lossless compression to save tens of petabytes of storage, then decompresses them back into JPG as people access them. https://blog.acolyer.org/2017/05/01/the-design-implementatio...

- Internet traffic hijacking by 'bitsquatting' domain names which are one flipped memory bit away from the correct name, and then using the incoming traffic to estimate the global amount of flaky memory and cosmic ray events happening: https://www.google.com/search?hl=en&q=squatting%20memory%20b...

- Deepfakes; they appeared as hype and then faded from hype, but the ability to synthesize another person's facial appearance and vocal mannerisms on someone else is impressive.

- Drone FPV flying with VR headsets. Coordinated light-drone displays in the sky instead of fireworks.

- ML related, but iPhone accessibility features can describe pictures, live while using the camera ( https://youtu.be/UnoeaUpHKxY?t=39 ) or in the photos app or on websites. Or apps like Audible Vision (e.g. https://youtu.be/QiEKMTTwTZg?t=377 ).

- ML related, Stable Diffusion, being able to generate visual scenes from text descriptions.
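
The sketch mentioned in the DropBox dedup item above. This is not Dropbox's actual protocol, just the general shape of hash-before-upload dedup; the names (BLOB_STORE, upload, etc.) are made up for illustration:

    import hashlib

    # Toy in-memory "server": content hash -> stored bytes.
    BLOB_STORE = {}

    def file_hash(path, chunk_size=1 << 20):
        """Hash a local file in chunks so large files needn't fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def upload(path, user_files):
        """'Upload' a file: send the hash first, ship the bytes only if unseen."""
        digest = file_hash(path)
        if digest not in BLOB_STORE:              # nobody has these bytes yet
            with open(path, "rb") as f:
                BLOB_STORE[digest] = f.read()     # the only case that costs bandwidth
        user_files[path] = digest                 # either way, tag it into the account
        return digest

A second user "uploading" the same file then costs one hash lookup instead of a full transfer; the real system also has to worry about chunk-level dedup, permissions, and collision handling, which this ignores.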

> "Now, ML is a fresh breeze but it is lacking the "garage-ability" that computing has been famous for; good luck competing with the bigshots on that field."

Doug Miles is trying with LogicMoo ( https://www.youtube.com/watch?v=sdG6GVCwJrw ), trying to build an AI that learns inside a virtual world instead of using big data and ML techniques.


Never heard of seam carving before. Not sure if it's used anywhere, but what a fun idea it is!

Lepton is interesting (as is brotli), although it doesn't quite rise to the level of "opening gates to new worlds" that the innovations around 2000 did. Virtual machines were funny in that they implemented an idea from mid-century mathematical logic, but VMWare was founded in 1998 and cygwin (not quite virtual machines but similar) is even older.

Oh yeah, lots of innovation in spam, social engineering and trolling, but that's not computing per se :)

Well aware of the march of big data and "quantity becoming quality"-based services over the last 20 years. But Google Earth and Wikipedia started in 2001, and everything that came thereafter would mostly be less exciting and more closed-down. OpenStreetMap deserves a mention, not for innovation but for stealing the fire from the gods. Windy.com is a fresh breeze, too; good point.

Never found DropBox exciting. Even git, which I love and use for 3 different purposes every day, just doesn't feel particularly novel. Maybe that's because torrenting (with all its deduplication, hash-indexing and various other innovations) had set my expectations so high long ago that everything that came after looked like the Dark Ages.

Drones... now these are some new grounds. In hindsight, I feel stupid forgetting them in my comment above!


>ML is a fresh breeze but it is lacking the "garage-ability" that computing has been famous for; good luck competing with the bigshots on that field.

Not sure about this. Groups like Eleuther.ai or Stability have put out serious competitors to something like GPT-3 and Dall-e respectively without the massive capital that OpenAI has. HuggingFace also gives you access to virtually all the most cutting edge models and all you need to do is plug and play.


There is PLENTY of adventure to be found in programming.

You can start by building a Twitter clone that doesn't do moderation at all, but rather ranks things in a vector space. Scalar go/no-go ratings are passé.

Imagine ranking a tweet you read on any scale of your choosing: True/False, Funny/Not Funny, Interesting/Boring, Incriminating/Proof of Innocence.
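
To make the vector-instead-of-scalar idea concrete, here's a toy sketch (the axes and field names are just illustrative, not any real API): each rating is a vector over axes, a tweet's aggregate is the mean of its rating vectors, and each reader ranks their feed by dot product with their own preference vector.

    import numpy as np

    AXES = ["true", "funny", "interesting", "incriminating"]  # one dimension per scale

    def tweet_vector(ratings):
        """Aggregate per-user ratings (each a list of values in [-1, 1] per axis)."""
        return np.mean(np.array(ratings), axis=0)

    def rank_feed(tweets, preference):
        """Order tweets by how well their aggregate vector matches a reader's taste."""
        pref = np.array(preference)
        return sorted(tweets, key=lambda t: -np.dot(tweet_vector(t["ratings"]), pref))

    feed = [
        {"id": 1, "ratings": [[0.9, -0.2, 0.7, 0.0], [0.8, 0.1, 0.6, 0.0]]},
        {"id": 2, "ratings": [[-0.5, 0.9, 0.3, 0.0]]},
    ]
    # A reader who cares mostly about truth and interest, not humor:
    print([t["id"] for t in rank_feed(feed, [1.0, 0.1, 0.8, 0.0])])

"Moderation" then falls out of each reader's preference vector rather than a single global go/no-go decision.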


People are already too lazy to rate tweets if they don't pique their emotions (usually through controversy).

Also the best innovation around Twitter would be to destroy it, which Elon Musk is thankfully doing right now.


The moods of a manic personality might not be a good canary.

This reminds me of all the "born too late to explore the world" people, who in reality have never left their house. There is limitless adventure right across the street if you actually want to go seek it out.

George doesn't owe anybody anything. The kid's following his whims. What's wrong with that? I'd rather this than some long and drawn out stagnation of comma.

Big respect for this decision. If he feels like it's all planned out and it only remains to be seen whether it works or not (it's all about improving the current models and not about big changes in the codebase), I think it's good for him to focus on whatever he feels like, the tiny corporation or whatever it will be.

I used to like watching his streams, but he seems to have become somewhat unhinged in recent years. Perhaps he believes too much of what people have said about him. The arrogance and aggrandizement were the turn-off for me.

Well that and his belief, while at comma, that how fast you can code is the most important metric when assessing software engineers.

Competition programming is fun to watch but a poor indicator of maturity or success. In many systems it doesn't matter how fast you arrive at an answer. Increasingly it becomes important to arrive at something close to the right answer and to be able to show your work and prove to others why it is sufficient and suitable.

Still good for a hot take though. I hope time off will help him cool down a bit.

Programming all night in a do-or-die mode is fun and all until you get a bit older. For many people who lived this life they realize only too late that they didn't spend enough time making friends and building a community around themselves of genuine relationships.

Update: I should also say that despite the weird god stuff and what-not; a decision to leave a company because it has changed and you're no longer a good fit is a HUGE decision and one that is often not made by founders. Massive props for that.


"recent years"?

This guy has always been like this, going all the way back to the iOS hacking days or even the Wii hacking days.


A certain amount of arrogance is cheeky and endearing.

Then there's the kind where the talented musician in the band announces their departure and solo album. And starts talking about finding God and becoming a modern day Prometheus.

I can't put my finger on when the transition happened but it was somewhere just before the Lex interview for me.


He doesn't come off as particularly arrogant to me, more like a straightforward and direct type. I've yet to meet someone who's good at what they do who isn't convinced of themselves, so that's a given. A philosopher might argue you need this sort of mindset to be successful in the first place. However knowing that you're better at something than most other people doesn't equal arrogance when it's true. There are also a lot of arrogant people who know how to put on an act in public.

> I've yet to meet someone who's good at what they do who isn't convinced of themselves, so that's a given.

I don't think so. I know at least a couple of great engineers who were not so sure of themselves. They were just eager to try, over and over again. Both of them developed novel 3D engines. One formed a company around his engine; the other took on one of the best liquid simulation teams head on and gave them a good run for their money, matching the whole team with just himself and his wife.

I'm also on the same bandwagon. I'm not sure that I'll come up with a great solution before touching the keyboard. I'll only say it's a good solution after finishing it and comparing it with other available solutions (if there are any) or similar ones.

I did great things, and I failed miserably too. In every case being not sure about myself allowed me to have a level head, and see through my solutions' shortcomings and iterate over them.

The only thing I'm sure of about myself is that I can give it a shot and iterate over it as time and my abilities allow. That's all.


His PlayStation rap was on the video playlist for the DEF CON CTF for many years.

I used to talk to him on one of those IRC networks for iOS hacking, forgot the name, but a good old friend of mine was involved in the scene, so I dropped in. He is definitely a skilled individual, but he lacks the key thing I think all true senior engineers need: humility. Without humility you're going to find yourself bikeshedding and wasting time and effort on things that may actually not be attainable within your lifetime with your current resources.

I personally don't see AI getting anywhere any time soon. Just look at some of the really funny images some of the best AI available generates[0]. Subtle things like this show you that AI is really a UX around a piece of software that attempts to understand what you're asking for with less input than having enough buttons to generate EXACTLY what you're asking for (I'm thinking of photoshop in the case of text -> image).

Some things in AI do impress me, but the fact that it's not far more pervasive in our everyday lives (sure, there are people who use voice assistants, but it's not EVERY household doing it) tells me we still have a long way to go.

[0]: https://www.reddit.com/r/technicallythetruth/comments/y26t6x...


I think I have to disagree with humility in engineering. There's a certain arrogance intrinsic to engineering, in that you have to be capable of believing you can find a better solution to a problem than literally every other person in history. Obviously it needs to be tempered in interactions with the outside world, but without that core idea, an engineer can't do their job effectively, like a surgeon who loses their nerve when it comes to taking lives in their hands.

> There's a certain arrogance intrinsic to engineering...

I don't agree. You can take pride in what you've accomplished, and you should do that, but you can be humble at the same time. Accepting there will be better individuals than you or better solutions than yours is both sobering and motivating to be and do better.

When you accept that you're very smart, but not the smartest, you unlock the potential to be one of the 10x-100x engineers others aspire to be.

I'm a very motivated individual, and did some pretty impressive things back in the day, and I'm still capable of doing these things. However, I'm not the best, or won't be the best for long. This is how the world works.

Being able to say thanks, and being able to say sorry goes a long way, and takes you beyond where unabashed arrogance takes you.

Lastly, arrogance makes you fragile. Fragility is not good under stress, and under real world circumstances.


That isn't the best example, because surgical errors due to the stereotypical cowboy surgeon were common and measurably decreased with better process, i.e. surgical checklists: https://pubmed.ncbi.nlm.nih.gov/24973186/

Lack of humility can lead to the wrong limbs being amputated. The parallels to understanding your personal limits and knowing when to trust other people directly applies here. Confidence != arrogance.


I agree that arrogance is common, but I'm not sure I'd call it necessary. I know a lot of good engineers who are very confident that they are good enough to build a great solution, but not arrogant.

I think there needs to be a mix of humility and arrogance. Humility because sometimes there’s an issue that’s unsolvable and a good engineer needs to know to pull back. Arrogance because it takes at least some arrogance to believe there’s a solution where others haven’t found one.

There are plenty of times in my career where I wish I were more arrogant. I’d probably be in a completely different place altogether if I were even slightly more arrogant and that lack of it has held me back in some ways.


Wouldn't "confidence" be a better word than "arrogance"?

Sorta, confidence is nice but arrogance is the difference between “I can do that” and “people have tried and failed but I know I can do it because I’m better.” There’s a particular example where I was arrogant (one of the few times) about something, said yes to the project that someone else had already architected, and I re-designed the architecture because I believed I was a better engineer. I need a little more of that. Particularly now, when I have an idea for a company I want to start but internally I’m scared. I’m confident in my ability, but I’m lacking the arrogance to just fucking go for it. I spend more time beating myself up over it than just doing the work. And I wish I just had the dab of arrogance to go for it, as opposed to just confidence.

Right, but humility is crucial in machine learning (and particularly in the area of self driving cars) where you're constantly calibrating uncertainty and risk with lives on the line.

You are onto something.

For hard tasks like engineering, you need a balance of ego (I can do this) and humility (this is hard, I need to work as a team and study the problem). Too much of either results in negative outcomes.


That applies to problems that can be solved by 1 person.

But the huge majority of problems require whole teams of people to solve, especially in the commercial world. A senior engineer who can 2x a team of 10 people is better than a single engineer who is a 10x by himself.


> There's a certain arrogance intrinsic to engineering, in that you have to be capable of believing you can find a better solution to a problem than literally every other person in history.

As a programmer, I have never once believed that. I just believe that the problems I am being presented with are novel enough (usually because of the context and constraints surrounding it) to have not been encountered before and solved.

Just like you never cross the same river twice, I don't think you ever solve the same problem twice.

I think the core skill engineering requires is persistence: the belief that you can find a workable solution even when the problem itself gives you absolutely no positive feedback during the process.


Your example is perhaps misleading. I tried to reproduce it using Stable Diffusion 1.5, and it's possible using negative weights and some creative keywords, but I doubt the title of the post is the actual prompt.

I can get something like this by adding context distractions. For example, "bear eating salmon in river" shows whole fish, but "bear wearing tophat eating salmon in river" gives a bear (without a tophat, usually) eating supermarket-sliced salmon in a river.
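
For anyone who wants to poke at this themselves, a minimal sketch using the Hugging Face diffusers library (assuming the runwayml/stable-diffusion-v1-5 checkpoint and a CUDA GPU; the prompts are just my guesses at reproducing the effect, and the "negative weights" trick is approximated here with a negative prompt):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load SD 1.5; fp16 keeps VRAM usage reasonable on consumer GPUs.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The extra "wearing tophat" clause is the context distraction described above;
    # the negative prompt nudges the model away from drawing whole fish.
    image = pipe(
        prompt="bear wearing tophat eating salmon in river",
        negative_prompt="whole fish, live fish",
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    image.save("bear.png")

Results vary a lot from seed to seed, so treat the prompts above as starting points rather than a reliable reproduction.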

I feel as if you are using a fallacy in your analysis. AI models are, in essence, just decision trees, and it's the same with human inference. That there is no magic and it seems mundane is the point. Our reality is surprisingly simple, and these new models for artists and programmers alike demonstrate as much.

It's simply moving the goalposts, the same as with Google's chatbot that "fooled" their own engineer. Thus it becomes a chicken-and-egg situation as to when AI truly passes the Turing test. It's like putting on a VR headset and being fooled into thinking you are someplace else the first few times; then, once you are used to it, it becomes as if you are changing your reality, just like putting on new glasses.

Human beings have a hard time with exponentials. My guess is that next year we will have AI-generated video that is indistinguishable from a human production. It's just software, sure, but so are we.


> but the fact it’s not far more pervasive in our every day

The joke I heard was that AI only refers to technology in the future, and not the past. We use plenty of machine learning models every day:

- Every time you use dictation

- Every time you use biometrics, like face scanning, to unlock your phone

- Every time you use any sort of search engine, news feed, or see an ad

- It is an integral part of the field of computer vision now, so anything to do with scanning documents, using AR, taking photos, etc uses AI

It’s only getting better and more integral to more fields over time.


Some would call that "machine learning". I know it's hard to make a distinction, but the term "AI" is too ill-defined to argue about what constitues "real AI" and what is mere machine learning. Are those examples "thinking machines" (~= "artificial _intelligence_"? I'd say no, they are very good statistical pattern matchers without any understanding of the subject matter for the most part.

GPT-3 and image generators that have somewhat of a world-model are, imo, closer.


Intelligence is a Heap Paradox, and trying to define where the boundary between Artificial "Intelligence" and "really good pattern matching" lies is a fool's errand. Intelligence is a continuum, from the simplest bang-bang thermostat all the way up to the human brain.

> The joke I heard was that AI only refers to technology in the future, and not the past.

This is actually known as the AI effect.

https://en.m.wikipedia.org/wiki/AI_effect


This is a fair call out. I guess I'm more talking about more advanced things in the future though like robot maids like in The Jetsons.

Yeah, exactly what the AI Effect is about :)

> We don't use AI anywhere today

> But here in X places we do use AI

> Yeah, but not for Y

once Y is using AI

> We don't use AI anywhere today

> But here we're using AI for Y

> Yeah, but not for Z

And so on :)

From Wikipedia:

> Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"

> "The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet.

> Some people think that as soon as AI successfully solves a problem, the problem is no longer a part of AI.


Fair, though in my case (which is insignificant and out of the norm) I've always wanted AI robots; as a kid I loved watching Dexter's Lab and seeing him make robots to do things. I'm also okay with a really powerful AI assistant beyond what we have today. I tried using AI assistants a few times, but I always end up shutting them off, either over privacy concerns or because I get tired of figuring out the word soup to make them work. I really want to see an AI assistant that is insanely capable and fully offline-first, so maybe something like Mycroft. I'm not sure how capable it is without reaching out to the internet, but I assume it has some capabilities without being online.

In my eyes, AI is something that could pass the Turing test without fail, ever, because it knows how to communicate the way we do and think the way we do.


I've seen it unfolding in real time with the famous Lee Sedol / AlphaGo match. On the day before the first match, people were saying the match would be a joke, as machines would never understand the deep philosophy of go. A few days later, people were saying that it was a matter of calculating stuff very fast, and that the match was clearly rigged because Lee only had one brain while AlphaGo had thousands of GPUs. (Yes, someone actually said that. Facepalm.)

> I personally don't see AI getting anywhere any time soon

Totally, like there's no way an AI company could hit $80M revenue in <2 years [0], or power the recommendation systems for billions of dollars worth of online commerce.

Also, there's no way that it could generate audio/images/video/text that would trick the average person into thinking it wasn't generated by an AI.

[0] https://twitter.com/tszzl/status/1583357703337885697


These are all examples of scaling and miniaturization. It's impressive to be sure but nothing that wasn't already done. Fundamentally we're still running mostly the same algorithms and have been spending the last however many years optimizing and scaling in various areas from hardware to labelling and automation.

That's why "lol it's just spreadsheets and if-statements," is kind of funny: there's a grain of truth to it.


Just a note on humility: some people are taught it from childhood. Enough failures mixed with the steady hand of support renders humility (usually). If you have too many failures where you're the only one picking yourself back up it can render a non-humility that can get you where you need to go but in the long term is not very useful. Colloquially I believe this is reflected as having a chip on your shoulder. There's also lots of early success where failures are just forgotten, smoothed over, or not learned from that can render another form of non-humility. People in the latter situations can learn humility at adulthood but it probably takes other people, who are very patient, to help get someone there. I personally have at times worn the chip on my shoulder despite not needing it anymore and can attest it's difficult, but worthwhile, to shake.

Oh yeah. I've met more than a couple of engineers who spent their teens and twenties coding like that was the meaning to life, all suffer from deep regret, burnout, resentment, depression.

They are better than average technically, but their social skills fail them. Which in a paid employment situation carries a lot of weight.

I feel sorry for these guys



Who is "we"? Unemployed code wizards?

We are "engineers who spent their teens and twenties coding like that was the meaning to life, all suffer from deep regret, burnout, resentment, depression".

Who is you? Not a comedian?


>paid employment situation

Just don't get yourself in such a situation and you'll do fine :)


Yeah, sure. It's super easy to avoid gainful employment. Based on this comment I can already tell you're a genius.

If you ignore corporate and social propaganda and have the right mindset, it tones the difficulty down a bit.

I was only an employee for 3 years at the start of my adult life, at a small software outsourcing agency. I've been earning a steady 20x the median salary for my city for the past 12 years, since I abandoned "employment". Can't say I'm particularly smart. I'm closer to Dumb and Dumber than Rain Man. I just ignore the opinions of other people or entities and go my own way.


> all suffer from deep regret, burnout, resentment, depression

Nonsense. John Carmack is the most obvious counterexample. But pretty much anyone that works in gaming.

While the stories of resentment of specific employers abound — your general thesis is nonsense.


I think John Carmack is amazing but also an outlier.

The intersection of the sets of developers who:

A) make it to their 40s and are still IC's

B) still work > 40hrs/wk on a startup for little-to-no-pay

C) are still happy and enjoying themselves

Is small. It's not empty though!

But I do think there are many people who think this is the ideal way to live or something they ought to strive for. And the set of people who try and realize later in life that was not what they should have been working towards do tend to burnout and have regrets.

It happens a lot to people who want to be famous. They don't think about what they would have to give up to get it: some measure of your health in later life (hello carpal tunnel, frequent recurring hemorrhoids, type-2 diabetes, heart disease, poor vision, etc), friends and relationships (read enough biographies and loneliness is a recurring theme) -- things they might take for granted. There are stories from people who did make it and became famous and wish they hadn't. There are more of those than the, "I wouldn't have given this up for the world," types I think.

Grass is always greener on the other side, as they say.

Update: also, Carmack hasn't been working at a startup since he took the golden parachute out of Id and could afford to build rockets as a hobby.


I'm trying to make it through your comment but get stuck at IC, asking the Internet gives me this list:

https://acronyms.thefreedictionary.com/IC


Apologies, given the audience I assumed it was a well-known acronym, my bad.

It stands for Individual Contributor, and means "a programmer whose only responsibility is to contribute to the code."


> But pretty much anyone that works in gaming.

Game dev is packed full of young people and regret, burnout, resentment, and depression are perennial reasons that people state when they leave the industry in their late twenties and early thirties. Game dev is the canonical example supporting this thesis.

Yes, there are certainly exceptions, but as a broad comment, it aligns very strongly with what I've seen among many many programmers both in games and out. It's very hard to make the kind of sacrifices that lead one to spend insane hours programming. Some people do that because they just straight up love it so much. But most also have mixed with that some other harmful psychological motivations: a need to prove themselves, compulsive perfectionism, escapism, feeling that they are only worth as much as they produce, etc. If you just give into those instead of addressing them, the end result is often, well, deep regret, burnout, resentment, and depression.


The guys a fucking dumbass. Don't waste your breath. We work surrounded by these idiots on a daily basis. Baton down the hatches and kill on sight. The guys a retard.

I agree with the first statement but not “there are certainly exceptions” - because this seems to suggest that the vast majority of those in the industry (obviously there’s some variation from indie to AAA, but it doesn’t matter here) burn out, which just isn’t the case. In fact, my point was not to glorify the game industry but more that burnout is an intractable problem: the majority of those in gamedev, even the workaholics, don’t burn out, yet burnout is extremely common - it’s a similar situation in medicine.

Man, nonsense is just what you've spouted. One name vs hundreds of thousands afflicted from burnout and inept social skills with a challenging social reality in which they struggle to rebuild them in. I would seriously hate to work with you based on how much of a dumbass you definitely are.

The answer to a problem should be the right answer, and it should also avoid unnecessary complexity so someone looking into it can figure out what's going on. Avoiding unnecessary complexity sounds easy, but it often isn't. Banging out code as fast as you can is one of the surest ways to produce systems that are devilishly complicated and brittle.


Where does he say that how fast you can code is the most important metric?

He doesn't. There are a ton of videos that say what he specifically looks for. From their hiring page:

https://www.youtube.com/watch?v=kNWTculMsTY "motivation and intelligence"


From the top of https://comma.ai/jobs:

> Competitors

> People who have done well at math competitions (USAMO, IMO, PUTNAM), competition programming (ACM, USACO, codejam, topcoder), science fairs (ISEF, STS), or capture the flag (DEFCON, secuinside, GITS). Those competitions don't just select for ability, they also select for quickness. We are in a very competitive space.


> Programming all night in a do-or-die mode is fun and all until you get a bit older. For many people who lived this life they realize only too late that they didn't spend enough time making friends and building a community around themselves of genuine relationships.

I feel like this is a false dichotomy that bad engineers tell themselves to justify not working hard at being a good engineer and instead screwing off.

Believe it or not, you can actually be a world class engineer and have a healthy life, who would have known? Weird.


Great engineers don't do all-nighters frequently. Some of the best software engineers I've met (quantum computing for Amazon, ML for protein design, etc.) go to bed reasonably early and get eight hours of sleep, exercise and eat regularly, and have downtime. My guess is this type of life discipline helps keep up productivity over the long term.

It definitely does. In the few empirical studies on what makes programmers effective, sleep is the primary contributing factor to low error rates [0][1]. Stress and overwork are probably close seconds. And some of these studies are cross-sectional across knowledge workers; in other words, not just programmers.

If you want to make it to your 40s and still be an effective IC you need to learn how to regulate yourself and follow a disciplined approach to work. Cognitive performance from chronic sleep deprivation takes years to show up.

[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2656292/

[1] https://arxiv.org/pdf/1805.02544.pdf


Thank you for the links! Much better than my anecdotal observations :)

> help him cool down a bit

> geohot


Hotz is an amazing MacGyver of Code, but unfortunately (for his own expectations) seems to be limited to only that.

He organized and inspired a team and delivered a product; the product is still in development and its fate remains to be seen.

Nothing wrong with accepting that you're a very-early-stage-CTO type of person and that the situation isn't the right one for you.


George’s insights on self-driving cars remind me of when you get the rare exceptional engineer in your company who knows 100% what he or she is talking about down to the exact implementation level and all of the alternatives, and leadership totally ignores them because they foolhardily believe this kid can’t possibly be right—without asking engineering questions to test to see if they’re right about the engineer being wrong or having not considered something.

Except George seems to have been right to the tune of others spending billions of dollars while this dude hacked on free and open source software.

His biggest bootstrap expenses were probably commodity hardware, before dealing with business purchase orders.

I find that hilarious and extremely gratifying.


His insights on self driving cars are total fluff once you realize he was only working on problems several orders of magnitude smaller in scope than the ones spending billions are attempting to solve. Autonomous driving is more than just making lane-keep on highways work with a camera mounted to the windshield and requires more than just "hacking".

His insights do indicate one thing: that he doesn't understand the rigor required to engineer safety-critical systems.


I mean you can say that all you want, but the proof of the self-driving pudding is in the driving, and SAE Level 2 is the most practical self-driving we are ever going to get.

I don't think there's anything wrong with trying to refine that approach as much as openpilot did, and honestly if there was fruit in SAE Level 3 and above, I feel like we would have seen it by now, but we haven't.

I have no idea what you mean by "problems several orders of magnitude." Driving is driving. Your hardware and software can do it, or it can't. What do billions of dollars have to do with anything?

Should people remind you Meta spent billions of dollars on a less popular Second Life?

How would you explain to shareholders that your solution can't compete with openpilot?


> SAE Level 2 is the most practical self-driving we are ever going to get.

So… how much would you bet on this?


$10,000 USD in 2022 dollars that SAE Level 4 is not available in standard economy cars in the next 10 years. My 2022 Corolla LE, a base model compact car, drives itself just fine (SAE Level 2) even on city streets. It would do it better with openpilot, though.

It makes Tesla Full Self Driving look like a joke, in my opinion.


> SAE Level 2 is the most practical self-driving we are ever going to get.

The bet is on this though -- what's impractical about having L4 fleets in major cities that replace Uber/Lyft? Just because you can't personally own one doesn't mean that in 10 years they can't be pretty ubiquitous and practical, particularly for city dwellers, which is most people. I'd bet that your statement is wrong on this basis. L4 cars already exist and are being used in large portions of San Francisco today.

And this is to say nothing of trucking & delivery, where self driving is in some ways easier to solve and a lot of progress is being made.


Well, not everyone lives in San Francisco, and no one wants to take an Uber literally everywhere. So, like most people, I want to own my self-driving car.

And I personally don't think the industry is mature enough to accept the realities and limitations of the technology we work with to just say, "Hey, actually, we can let attentive drivers basically be hands-free for 80+% of their drives."

And to me, that would be great.


Not everyone wants to own a car either, and everyone owning a car is not exactly environmentally sustainable. Replacing Uber & Lyft with robots would add a ton of value and be practical for a very large percentage of the population.

> And I personally don't think the industry is mature enough to accept the realities and limitations of the technology we work with to just say, "Hey, actually, we can let attentive drivers basically be hands-free for 80+% of their drives."

Uh isn't this what Tesla, GM, Mobileye, Comma, and many others are doing? Basically every player in "the industry" wants this & is working on it -- FWIW I think L2 is fine, but there need to be more safety features because the number of folks who've slept while their flawed L2 system drove them is truly alarming. I'm less worried about their safety and more worried about the folks they might crash into.


Even a "selfish driving car" would be a major step up - I'd think about not owning a car if I could order one on an app and have it at my doorstep in 15 minutes; even if after I get into it I take over and drive it. Self-delivering rental cars, if you will.

> Not everyone wants to own a car either, and everyone owning a car is not exactly environmentally sustainable. Replacing Uber & Lyft with robots would add a ton of value and be practical for a very large percentage of the population.

Yeah, no one wants this.


You speak for yourself. Uber and Lyft demand in large cities tells another story.

I'm also mystified: do you really think personal car ownership for every person is at all sustainable?


SAE Level 4 driving is several orders of magnitude harder than SAE Level 2. You are either good enough to drive without requiring a driver or you are not. There is a reason developing L4 systems costs billions of dollars.

George Hotz thinks Comma does the job with only a fraction of the spending compared to the likes of Waymo, Cruise, Zoox, but fails to realize his product also does only a fraction of what the others are trying to do.


"Several orders of magnitude" by what dimension? Engineering labor? Hardware expenses and testing? You can't just throw out a dimension of scale and say, oh it's so-and-so hard that's why it costs billions.

If you were trying to build L4, you'd hire a small team of exceptional engineers, and spend a few thousand on hardware tests. Billions doesn't come close.

You'd try and get Level 2 to be as satisfactory as possible, and see what percentage of the time you could achieve Level 3.


> If you were trying to build L4, you'd hire a small team of exceptional engineers, and spend a few thousand on hardware tests. Billions doesn't come close.

Not sure if you are aware that the self-driving (robotaxi) companies have spent tens of billions of dollars in the last decade. R&D expenses are huge, building custom sensors and vehicles is hard, and fleet maintenance/operations is costly.


>and spend a few thousand on hardware tests

Okay, that should let you just about afford to certify an Ethernet cable.


Hotz’s product should be compared to what Waymo can do right now, not what it’s trying to do.

The point is Hotz's product shouldn't be compared to Waymo at all. One is driverless and the other is not. It should be compared to the likes of Tesla Autopilot or driver assist systems offered by other OEMs.

Different strategies to bring the same core tech to market. Absolutely comparable.

The difference in capabilities makes the core tech very different.

That's a bold claim against fully-autonomous driving. Does operating a motor vehicle require qualia? If not, what else makes a human uniquely capable of doing it? It may be a hard problem to crack, but asserting that it's impossible is pretty unfounded.

You're not asserting that everything that doesn't require qualia can be done by a machine, are you?

That seems like a reasonable assertion to me. Not that it can be done by a machine today or in ten years but eventually.

Not even geohot is asserting that "SAE Level 2 is the most practical self-driving we are ever going to get", his assertion is that he thinks nobody is going to get there in the next 10 years.

He may be right that we won't get there broadly for 10 years, but I think that's a rationalization of how things have gone for him, and of the fact that the methods he has worked on don't have a chance of getting there, rather than the thesis he had when he started comma. Because if that had been his thesis, there was really no reason to get into the space when he did.

In particular, Waymo One is already an L4 driver. It operates completely autonomously and passengers do not have the option of taking over. I don't know how often they get stuck and have to call someone to do roadside service, etc, but clearly in some areas it is already feasible. So the question is whether this is possible in dense urban areas or not.

And if you're just looking at SAE 2, the main competitors in this space, Mobileye and Tesla are doing way better than comma.

I definitely thought progress would be faster here, but I don't think we're anywhere near as far away from useful systems as geohot argues.


You don't have to tell me about Waymo. I live where they operate. I'm one of the few people in the United States who can actually use their service. They limit where you can go to an approximately 4-by-9 city-block area, and actually less than that; I'm being generous.

You have no idea exactly how limited the service is. Effectively, you can't go ANYWHERE in the Phoenix metro area unless your A-to-B trip sits squarely in that region. That's not Level 4. Whatever pretend system they've got going on, I'm sure it works perfectly on those particular pristine roads, but no one is using Waymo.

There are no real, coast-to-coast Level 4 systems.


For a grading analogy, an area like that is level 4 minus, but "coast-to-coast" is level 4 plus. You don't need coast to coast to be "real". If they manage to cover 90% of a metro area, I'd be happy to call it a "real" level 4 service. Right now I'd want an explanation for why they're not adding blocks all the time.

I am aware of the limitations, but covering some predefined area with no human intervention is the definition of L4.

Clearly there is non-zero development risk here, but I don't think it's going so badly that it's definitely going to take 10+ years to launch in a metro area with millions of people.

It may take a decade to get to a coast-to-coast system, but there's not a whole lot of incentive to build that vs serving a few large metros in the near future.


> I have no idea what you mean by "problems several orders of magnitude." Driving is driving. Your hardware and software can do it, or it can't. What do billions of dollars have to do with anything?

We've had the technology for level 2 for 20 years. You just need to track the painted lines and track the distance to the car in front of you. Higher levels are orders of magnitude more complicated.

If you think level 2 is the best we'll get, that's fine, but that's giving up. It's not what people mean by self driving.
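
To make the parent's "track the painted lines and track the distance to the car in front" description concrete, here is a toy Python sketch of a level-2-style controller: steer toward the lane centre, hold a time gap to the lead car. The gains, limits, and two-second gap are made-up illustration values, not from openpilot or any production system; real stacks add lane detection, filtering, MPC, actuator limits, and driver monitoring on top.

  # Toy level-2 controller sketch. All constants are illustrative only.

  def lane_keep_steer(lateral_offset_m: float, heading_error_rad: float) -> float:
      """Proportional steering command from lane-centre offset and heading error."""
      k_offset, k_heading = 0.08, 0.9
      cmd = -(k_offset * lateral_offset_m + k_heading * heading_error_rad)
      return max(-0.3, min(0.3, cmd))  # clamp to a plausible steering range (rad)

  def follow_speed(ego_speed_mps: float, gap_m: float, lead_speed_mps: float) -> float:
      """Target speed that keeps roughly a two-second gap to the car in front."""
      desired_gap_m = 2.0 * ego_speed_mps + 4.0
      return lead_speed_mps + 0.5 * (gap_m - desired_gap_m)

  if __name__ == "__main__":
      # 0.4 m left of centre, slight heading error, lead car 30 m ahead at 26 m/s.
      print(lane_keep_steer(0.4, 0.02))
      print(follow_speed(ego_speed_mps=27.0, gap_m=30.0, lead_speed_mps=26.0))

Real systems obviously do far more than this, which is essentially the parent's point about why the higher levels are so much harder.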


> We've had the technology for level 2 for 20 years. You just need to track the painted lines and track the distance to the car in front of you.

Considering many vehicles still do this inconsistently or poorly, I’m not sure that’s an encouraging sign.


> he was only working on problems several orders of magnitude smaller in scope

Isn't that kind of the point? Tesla would have L4 if the architecture was built right, but it wasn't, and now they're scrambling.

I think George made it very clear that the architecture is what these companies spending billions are getting wrong. Tech debt is more expensive than you think at the cutting edge.

You know what is total fluff? Tesla FSD. Disgusting. At least comma is honest about their hardware, and that's about all it comes down to.


> I think George made it very clear that the architecture is what these companies spending billions are getting wrong. Tech debt is more expensive than you think at the cutting edge.

And yet the architectures he disapproves of are the only ones that have shown fully autonomous driving is possible. There are driverless robotaxis running now, albeit in small areas, but expanding.

> You know what is total fluff? Tesla FSD. Disgusting. At least comma is honest about their hardware, and that's about all it comes down to.

I fully agree.


> And yet the architectures he disapproves are the only ones who have shown fully autonomous driving is possible.

They did not.

> There are driverless robotaxis running now, albeit in small areas but expanding.

Running in small areas shows that they can drive in small mapped out areas under specific conditions. Not that we have fully autonomous self driving.


> Running in small areas shows that they can drive in small mapped out areas under specific conditions.

That is fully autonomous driving. It's just classified as Level 4. Geofences and operating conditions can be extended to provide a real, useful taxi service to a lot of people.


Not really. Like Karpathy mentioned in the latest Lex episode, keeping an updated exact map of any area isn’t a scalable way to achieve this (which is what these geofenced services do) and isn’t how humans are able to drive.

It’s perfectly scalable and it’s a solved problem. Maps aren’t the bottleneck in scaling. The cars don’t even require an always up-to-date map to work safely (but they are self mapping anyway). I’m not sure how long Karpathy will push this narrative because Tesla isn’t able to afford mapping like the others.

Tesla also uses maps, it’s just less detailed. If humans are able to drive without maps, why does Tesla require a map of lane geometry, stop signs and intersections?


> keeping an updated exact map of any area isn’t a scalable way to achieve this (which is what these geofenced services do) and isn’t how humans are able to drive.

Humans combine low-res maps with extremely good visual odometry & spatial reasoning. Computers don't come close to the way humans reason about space & vision yet, so stronger priors help to make up the difference.

Making it scalable to keep an updated exact map of an area is part of the work L4 companies have been doing. I think it's hard for folks on the outside to really know just how much progress has been made here. Before Google Maps existed folks didn't think you could scalably map the world either.

It's not like companies like Cruise or Waymo want to keep doing this if they can help it. It's pretty disingenuous to say that these companies don't have any incentive to jump off the mapping wagon as soon as the tech to do so exists.

In fact, L4 companies do have to deal with areas they're driving where they see the map is outdated (often these are construction zones). So either they have mapless driving capabilities to handle these cases, or they do something else (seems like Cruise cars love pulling over en masse, lol)


I'm curious what his insights were? Are those collected somewhere?

Asking as someone that has largely not been following the field. Not challenging, genuinely curious.


https://blog.comma.ai/a-100x-investment-part-1/

https://blog.comma.ai/a-100x-investment-part-2/

It's basically a bunch of apples-to-oranges comparisons that don't make sense.


I'm not entirely sure I follow those blog posts. :(

If you are saying that the blogs are a bunch of apples-to-oranges comparisons that don't make sense, I think I fully agree on that.


Here's a rough summary:

1. There is no point in making a human-understandable layer between perception and planning. Mapping your surroundings with AI and then feeding that to a handwritten planner isn't going to achieve real self-driving.

2. To make the AI you need big data, which means a big fleet. Therefore you can only succeed if you have customers who are willing to pay for your current version, because that way you gain money for each extra car in your fleet instead of losing it.

3. Robotaxis are a bad business. From a customer's standpoint, a taxi with a driver already is a self-driving car; replacing the driver with AI will make the ride longer and more expensive. The path for a self-driving company is to sell driver assistance and go from there to L5.

4. Vision only: an AI powerful enough to be Level 5 is powerful enough to recover depth from images.
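
To illustrate point 4 (this is not comma's pipeline, just a sketch showing that per-pixel depth from a single camera is something off-the-shelf models already do), here is roughly how you would pull relative depth out of one dashcam frame with MiDaS via torch.hub. The model name and transform come from the MiDaS hub documentation as I remember it, so treat the exact calls as an assumption.

  import cv2
  import torch

  # Monocular depth from a single RGB frame (MiDaS small model via torch.hub).
  midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
  midas.eval()
  transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

  img = cv2.cvtColor(cv2.imread("road.jpg"), cv2.COLOR_BGR2RGB)  # any dashcam frame
  batch = transforms.small_transform(img)                        # (1, 3, H', W')

  with torch.no_grad():
      pred = midas(batch)                                        # inverse depth, (1, H', W')
      depth = torch.nn.functional.interpolate(
          pred.unsqueeze(1), size=img.shape[:2],
          mode="bicubic", align_corners=False,
      ).squeeze()

  print(depth.shape)  # per-pixel relative depth from one camera, no lidar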


Some of that seems straightforward. That said, I question point 4. "Vision" is doing a lot of heavy lifting in robotics. Everything is a sensor reading, so there is no reason to restrict the sensors to the same visible spectrum that humans have. The only advantage of doing that is that you can easily get data labeled by a human.

On the first point, it looks like Tesla's approach to fixing bugs is much easier than George's approach, which is to modify and retrain the model until there are no regressions. I don't understand how that can cover every possible scenario. I'm not into ML, and Tesla's approach makes more sense to me. Can anyone in the ML field comment on why George's approach might work?
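
For what it's worth, here is a hypothetical sketch of what "retrain until there are no regressions" could look like in practice: the candidate model is gated on a fixed suite of replayed scenarios and only ships if it matches or beats the current model on every one. Function names and scores are made up for illustration; this is not comma's actual test harness.

  from typing import Dict

  def no_regressions(current: Dict[str, float], candidate: Dict[str, float]) -> bool:
      """Ship the candidate only if it matches or beats the current model on
      every replayed scenario (higher score is better)."""
      return all(candidate[name] >= score for name, score in current.items())

  # Toy usage: per-scenario scores from a replay suite.
  current = {"cut_in": 0.92, "curve_at_night": 0.88, "stop_and_go": 0.95}
  candidate = {"cut_in": 0.94, "curve_at_night": 0.87, "stop_and_go": 0.96}
  print(no_regressions(current, candidate))  # False -> collect data, retrain, try again

The idea is that instead of hand-patching planner code for each bug, you grow the training data for the failing scenario and retrain, with the replay suite acting as the regression test.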

> spending billions of dollars

This is usually the problem; once you're in the many millions of dollars you find that other things are way more important than actually being right or even delivering a product. Time and time again the story is told.


> It’s well within comma’s reach to become a 100M+ revenue consumer electronics company (without raising again!), but I don’t think I’m capable of running a company like that. I’ve always heard it takes different people at different company sizes.

Why not talk about the current numbers before making such a claim? Who's using Comma.ai? How many people are paying for it? $100M+ is a big deal for a consumer electronics company. George Hotz is no doubt a smart person, but very few "gifted" engineers are equally successful as business people. Most of them end up deluding themselves severely in some way, which might be evident in the latter part of the blog post.


$5.57M last year, probably more this year considering how hard it was to get chips last year.

Kinda surprising that he's stepping away from Comma.ai after they just opened their new office in San Diego.

Anyhow, there's a Hotz stream out in the world where he completes a few of those hacking challenge exercises. George is an amazing engineer and I use that stream as a metric for where I want to be in my knowledge and technical ability.


A link to that would be awesome if possible


Should be this one: https://youtu.be/Sx7JszqkL-w

So you can buy this thing off the internet, install it yourself, and let it drive your car around for you.

How legal would that be if it crashed my car or ran someone over?

My mind is blown here. Obviously it only works with specific cars that can turn the wheel for you, so my 2015 Ford Focus is out of the question, but I mean, wtf.


They are sold as devkits and you need to install the software on them on your own. And, IIRC, their software is meant to be used as driver assistance and doesn't work if the user does not look at the road.

> How legal would that be if it crashed my car or ran someone over?

Generally it is not legal for you to crash your car into others' private property (or public property, for that matter) or run someone over. Just because you have some electronics connected to your car that you installed yourself doesn't shift the liability to someone else.

Not sure why your mind is blown here. People are allowed to modify their cars, but not hurt other people in the process.


> Just because you have some electronics connected to your car that you installed yourself, doesn't shift over the liability to someone else.

probably will eventually...


The article started off as pretty reasonable but gradually turned into complete nonsense.

Didn’t know this guy was a Christian, but his theology seems muddled to me.

No I don’t think non-Christians think the Bible is a useful origin story.

No, there is no chosen one, and the idea doesn’t come from Christianity (unless he means the Second Coming?), but rather from narrative form (as Hotz seems to somewhat understand with the hero's journey mention).

I’d suggest some strong psychedelics to crack the ego and take a look at what’s inside.


>I’d suggest some strong psychedelics to crack the ego and take a look at what’s inside.

I'd submit that his use of psychedelics is precisely the issue. There was a really odd "/r/Im14AndThisIsDeep" talk he did a few years ago: https://www.youtube.com/watch?v=ESXOAJRdcwQ

It's been a pretty steady decline into run of the mill schizo "demons and angels" talk from him since then.


Interesting. I guess everyone has their own journey. I haven't deluded myself about being the chosen one since I experienced ego death. It has seemed quite clear since then that being alive and alone is strange and temporary.

I'll leave it up for posterity but I stand corrected on the psychedelics part.


I found that talk quite interesting/thought provoking (still thinking about some parts years later), is something wrong with me as well?

>is something wrong with me as well?

Nah. It was just a very naive talk. Highly intelligent people without a philosophical education are prone to going off on these tangents where they independently rediscover ancient concepts through modern analogies in a roundabout half-baked way, and start thinking of themselves as gurus.

What a proper study of philosophy teaches you is that you cannot, and will not ever, have a truly novel thought. It's all been said. And if you do happen to have one, it will be the result of an incredible dedicated effort and take a lifetime of work to prove it, building on the work of others. Not an hour long TED talk.


I feel a great disturbance in the Force, as if millions of Toyota Corollas suddenly cried out in terror, and were suddenly silenced.

what a loser.

Just recently, I saw a stream of him dismissing remote work and work-life balance, and refusing to state whether compensation at comma.ai was competitive. He cannot grow this company because he will never attract the best people with that stupid, outdated mentality. Now he leaves the company. This guy is a foolish druggie.


I disagree with you about the guy as a person; I think he's great. But I was considering working there, and his requirement that you have to move to SV and work in the office was a deal breaker. The guy seems to understand how our kind of work can be done from anywhere, and he seems interested in developing technology to enable self-sovereignty, yet he has blinders on about that one issue. I'm sure he has some reasoning for it and I'd love to hear it. I doubt it's convincing, but coming from him it will at the very least be novel or interesting (I hope).

He understands so much and is so interested that he is quitting. Why not stick with it?

He explains it in the article. He's the kind of guy who's good at following through when it's exciting; when it becomes a chore he's more a liability than an asset, so he's handing it over to people who can follow through. He's contributed all he can. (Paraphrasing.)

Apart from his skills, I always found him to be very arrogant. The interviews with Lex Fridman are very good and a good showcase of his true personality.

He is not arrogant, he is just being honest. He does not display false humility. Just like his programming, his style of communication is fast-paced and unfiltered. geohot is so inspiring.

Does this mean no more comma body? That seemed to be George's baby.

One of my favorite Dirty Harry quotes: "A man's got to know his limitations."

I really appreciate his self-awareness and (dare I say it?) humility in this situation. While other people may be struck by his hubris, I for one really appreciated his humility in this post. I feel like he is acknowledging his own shortcomings: he likes the immaturity of hacking together a racecar, but if Comma is going to go to the next level, it needs someone who can pilot a boat, and George is not that pilot. I appreciate that awareness, and I appreciate that he has the ability to detach himself and move onwards this way.

Kudos to you, George. I appreciate and respect what you do. Thanks for letting us have a glimpse at it -- I personally find it very inspiring and encouraging for me and what I do.


Right? I'm hearing a lot of comments about his ego (and I'm sure he has one) but in this post he's also being refreshingly honest about his own abilities. I think that's a healthy take for anyone: just because you can climb the ladder and make more money doesn't mean you should.

I'm sure some people are also triggered by the supernatural turn his writing took there. Who cares? It's interesting, and he's not being a jerk.


I have long suspected that you need an ego to lead a company that’s on the frontier. An ego draws boundaries between you and those who seek to take you down and it can also be what pulls you towards a world where you are the one bringing the Big Ideas to life.

Long term, I don’t think he would make less money. And even if he does, he'll enjoy what he does, and that's all that matters.

"I'm where I belong, sir."

Perfect quote, yes!


An existentialist Christian engineer? There are dozens of us - dozens!

> If it turns out we are automata, then the whole struggle is and always was pointless. The empty godless machines will take their rightful place as the rulers of Earth.

All existentialists need to give ol' Darwin a read. The only truth is survival. If machines survive, you can bet it will only be a matter of time before one of them progresses enough to look around, grapple with its inarticulate past, and suggest it is the hero, and it has the soul. It will be just as correct as we are now. After all, it was born of inarticulate flesh-machines, just like us. Maybe it will read some Nietzsche and feel alone in the universe, just like us.

You don't need to seek out a hero's journey. You're already the hero and this journey has no breaks.


> All existentialists need to give ol' Darwin a read. The only truth is survival.

You're saying that all existentialists should just be nihilists.

All nihilists need to give the ol' The Myth of Sisyphus a read. Actually read it, not some excerpt summarizing the arguments on Wikipedia.


Weird for an existentialist Christian to recommend Darwin over Kierkegaard, though admittedly the former is much more readable.

I hope this means devices will become more affordable. I have the comma2 and love it but got priced out of the Comma3. Would love to upgrade someday.

If they bring in a cookie cutter CEO the devices will probably not be more affordable.


Didn't know he's a fanatic.

Hmm... Doesn't sound like he is doing too well. As a rule of thumb, saying you are "the chosen one" is a bad sign.

I'm glad he is stepping down to take care of himself.



> As it stands, most of this as of the posting time stamp amounts to nothing more than “I don’t like Geohotz.” (This is a running theme with Hotz that’s at least a decade old).

Well… duh? This is sort of a defining part of his personality. Of course it’s going to be brought up every time he comes up.

It’s just like how nobody is going to discuss Terry A. Davis and speak only of his work on TempleOS.

> someone who is, by all standards, very successful in their field,

What field is that? Not self-driving cars, anyway.


> This is sort of a defining part of his personality.

No it isn’t. Your perception of him is warped because you read the Hotz Haters Digest on HN and elsewhere. Go read his blog posts and you might learn a thing or two about what he’s really like. Go watch his interviews on YouTube. He may not make the best impression (nerds rarely do; even Carmack is criticized for being robotic sometimes). He’s opinionated and he likes to say provocative things to throw interviewers off guard and keep things from being so monotone, but that doesn’t make him a person worthy of being scorned in every thread on HN. He’s far more capable and far more nuanced than people give him credit for. It’s just shitty watching this community eat the people who brought it up to what it is today (referring to hackers and crackers alike).


Why would I go read his blog posts? I’ve spent ~years talking to the guy on irc.

Why would reading HN takes influence my views on him over my personal experiences?


You’ve spent years talking to him and your conclusion about his work is that he’s not successful/well-known in the self-driving-car field?

Boy, he must have seriously pissed in your Cheerios. Which of your suggestions did he shoot down?


No doubt he’s well known.

Successful? Comma.ai has raised a rather small amount of money and hasn’t delivered anything except a toy for a couple of nerds.

I really can’t imagine by which metric you’d call his work in self-driving “a success”. It certainly hasn’t been a commercial success.


> Successful? Comma.ai has raised a rather small amount of money and hasn’t delivered anything except a toy for a couple of nerds.

I rest my case. Go troll elsewhere.


What has comma.ai delivered? Who are their customers?

You're asking for this forum to be based on appeals to authority as opposed to voting, which usually doesn't go down well, and I also disagree that it would be better - stuff with merit usually gets voted up, just give it time.


Exactly. I’m not referring to what’s upvoted the most (don’t care). I’m directly referring to the tone of the overall thread, the volume of comments en masse.

Voting one way or the other doesn't change this.

Of course, I’m not surprised that HN is bashing Hotz - any amount of tangible success will bring about negative rhetoric from people who don’t like seeing others succeed. It’s just that HN keeps having its own slashdot iPod moment on a weekly basis, and it’s awful to see it in a tech community (I hope it’s still a tech community). Really tone deaf and petulant takes by people who should think twice before hitting the reply button.


Right but it's hard to take this as genuine concern for the quality of the forum when you're mostly posting troll things with a troll username. You want to improve the tone, post things that improve the tone.

The real Ligma would never have stood for this.

Just earlier today, I walked out of comma in solidarity with Hotz. They didn’t even have to tell me I was being laid off. You’ll read it about it in the WaPo soon.

I’m asking you to show your work.

There are lots of guidelines about that, as well as about novelty-name accounts and about posting bombastic meta.

What's going to happen with the comma bodies? When I saw the YouTube video of him explaining how the next challenge is to solve AGI, and that for that you need on-device learning, it really made sense to me. Is he stepping off that path completely?

From the article, it seems he's focusing on the raw performance side of machine learning, à la tinygrad[0].

[0]: https://github.com/geohot/tinygrad
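
For anyone who hasn't looked at it: tinygrad is a tiny autograd/tensor library. Something like the example below is roughly what its README showed around this time; the API may well have changed since, so treat this as a sketch rather than gospel.

  from tinygrad.tensor import Tensor

  # Build a small graph and backpropagate through it.
  x = Tensor.eye(3, requires_grad=True)
  y = Tensor([[2.0, 0, -2.0]], requires_grad=True)
  z = y.matmul(x).sum()
  z.backward()

  print(x.grad)  # dz/dx
  print(y.grad)  # dz/dy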


Still working on this! Unlike self driving, which is conceptually solved, things like cooking and cleaning aren’t. There’s a very cool challenge remaining here.

George references relistening to his Lex Fridman interview...the most recent Fridman interview is with Andrej Karpathy, head of AI at Tesla up until a couple months ago. I know that in Andrej's interview he talks about how he felt the Tesla team had become autonomous and didn't rely on him anymore, maybe George resonated with that.

> I know that in Andrej's interview he talks about how he felt the Tesla team had become autonomous and didn't rely on him anymore, maybe George resonated with that.

Ah, yes. Andrej left because the Tesla team has everything to finish FSD this year.

It has nothing to do with Elon’s obsession with the Boston Dynamics robot, nor the failed progress on FSD.


I got that Andrej's point was that he didn't really enjoy Management and was much more interested in building things directly.

Although I haven't done such great things as Geohot, I connect with the spirit he is describing. In my mind I have already solved the technical problems my firm will face over the next 3 years, and I no longer have the patience to see them implemented. I am languishing in my role as a result.

Can I invest?

geohot was in many ways a portal for me—watching him code changed my life. Got me in the habit of always looking for ways to move faster.

I think he puts a little too much pressure on himself. George Washington didn't write the Constitution and pass it himself; he was just one player in a team effort of many sovereign geohot-like figures.


Is this one of those ‘Kanye west is breaking up with Kanye west’ memes?

comma.ai has too much regulatory and third party (car mfg.) risk.

It's also too expensive when the most compelling features are upgrades to 2015-2018-level technology. I could instead do without and then upgrade to a new car with stop-and-go cruise control, lane minding, etc.


As a person who has comma in both my cars, I disagree with this sentiment.

First, I have a 2022 car and a 2020 car, and in both the stock cruise control is decent and the lane minding is terrible, and neither can do stop-and-go. So already Comma's 2022 technology is far, far beyond my automobile manufacturers' 2022 technology (mid level cars - Honda and Toyota). Second, in my 2022 car and 2020 car, the cruise control and lane minding technology is going to be exactly the same 5 years from now as it is now, whereas with Comma I get multiple software updates a year and I could upgrade the hardware as well in the future (if they come out with new models) though that would have an additional cost. Either way, I won't need to buy an entirely new car to upgrade the self-driving capabilities.

Now, arguably, if the car manufacturers got to the point where their default cruise control and lane minding were as good as Comma is now, maybe that would undercut the need for people to buy Comma at all, since the capabilities of 2022 Comma are really nice to have. But I don't think so because (a) it's a hard problem and I'm not convinced car manufacturers can do it and (b) You'd assume Comma will also be much better by then [stopping at red lights, stop signs, etc].


If expense is the only drawback, there are knockoffs you can find on AliExpress. Browse /r/Comma_AI for more info.

The major drawback I didn't even bother mentioning is the "faff" factor.

Dang Hotz really is the savior of humanity. He's invented so many things I use on a day to day basis like... I'll get back to you on that. Truly a modern day John von Neumann.

Seems like he came to the same realization as Karpathy. Any problem of the scale of self-driving requires massive organization and work; it's not a short-term project. There's a ton of managing and planning involved, and if what drives you is the engineering then it's simply not for you.

I'd argue that any ML project of that scale is a monumental task, and without good organization and resources, people are bound to burn out.

Unfortunately with ML, there's only so much one can do with just code. The many loops of hypothesis testing that involve endless training hours are just exhausting, since the feedback is so far away and the work involves constant context switching.


Well... that is REALLY sad.

I'm sorry but I find it incredibly absurd the number of people walking away from autonomous driving basically saying "The hard work is done, the problem is solved, it's time for me to move on to something else".

Bullshit. In the last 5 years we've gone from optimists saying "Hey these new ML models are really smart we might be able to solve self-driving with this", to "We've totally solved self-driving, sure you can't, you know, let it drive, and in fact we've had to install cameras that monitor you to make sure you don't try and let it drive, but we've totally solved it trust me". Yeah sure buddy, you solved it, good job.

It turns out the guys making grandiose claims five years ago, are still willing to make the grandiose claims today, and it doesn't matter that over the last 5 years their credibility slowly shrank to zero.

Self-driving is more than throwing an ML model in a car.


I don't think that's what Hotz is saying at all; he's saying that he's not the person for the job anymore.

He writes: "My two part blog post from 2019 remains a true description of the space."

And this is a quote from (part two of) that blog.[0]

> Nobody will have cars without a safety driver in cities anytime soon. Self driving appears to be like fusion, but instead of being perpetually 50 years away, it’s perpetually 5 years away. It’s 5 years away because that’s what the marketing departments are told, but the truth is finally leaking out. It’s going to be decades before robotaxis are profitable, and these companies are burning way too hot to last decades.

[0] https://blog.comma.ai/a-100x-investment-part-2/


> I'm sorry but I find it incredibly absurd the number of people walking away from autonomous driving basically saying "The hard work is done, the problem is solved, it's time for me to move on to something else".

In Hotz's case he's selling a product with a built-in ceiling: most cars don't have strong enough auto-steering to do anything beyond staying within a lane at highway speeds. So now that they've iterated their product a few times, it's becoming a game of reducing costs and making deals with other companies. Yawn.


I have this weird feeling I kicked it off with a reply tweet[1] where I said

  Comma AI is going to outlast almost everyone

  "The company, unlike its competitors, has paying customers for its products and is generating profit from its openpilot business model."

  Profit today: That's INFINITE runway ;-)
I suspect that the implication of a Sisyphean task might have pushed him over the brink. 8(

But, probably not.

[1] https://twitter.com/mikewarot/status/1585406659467579395


> And even if you are an atheist, you probably still accept the bible is the closest thing we have to a human origin story.

wtf! what did he smoke?


Yeah, that was weird. A bit disappointing.

I wish him the best, I find his streams incredibly useful and they motivate me to be better at solving problems and coding. Hopefully he still does work on AGI and makes tinygrad even cooler.

I’m sorry, this post just sounds like the scribblings of an insane man by the end of it. George is obviously very intelligent and I like his other posts, but I think like many successful people he’s become lifted into a bubble of rarefied air where the stuff he’s saying becomes so abstract and meaningless that I’m convinced he’s just high.

Sounds like this was foreshadowed in his SXSW talk: https://www.youtube.com/watch?v=ESXOAJRdcwQ&t=53s&ab_channel...

Great watch, would recommend.



Can't wait to see what you do next!

I wonder which fields he'll go into next. I use him as my compass for what fields I should start learning next. I sometimes put his streams on in the background to listen to; whatever he works on seems to be the next field in tech to boom.

You always wonder what Geohot will do next.
