
> I've seen a lot of comments on here like "well LLMs are good at writing snippets, but could never write or maintain large codebases to accomplish a larger goal" - but are you SURE about that 'never', given the current generation is already doing things people suspected they could never do? "But they'll never be able to manage / interpret stakeholders" - are you SURE that LLMs will have to adapt to fit stakeholders, and not the other way round?

That sounds a lot like the self-driving cheerleaders of five or ten years ago. The work since then has produced some awesome features like adaptive cruise control and parking assist, but it has fallen far short of what the hype promised to deliver by now.

Five or ten years later, Mercedes is the only company getting ready to ship Level 3 self-driving. Levels 4 and 5 are still a pipe dream, practically restricted to a few companies like Waymo in a few controlled environments like Phoenix and San Francisco.

GPT-4 is great, and I can't wait to see what 32K or even 100K/1M token models can do, but I fear we're about to hit the point where progress grinds to a halt, because going further requires something closer to AGI than what we have now.




The same argument can be used for Tesla full self driving: basically it has to be (nearly) perfect, and after years of development, it's not there yet. What's different about LLMs?

I agree this appears to be Musk's opinion of LLMs in particular.

However, as Musk has already got AI in his cars and was interested in the topic well before LLMs (founding investor in OpenAI when they were doing reinforcement learning), I'd be extremely disappointed if he had forgotten all of that in the current LLM-gold-rush.

(That's not a "no"; he has disappointed before.)


> because we have the combination of mathematics, underlying technology, and economic incentive to improve the models

Still, we'll have to see how much the new models improve. Tesla also has all those incentives to get to actual full self-driving... yet they seem to have reached a plateau years ago.


I have [said this before](https://news.ycombinator.com/item?id=19446043); Tesla is not on track to deliver Level 5 with their current hardware and apparent software strategy. Piloting a car is a very hard problem, but I would say that Tesla is still struggling with the easy parts of it: they should have solved the "never hit any obstacle" problem to superhuman levels, and I think that's pretty comfortably solvable with present technology and algorithms. Instead, a Tesla feels about as good in that respect as a careful student driver.

The harder parts of machine-piloting involve interpreting the world (lane-finding in the absence of striping; distinguishing hard obstacles from ephemeral objects like paper, leaves, weather, reflections, and so on); and participating in a social environment (what driving postures indicate what intents? What do the local humans consider rude or polite? What are they expecting?)

(A truly advanced autopilot would seem almost magical to a human: it would be able to back-solve diffuse and specular reflections in the environment, and accurately deduce the presence of oncoming cars around corners that would be totally invisible to a human driver. It could predict accidents several seconds before they begin to play out, and the sudden braking/maneuvering in the absence of any visible threat would seem strange and confusing to the humans inside.)

Tesla is nowhere near cracking these things; they're still on the ground floor. I still predict that current pre-buyers of so-called "full self-driving" are not going to get what they were promised, and going back on that will probably be bad both for Tesla's reputation and their wallet. If I had to guess, I'd say that if Tesla delivers on Level 5, it will be with a hardware platform that doesn't exist yet, and a software platform that is close to "complete re-write".

As it stands, their misleading marketing on self-driving capability is both dangerous and borderline fraudulent.

All that being said, I'm long on self-driving as a technology; just short on Tesla's current approach to it. It's a shame because I suspect both Waymo and Cruise are doing significantly better technologically, but I trust in their ability to execute a product significantly less. I'm not sure who will actually crack it.


> They'll be making huge software-style margins on FSD in 2024 and beyond.

I think other companies might come close. They might not have Tesla-style "FSD", but something close (highway only, or low speeds only) might be enough to make Tesla just a "car company".

Mercedes has already begun the approval process for L3 self-driving in the US, where they would assume liability. A lot of other cars have a subset of "Autopilot + FSD" features: adaptive cruise control, auto-park, summon, better lane assist. Sure, these are not the same as Tesla FSD, but in a few years other manufacturers will come close enough.

One big factor in why Tesla is/seems ahead is because they are using regular people to "test" their "FSD", when other companies are being more cautious.

Tesla could showcase their "FSD" by having "full self-driving" teslas in certain cities, and applying for licenses, but they haven't.


Teslas (while much maligned, and deservedly so) can already drive pretty well on most well-painted and high-traffic roads. Granted, it is very conservative, and I often take control just to avoid other drivers honking at me for taking too long to turn.

But this article mentions L4 autonomy. L4 still allows for geofencing and human control. Tesla is already pretty much L3, borderline L4. I think it is realistic to say L5 won't exist until radical changes to our infrastructure are made, but to say L4 won't exist in our lifetimes... I mean, I suppose if you're 80 years old already.

I also think the article draws too radical a conclusion, and the conclusion is the author's. The people from Ford explicitly said they think the technology will be developed independently and Ford will be able to buy it instead of having to make it themselves, which is very different from "it won't happen in our lifetime." It seems they decided to stop research not because it is impossible, but because others can do it cheaper.


I worked in Automotive for 7 years until late last year, though never directly on Autonomous Driving.

It is my belief (opinions are solely my own) that level 4 driving is 10 years away at best, and true level 5 will never ever happen with any manufacturer. I think Tesla has done well in the space of demonstrating self-driving, but Musk’s continued promises of full self-driving have gone from eyebrow-raising to eye-rolling.

I just sincerely doubt the magical future we’re hoping for will arrive.


> "Of course we're committed to automated driving," the exec told him. "The numbers don't pencil out any other way."

If I were doing this strategizing at Lyft: "technologies like these models advance at tremendous pace; invest just enough so we are not severely behind the competition if something transformative comes up." In X years you will be able to build your own self-driving stack from open-source components; the question is whether X is 5 or 25 years.


I've been seeing comments like this for 4 years or more. Mercedes and Waymo actually have full self-driving on the roads today, with regulatory approval and a proven track record. Tesla has a nothingburger that disengages if you take your hands off the wheel.

>I think it's way further off than that, like, 10+, and almost certainly not without extra hardware being added to Teslas

Doubt like this is a pretty common view outside of Tesla, and we've heard it repeated many times here. They clearly disagree.

>Full self-driving is 90% edge cases. Adaptive cruise control is not a few incremental steps away from full self-driving

This they would completely agree with, as you can see from watching their recent autonomy day presentations. And they have an overwhelming lead in collecting edge case data, so... they're in good shape with that.


Folks understand L1 and L5. The levels in between are such a blur and mishmash of things that I don't think anything is accomplished by having them. I agree with you; from what I have seen, Tesla is far ahead and rapidly progressing, using the right approach of training NNs while having a human behind the wheel.

I believe that Tesla is ahead in the self-driving industry from an overall technology perspective. What they have failed to do, however, is tackle the problem of when and how much the user can actually trust their system. This leaves users struggling to build their own mental models of it ("Oh, it likes this road", "I don't trust it at high speeds", etc.). This is terrible UX.

Mercedes seems to be stepping up to the plate to actually take a stab at this problem. Inevitably the answers to these questions are not super impressive compared to Tesla's 'hey, it might work in any situation, you never know' approach. But I suspect users will really like it, even though the underlying tech is likely far behind Tesla's.


A lot of comments here seem to be extremely pessimistic about self-driving (not just Tesla's, but in general), claiming that it's "decades away".

As someone who has been following the advances in self-driving fairly closely, I think most people are underestimating the current state and rate of improvement of these systems.

For Tesla, it's currently "L2+". Here's an example[1] of how good or bad it currently is, in a fairly easy environment. Just a year ago it was an absolute mess; even though it still has a long way to go, the improvement rate is really good. In places like Manhattan it's still a mess, but I don't think FSD needs to handle Manhattan to be useful, because not everybody lives in Manhattan.

For Waymo, it's currently L4 (operating without a driver), though available only in very select locations. Here's an example[2].

Regarding vision vs lidar, anybody in the field knows that both approaches are viable for self driving[3], it's just a matter of which can get you there sooner (and LIDAR is basically agreed upon to be an easier but more expensive approach).

I used to think Tesla FSD was doomed to fail, and that computer vision good enough for self-driving was more than a decade away, but I've changed my mind since. I now think vision-based is viable, and really important, because it will enable self-driving for mainstream cars (not just expensive robotaxis).

From what I've seen, Tesla's FSD perception is quite good already, and the majority of the time FSD messes up, it's because of its planner, not its perception. And that's considering its cameras and computers are quite old already; when they decide to upgrade them, it will lead to a pretty large improvement to perception.

My current estimate is that Tesla will achieve L3 in 2 years, and L4 in 4 years. It will require a HW upgrade (cameras and computer).

[1] https://www.youtube.com/watch?v=tRqW9LJZaWY

[2] https://www.youtube.com/watch?v=L6mmjqJeDw0

[3] https://www.youtube.com/watch?v=gbyY2AQ_hdc

Edit: Instead of just downvoting, I'd appreciate if you left a comment pointing out what you disagree with. I'm very interested in this technology and other people's thoughts.


>If self-driving is fully developed, people would buy fewer cars, and that's bad for Musk.

Maybe in the long run, but it looks like Tesla might have self-driving cars about 12-24 months before anyone else, and that's lucrative. I think a lot of the interest in Tesla cars (esp. those with the v2 autopilot hardware) comes from the full self-driving prospect. The economics of full self-driving could be awesome for Tesla: they can operate a taxi network in which they sell both the cars and the service.

And Tesla earns ~$8k-$10k from people who upgrade their car to full self-driving - that's probably the highest-gross-margin option in the automotive industry, because all the new cars already have the hardware built in. There's one fixed cost: the software. If they build the software right, then upon regulatory approval they can cut the team down to a skeleton crew that just makes sure the system is self-improving.
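The margin logic here is simple enough to sketch: since the hardware already ships in every car, each upgrade is nearly pure revenue against a fixed software cost that amortizes away with volume. A back-of-the-envelope calculation (all figures hypothetical round numbers, not Tesla financials):

```python
def fsd_gross_margin(price_per_upgrade: float,
                     upgrades_sold: int,
                     fixed_software_cost: float,
                     marginal_cost_per_upgrade: float = 0.0) -> float:
    """Gross margin for a software upgrade whose hardware already ships
    in every car: revenue minus (fixed + marginal) costs, as a fraction
    of revenue."""
    revenue = price_per_upgrade * upgrades_sold
    costs = fixed_software_cost + marginal_cost_per_upgrade * upgrades_sold
    return (revenue - costs) / revenue

# e.g. $10k per upgrade, 500k buyers, $2B cumulative software spend:
print(f"{fsd_gross_margin(10_000, 500_000, 2_000_000_000):.0%}")  # prints "60%"

# Double the volume with the same fixed cost and margin climbs toward 100%:
print(f"{fsd_gross_margin(10_000, 1_000_000, 2_000_000_000):.0%}")  # prints "80%"
```

The key property is that the marginal cost per sale is roughly zero, which is why this looks like software economics rather than car-manufacturing economics.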


The optimism in that article is astounding, and that's coming from a dyed-in-the-wool optimist. I think this is a classic case of a layman underestimating just how hard the technical problem of building a 100% autonomous car is. Tesla has some autonomous features, but the car still requires a human driver. Getting that last 5% of the problem solved for all the little, rare, unpredictable things that can happen in the real world is a really hard problem.

I agree that shared self-driving cars are the future, and at some point sooner rather than later they will likely be electric self-driving cars. But which company is going to dominate that space is not easy to predict, and a company like Tesla is the underdog in a ring with heavyweights like Google and Apple.


>Krafcik went on to say that the auto industry might never produce a car capable of driving at any time of year, in any weather, under any conditions. “Autonomy will always have some constraints,” he added.

This is an out-of-context quote. Right before that, the linked CNET article said: "John Krafcik, head of the self-driving car unit of Google parent company Alphabet, said that though driverless cars are 'truly here,' they aren't ubiquitous yet."

There is a big jump between L4 & L5 autonomy which is what Krafcik was discussing. But even achieving L4 (which is what Waymo has in its Phoenix tests) at mass scale will revolutionize the industry since it means cheap robocabs for large portions of the US for much of the year.

The other companies are still struggling with L2 & L3, but Waymo's lead appears to be getting larger based on the public results they are showing. Tesla is the only one that's close since it has a public L2 system.


And Tesla will just ingest large amounts of data from their fleet and magically dump an L5 solution one day? That's believable?

Elon Musk has been promising imminent L5 self driving every year for the past 7 years; that requires more than incremental improvement. The ones actually doing incremental improvements are companies like Cruise and Waymo, making it work one geography at a time.


Big Auto Exec... "Hey we are years behind on self driving tech, what are we going to do about it? How can we catch up? We're going to end up having to license software from Tesla/Google?!"

Other Big Auto Exec... "Oh don't worry we'll just use our political influence to change the rules to dramatically simplify the problem."


This is pretty common in ML projects, and a big reason why there aren't many major companies whose core product is based in a complex ML algorithm that isn't fully baked by the academic community first.

In theory, if the approach to self-driving that Tesla is pursuing in any given year actually worked... then the release would be about two years away. In reality it hasn't been working well enough, and every year a new plan is drawn up to reach full autonomy in 2 years.

This is also coincidentally slightly longer than the average tenure for an engineer/scientist, and as such the champions of a given strategy/approach will have departed the company before someone observes the strategy not panning out.

