
> I don't have a source for the detailed cost breakdown, but I can imagine the majority of the cost is labor and digging up/filling the road since the sensor itself should be less than a thousand.

Those are installed with a massive handtruck saw while cutting off a lane of traffic all day. It takes a half dozen people working on and off, doing different jobs. It takes jackhammers, high-pressure water from a tanker truck, and people to place the sensor and wire it to power. It's way more complicated than anything you'd do at scale, because it's infrequent and the goal is to do it thoroughly with non-specialists.

If you were installing millions of tags you'd have a dry drill that could go off the back of a truck and place+fill a tag in ten minutes. If you had a line you'd have it hanging off the back of a truck, placing tags continuously, like a street cleaner or edge clearer. For tags there's no reason they'd need more than one person to place, and no reason to even put them in the road when they could just go on the edges. Triangulate with directional antennas or something.
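The "triangulate with directional antennas" idea is just intersecting two bearing lines from known antenna positions. A minimal sketch (hypothetical function names, 2D only, bearings measured from the +x axis):

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect two bearing rays from known antenna positions p1 and p2
    (tuples of metres); bearings in radians from the +x axis.
    Returns the (x, y) position fix of the tag."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve p1 + t1*d1 == p2 + t2*d2 for t1 via Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no unique fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Antenna at the origin sees the tag at 45 deg; antenna at (2, 0) sees
# it at 135 deg; the fix lands at (1, 1).
print(triangulate((0.0, 0.0), math.pi / 4, (2.0, 0.0), 3 * math.pi / 4))
```

In practice bearing noise would call for a least-squares fit over more than two antennas, but the geometry is this simple.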

That said, I think it's pretty obvious that locating the road is by far the easiest problem for self-driving. If you wanted to make a serious attempt you'd want every car to broadcast a short-range location, and to share data over a mesh network. "Knowing where the road is" to the precision RFID would give you has been solved for over a decade with GPS and digital maps.

Much more pressing issues are non-obvious sensing like hearing a car around a blind corner or knowing when to be cautious about moving. Knowing when something is coming onto the road or when a vehicle is having a problem. Inter-vehicle communication is just so obviously important to that... it's really frustrating how vaguely it gets talked about. I don't give a shit about teslas coordinating braking so they can form a tailgate train for efficiency, I want cars of all kinds to be warning each other about what they intend to do. I worry that legislation or at least a regulatory body will be the only way to even get people talking about it seriously.

Other than that, cameras watching for intrusion onto a road would be easier than solving it from vehicles. It sounds patently ridiculous to put cameras watching every 50' section of road, but 1080p+ cameras, simple detection, and mesh wifi can be built into a $30 package... and there are >2.5 million miles of paved roads in the US. $30 per 50' would cost, bare minimum (ignoring electricity requirements, labor, and the pole to put the cameras on), 8 billion dollars.
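The arithmetic behind that figure checks out; a quick back-of-the-envelope using the numbers in the comment ($30 per camera package, one per 50-foot section, ~2.5 million miles of paved road):

```python
# Back-of-the-envelope check of the roadside-camera cost claim.
FEET_PER_MILE = 5280
miles_of_road = 2_500_000   # paved US road miles (from the comment)
spacing_ft = 50             # one camera per 50-foot section
unit_cost = 30              # dollars per camera package

cameras = miles_of_road * FEET_PER_MILE // spacing_ft
total_cost = cameras * unit_cost

print(f"{cameras:,} cameras -> ${total_cost / 1e9:.1f} billion")
# -> 264,000,000 cameras -> $7.9 billion
```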




> Most people don't realize a single sensor can cost $20,000 - now imagine you have 10 of them.

True, but part of that cost is an economies-of-scale issue, and we are finally seeing lidars like the Velodyne come down in price. The 64-laser version used in the Urban Challenge cost over $75,000. Today, you can get a smaller version for $7k. Still a lot, but the price is coming down, and it's coming down fast. Even putting one of these on board would be a huge improvement in sensing capabilities over what they have.

Remember, people are paying $5500 for the autopilot alone and as much as $130k for the whole car. If anyone would be willing to pay a premium for some better sensors, it's Tesla's customers.


> This is especially true in heavy rains, snow and any driving condition that doesn't follow the nicest of layouts like the Bay Area.

Comma.ai also believes lidar is a bad idea.

Tesla has cars using its technology every day all over the world, and thus has 10,000x more insight into how conditions affect its hardware/software.

Waymo, meanwhile, has spent 10 years testing in Arizona and has basically never once had to contend with these problems.

And lidar does have its issues in certain conditions as well; it's not some magic technology.

> They seem to be going to Waymo, Uber, Apple and Nvidia instead.

Very hard to judge that. I would like to see the numbers on that. The main evidence to me is that SpaceX/Tesla always rank highest on 'Where you want to work' for engineering students.

Also, it seems to me that Tesla is focused on having a basic framework that non-expert AI engineers can use to improve the system. That is the only way you can even begin to manage the complexity of the global road network, something companies like Waymo have definitely not yet even begun to solve.

> 3D self-driving labs are all building on each other's work, as the cost of GPUs and LIDAR sensors has dropped by orders of magnitude over the last decade.

And Tesla makes its own hardware, whose cost has dropped even more than what everybody else uses.

> Cameras and sensors on the other hand remain more or less where they were 10 years ago, and Tesla is left alone to compete with everyone else.

Lidar by itself does nothing; you still need cameras and other sensors as well, so it is purely added cost. There is simply no denying that cheap, high-quality lidar that can be deployed on millions of cars doesn't exist yet.

I would not be against Tesla using lidar. My more important point is that I think HD mapping is a horrible idea, lidar or not. And as long as that is the way you approach the problem you will never have a general self driving solution.


> The problem that needs to be solved is vision.

"Solving vision" is a problem 10x harder than building a practical self-driving car with LIDAR. We might get it working with LIDAR in 2-5 years, but without LIDAR in 20-50 years.

Arguably Alphabet (Google + Waymo + Deepmind + co) is the largest group of AI and ML experts in the world. If someone is going to figure out general computer vision, then it's likely to be them - and if they are working on LIDAR first then there must be a good reason why.

I believe Tesla is promoting LIDAR-less systems just because it's expensive bulky hardware, and they can't possibly sell cars with it at a reasonable cost. Calling cameras and radars "full self-driving hardware" is a good marketing tactic, but without any guarantees that they can build the software for it in the next 20 years.


> My understanding of the Tesla approach is: In order to truly 'solve' self-driving (situations on the road that have never been seen before to drive safely - think unannounced construction, collision or road closures due to protests), you MUST solve 'vision' with a very, very complicated and well trained neural net (re: ridiculous intelligence in the form of a human brain as you state). In addition, the existing road infrastructure (re: signs) is all built around human vision - and so being able to identify and interpret all of that is a requirement.

Yeah this is dubious. You need to solve situational awareness. Vision is one way of doing this. Lidar is another, and lidar avoids many of the drawbacks of vision (having to do accurate world modeling based on cameras).

Tesla doesn't (and would be stupid to) feed camera data directly into a neural network. They feed multiple cameras into a complex system that involves both classical object positioning and neural networks to build a model of their surroundings. Then a downstream system consumes that model and makes decisions.

It's not a single end-to-end black box. Such an approach would be computationally infeasible, not to mention over-parameterized to all hell. No one does this, not Waymo and not Tesla.
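The modular architecture described above, per-camera detectors feeding a fused world model that a separate planner consumes, can be sketched like this. All names here are invented for illustration; this is not Tesla's or anyone's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    label: str        # e.g. "car", "pedestrian", "traffic_light"
    position: tuple   # estimated (x, y) in the vehicle frame, metres
    confidence: float

@dataclass
class WorldModel:
    objects: list = field(default_factory=list)

def detect(camera_frame):
    """Stand-in for a per-camera neural detector; in reality this is
    NN inference producing a list of Detections from pixels."""
    return camera_frame

def fuse(per_camera_detections):
    """Classical fusion step: merge detections from all cameras into
    one model of the surroundings (dedup/association omitted)."""
    model = WorldModel()
    for dets in per_camera_detections:
        model.objects.extend(dets)
    return model

def plan(model: WorldModel) -> str:
    """Downstream decision-maker consumes the model, not raw pixels."""
    if any(o.label == "pedestrian" and o.confidence > 0.5
           for o in model.objects):
        return "brake"
    return "proceed"

frames = [[Detection("pedestrian", (5.0, 0.0), 0.9)], []]
print(plan(fuse([detect(f) for f in frames])))  # brake
```

The point of the structure is exactly the one made above: the planner never sees camera data, only the fused model, so it is not one end-to-end network.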

While cameras are good at certain tasks (like detecting traffic lights), they are not good at all tasks, and using more specialized hardware for object detection and world modeling means that lidar based systems are strictly better. They have more information than camera based ones.

Tesla is betting on, somehow, making some breakthrough in computer vision that no one else can replicate, and further that lidar can't do what cameras can.

Your argument appears to be that since Tesla has more data, they'll achieve some eventual success, but the point is that they'd achieve more success faster with lidar, and everyone in CV seems to agree that we'd need pretty fundamental improvements in CV (and perhaps in cameras) before you get the same performance out of CV that you get out of lidar. That means that Tesla's betting on a less accurate world model being good enough. Maybe they're right, but so far we have some evidence to suggest that cameras alone have some pretty fatal shortcomings, and no evidence that Tesla has solved them.


> If you have access to LIDAR and RADAR, why not use them?

I think LIDAR is a very expensive part (~$10,000). Also, it is difficult to integrate with the car's design such that it does not make the car ugly. Tesla probably does not want to do a separate self-driving testing fleet, instead they want to use the existing Teslas on road to get training data. So they cannot use LIDAR in all cars sold to users.

I am sure if fully functional LIDAR units magically start selling for $100-$500 Tesla will start using them.


>Also, with vision, the best you can do is estimate distance, whereas with radar and LIDAR you are explicitly measuring it.

But is there any evidence you need to measure distances? We humans can navigate the world without walking into walls, so long as we're looking where we're going. For a machine to navigate the world it should be possible to do it via vision. And Teslas do have multiple cameras, so they can measure depth.
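Measuring depth from two cameras rests on the standard stereo relation Z = f·B/d (focal length in pixels, baseline in metres, disparity in pixels). A minimal sketch with illustrative numbers (not from any real car) also shows why stereo depth degrades with range:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo depth: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or unmatched")
    return focal_px * baseline_m / disparity_px

# A one-pixel matching error matters more as disparity shrinks,
# which is why stereo estimates get noisier at long range:
print(depth_from_disparity(1000, 0.3, 10))  # 30.0 m
print(depth_from_disparity(1000, 0.3, 9))   # ~33.3 m, one pixel off
```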

And radar is not a backup for cameras. The resolution of the data is terrible, and you cannot rely on it for any sort of driving except braking if it thinks there's an obstacle. Radar is susceptible to its own problems as well, which is why Teslas and other cars with radar can randomly go crazy thinking you're about to crash.


>solving the problem with Lidar isn't revolutionary per se

Solving the problem in any capacity would be revolutionary. Autonomous vehicles operating in a commercial capacity on public roads technically don't yet exist.

There's the old saying: "it's easier to optimize a working system than it is to get an optimized system working."

Outfits using Lidar (and using about an order of magnitude more compute than what's going into Tesla's HW2) are much closer to solving autonomy than Tesla, and in time these systems will get cheaper.


> you'll still be wasting your time if you need LIDAR

That's not certain. I'm paraphrasing Tesla's CEO here, but the suggestion is that LIDAR leads to optimizing for a local minimum: it can get you 90% of the way there, but there will still be a lot of situations it can't handle.

The problem that needs to be solved is vision. Humans only have vision, and they can drive with it. Radar and LIDAR are nice-to-haves, but until you solve vision, you haven't solved self driving.


>and the delta (difference made) was quite small

But why? Because LIDAR doesn't help much in general, or because the Tesla engineers aren't good at using the sensor data?

Same with the manufacturing.

Sounds to me like Tesla can't handle complexity. And if they can't handle the complexity of manufacturing, they surely can't handle the complexity of full autonomous driving.


> They are now so cheap you’re starting to see them in consumer vehicles from Volvo, Mercedes, Cadillac, Nio, XPeng, BYD, etc.

I'm googling, and not finding a reference here. As far as I can tell not one of these manufacturers (or anyone else) is shipping a consumer vehicle with a LIDAR sensor. You're just citing press releases about plans, I guess?

Look, the argument goes like this:

1. LIDAR gives somewhat better depth info, and slightly inferior vision.

2. Depth info from camera devices is proven sufficient. Again, Teslas don't hit stuff due to sensor failures, period (lots of nitpicking in this thread over that framing, here I'm being a bit more precise).

3. All the remaining hard problems are about recognition and decision-making, all of which gets sourced from vision data, not a point cloud.

The conclusion being that LIDAR isn't worth it. It's not giving significant advantages. If you were designing an autonomy system from scratch today, you wouldn't bet it on LIDAR (especially as there's no proven off the shelf device!).


> So the tesla system has WAY more information about the environment, granted distance has to be inferred, but it also had radar to help with that.

I'd argue the contrary. Intelligence is not primarily about the amount of data, but the amount _and_ quality of the data you receive. If I had a magic sensor giving me obstacles in segmented form, that would be a couple of KB, and it would beat any other sensor on the market.

Inferring distance from stereo images has its own failure modes, which are not as easy to account for as LiDAR's. LiDAR also gives you reflectivity, so you will be able to differentiate between a UPS truck and an ambulance.

> Is that a reflection of a police car with its light on or just a window reflection?

Fun thing: to my knowledge, reflections are a major unsolved problem for vision. It is easier with LiDAR, as you can rely on the distance measurement: a reflection will place an object somewhere outside any reasonable position (e.g. underground, behind a wall). Depending on the lidar, the glass might even register as a second (actually primary) return.

Yes, you need cameras (likely color) to be able to recognise any light-based signalling (traffic lights, ambulance/police lights...), so LiDAR is not a panacea. But having the lidar tell you that there is a window and that the police car is behind it is likely vastly more robust.

Also, the difficulty is that you have to see arbitrary objects on the road and possibly stop for them. As long as it is larger than maybe a couple of centimeters (or an inch), it will show up on the LiDAR; with stereo vision, you need a couple of pixels of texture to infer it.
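The plausibility check described above, discarding returns that a reflection has placed somewhere impossible, is cheap to express. An illustrative sketch (not any vendor's API; thresholds invented):

```python
def filter_implausible(points, ground_z=0.0, max_range=120.0):
    """points: iterable of (x, y, z) in metres, sensor at the origin.
    Drops returns below the ground plane (reflection artefacts) or
    beyond the sensor's plausible range."""
    kept = []
    for x, y, z in points:
        if z < ground_z - 0.2:
            continue  # "underground" object: almost certainly a reflection
        if (x * x + y * y + z * z) ** 0.5 > max_range:
            continue  # beyond plausible sensor range
        kept.append((x, y, z))
    return kept

pts = [(1.0, 0.0, 0.5),    # plausible: kept
       (2.0, 0.0, -3.0),   # 3 m underground: dropped
       (500.0, 0.0, 1.0)]  # 500 m away: dropped
print(filter_implausible(pts))  # [(1.0, 0.0, 0.5)]
```

A real pipeline would check against a mapped ground surface and known geometry rather than a flat plane, but the principle is the same: geometry gives you a hard test that pure vision lacks.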


The author forgot that high-quality/quantity data comes from good sensors. The cost of the LIDAR sensor (the one Google uses, the one that rotates on top of the car) is pretty high (around $50K AFAIK). This is one of the reasons Google is holding back, in the hope that after a few years the cost of such sensors will drop dramatically. A self-driving car using just CV wouldn't work (at least with the current state of the art).

> For myself, LIDAR is actually the technology looking like it isn't panning out. It's not getting cheaper or more reliable.

This is just plain wrong. Lidar is getting cheaper by the year. They are now so cheap you’re starting to see them in consumer vehicles from Volvo, Mercedes, Cadillac, Nio, XPeng, BYD, etc. Self-driving companies have also slashed lidar costs drastically. Waymo, for example, had a 90% cost reduction for their 5th-gen lidar.

This is the opposite of a technology “not panning out”. It’s looking like Tesla is the outlier here because they made a bad bet.

> LIDAR autonomy isn't getting better nearly as fast as vision solutions are iterating.

Lidar autonomy is the only one proven to work without a driver, so this is a very strange take. We’re seeing robotaxi companies starting to expand driverless operations to big cities. Vision solutions are the only ones that still do not have a single driverless mile.

It seems like all your arguments fit into what you’re accusing others of — that you’re the one using lidar as a proxy for “not Tesla” and getting basic facts wrong.


> For bright reflective objects like traffic cones or road signs, the system is able to provide information in under a minute.

A minute..

Even if they could bring that down to 1ms, the main point still stands that lidar can't read signs. And you'll need the tech in the car to read that stop sign at high speed. Also, the position of the stop sign relative to the streets is super important, so positioning software is also 100% required.

So the main point still stands, if you're going to need that tech, why do you need the Lidar, since you'll need to stop the car the moment the cams fail to work.

As a side note: I also wonder whether it would still work if every car at a crossing had this tech (in fog). How would the cars know that the laser light was coming from their own car, and not from any of the other cars? You'd have to morse-code an ID in there or something, perhaps.
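The "morse-code an ID" idea floated above amounts to pulse coding: each car stamps its outgoing pulses with a pseudorandom bit sequence and only accepts returns that (nearly) match its own code. An entirely hypothetical sketch:

```python
import random

def make_code(seed, length=32):
    """Per-vehicle pseudorandom pulse code (seed stands in for a
    vehicle-unique ID)."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(length)]

def is_own_return(received_bits, own_code, max_errors=2):
    """Accept a return only if its code is within a small Hamming
    distance of our own, tolerating a little channel noise."""
    errors = sum(a != b for a, b in zip(received_bits, own_code))
    return errors <= max_errors

own = make_code(seed=42)
echo = own[:]                    # a clean echo of our own pulses
print(is_own_return(echo, own))  # True
```

This is the same trick CDMA radios and some automotive radars use to reject interference; with 32-bit codes, two independent random codes are overwhelmingly likely to differ in far more than 2 bits.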

Second side note: It's a tech discussion, it sometimes seems people bring personal views about Elon Musk and Tesla into these discussions. I think this clouds the judgement.


> Say you're driving down the road and there's a bit of construction, there's a guy holding SLOW/STOP sign directing traffic. LIDAR will tell you it's a hexagonal sign, but it can't tell you what it says, you need a camera to read the sign and tell you what it says.

But Waymo never said you don't need cameras. Hell, they have 29 cameras in each vehicle compared to Tesla's 8.

Your point about their approaches being more alike than different is somewhat true, but you wrongly attribute the LiDAR vs camera debate to Waymo marketing. It's Elon and Tesla fans who started it and incessantly repeat it even to this day. Most rational folks say use whatever you can to get it working (which Waymo did) and optimize later.


> I think you're also missing why Tesla have opted to invest in a percept suite that doesn't use LIDAR. That reason is cost.

Even beyond pure cost, one problem Tesla has is that they already SOLD tens of thousands of FSD option packages, predicated on NOT needing to retrofit the cars with LIDAR.

So arguably, Tesla might be worse off if they deliver a LIDAR based FSD (and need to retrofit tens of thousands of cars, or pay off the owners), than if they just plod on with a camera based FSD that never quite works safely.


> LiDAR doesn’t tell you that it’s a bag on the road vs a raccoon

I think that's largely an issue with the early LIDAR devices today, but not necessarily what may be to come.

There's something to be said about measuring actual data with solid physics vs. inferring distances with billions of operations on RGB data. If you were landing a commercial aircraft in fog, you most certainly don't rely on your eyes to do most of it, but it is in fact possible to do safely precisely because we do have good sensors on them.

I fully agree with leveraging the scale and maturity of RGB sensors for cars today, the talk is spot on about that, but that's (a) circling back to the fact that Tesla needs to sell cars now not next year and (b) not a good case against use of LIDAR in the future.


What Musk wants or will do is largely irrelevant to the eventual success of self-driving vehicles. Pardon the use of his pet acronym FSD, but what will make the technology (not Tesla) successful is a built environment that a car can interact with more reliably, as well as a car that can interrogate the built environment more effectively.

Both are important. A long-range RFID tag in every road sign means I don't need a camera to read it, and it can't fail in snow. Magnetic nails[1] in the highway that can aid with high-speed lane positioning absolutely beat the camera on my Volvo that struggles to see white lines when driving into the sun.

[1] https://trid.trb.org/view/574206


I saw the title and google link and thought it would be someone from Google talking about the LIDAR system they use in their self-driving cars. I'd actually just read an article that said they cost $70,000 each so I was thinking "Maybe they have a way to drastically reduce the price or are going to say why they can't".
