Hacker News

>Even if the self-driving system is relatively good, is the human+self-driving system really better?

Say, for example, the designers never trained the car's system to recognize someone on a skateboard as an obstacle.

Obviously a lidar-based system would spot the skater, but a camera+AI system depends on being trained to identify each type of obstacle as an obstacle and avoid it.




A person on a skateboard is still a person; wouldn't any AI trained to spot people recognise that (at the same level it spots people in general)?

Do you really need to train your AI to spot people on skateboards vs rollerskates vs rollerblades vs heelies vs sliding on ice vs traveling on a travelator vs standing on a moving vehicle vs ...?

OK, being able to recognise the differences and act accordingly might be useful but the principle of "person getting closer to vehicle" should hold sway for most situations?!?


> "person getting closer to vehicle" should hold sway for most situations?!?

I think the big thing is that not all of these 'persons' move in the same way. A person in the exact same location could both be a 'hazard' and 'not a hazard' based solely on their velocity, and their ability to change direction quickly (and are they chasing after something which isn't a hazard, but which is intercepting your driving path). They don't just have to identify the person, they have to be capable of forecasting that person's movement.

IOW, someone walking and someone on a skateboard are different risks, because the skateboard could move into your path of travel before you pass them, whereas someone walking would not.
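The point can be sketched with a toy reachability check (illustrative numbers only, not from any real system): the same person in the same spot is or isn't a hazard depending on how fast their mode of travel lets them reach the car's path.

```python
# Hypothetical sketch: can this road user reach our path before we pass them?
# The speed bounds below are made-up illustrative values.
def can_intercept(lateral_gap_m, horizon_s, max_speed_ms):
    """True if the person could cover the lateral gap within the horizon."""
    return max_speed_ms * horizon_s >= lateral_gap_m

gap, horizon = 3.0, 1.0  # person 3 m off our path, we pass in 1 s
print(can_intercept(gap, horizon, max_speed_ms=1.5))  # walking pace: False
print(can_intercept(gap, horizon, max_speed_ms=8.0))  # skateboard pace: True
```

Same position, same distance; only the assumed top speed of the travel mode flips the answer.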


Yes, but the skateboard is irrelevant; it's the predicted trajectory that matters. I can see how it's useful to analyse the mode of movement to try to improve predictions, but the human moving towards you at X speed should be the principal analysis, it seems to me, naively; rather than worrying whether they're on a skateboard or a snakeboard, etc.

The skateboard is relevant because it could theoretically make a neural network fail to identify a person who needs to be avoided.

>human moving towards you at X speed

Lidar systems will work this way. A camera+AI system won't necessarily; it still needs a way to sense relative speed, otherwise you are banking on the AI identifying the obstacle in an image.
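To illustrate why direct range measurement sidesteps classification: with two lidar-style range samples, closing speed and time-to-collision fall straight out of the geometry, no object recognition required. This is a hypothetical helper, not any vendor's API.

```python
# Sketch: time-to-collision from two raw range readings (e.g. lidar returns),
# computed without ever knowing *what* the obstacle is.
def time_to_collision(r0, r1, dt):
    """r0, r1: range to obstacle (m) at two samples dt seconds apart."""
    closing_speed = (r0 - r1) / dt  # positive when the gap is shrinking
    if closing_speed <= 0:
        return None  # not approaching
    return r1 / closing_speed

print(time_to_collision(20.0, 19.0, 0.1))  # 10 m/s closing -> 1.9 s
```

A camera-only pipeline has to recognise the obstacle in the image first; if the classifier never fires, there is no range pair to feed into a calculation like this.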


Uber's system famously couldn't decide if someone pushing a bicycle was a pedestrian or a cyclist. It flip-flopped for a while before resolving the dichotomy by running her over.

I'm not familiar with the instance you're referring to, but at face value I don't understand the problem. It doesn't matter if it's a pedestrian or a cyclist—in both cases, the car should avoid crashing into them.

The AI failed to classify what it saw as an obstacle to be avoided because it hadn't been trained for that particular instance.

More precisely, it wasn't programmed to recognize confusing objects as a hazard.

It shouldn't matter. But the system was designed to track trajectories of recognised objects. So every time it changed its mind about what it was looking at it started recalculating the trajectory. By the time it settled down it was too late to avoid the fatal accident.
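That failure mode can be caricatured in a few lines. This is a toy sketch of the behaviour described above, not Uber's actual software: a tracker that discards its motion history whenever the classification changes never accumulates enough samples to estimate a trajectory.

```python
# Toy sketch (illustrative, not real AV code): a tracker that restarts its
# trajectory estimate every time the object's classification flips.
class NaiveTracker:
    def __init__(self):
        self.label = None
        self.history = []  # (time, position) samples for the current label

    def update(self, time, position, label):
        if label != self.label:
            # Reclassification: throw away prior observations and start over.
            self.label = label
            self.history = []
        self.history.append((time, position))

    def velocity(self):
        # Needs at least two samples under the *same* label.
        if len(self.history) < 2:
            return None
        (t0, p0), (t1, p1) = self.history[0], self.history[-1]
        return (p1 - p0) / (t1 - t0)

tracker = NaiveTracker()
tracker.update(0.0, 50.0, "vehicle")
tracker.update(0.1, 49.0, "bicycle")     # flip-flop: history wiped
tracker.update(0.2, 48.0, "pedestrian")  # wiped again
print(tracker.velocity())  # None: three sightings, still no trajectory
```

Each flip-flop costs the tracker its entire observation history, which is exactly why "it kept changing its mind" translated into "it never settled on a trajectory in time."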

Don't assume that self-driving software is designed with even a minimal level of competence.

Footnote: there have been many articles about this incident posted to HN. Here's one: https://news.ycombinator.com/item?id=19333239

