
I don't know where you got these papers but you clearly got an awful sampling and I don't think you're giving computer vision a fair evaluation.

Granted, we're not quite ready for self driving, but there's no question that the neural network subfield of ML has absolutely exploded in the last 5-10 years and is bursting with productionizable, application-ready architectures that are already solving real-world problems.




You sound like an NVIDIA salesperson trying to sell me on a $3000 Titan ;)

There is no doubt in my mind that an AI with billions of parameters will be excellent at memorizing stuff.

I also have no doubt that research activity has exploded, which might be related to the generous hardware grants being handed out...

But all that research has produced surprisingly little progress over algorithms from 2004 in the field of optical flow.

The papers I looked at were the top-ranked optical flow algorithms on Sintel and KITTI. So those were the AIs that work best in practice, better than 99% of the other AI solutions.
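
For context, the kind of 2004-era baseline I mean is something you can run in a few lines with OpenCV - classical dense flow a la Farneback (2003). This is purely an illustrative sketch, not code from any of those papers; the frame paths are placeholders:

    import cv2
    import numpy as np

    prev = cv2.cvtColor(cv2.imread("frame_0001.png"), cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(cv2.imread("frame_0002.png"), cv2.COLOR_BGR2GRAY)

    # args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Sintel/KITTI rank methods by average endpoint error against ground truth;
    # with a gt_flow array loaded from the benchmark it would be:
    #   epe = np.linalg.norm(flow - gt_flow, axis=2).mean()
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean displacement (px):", mag.mean())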


While it's not my area of expertise, I am a bit wary of contest results. It seems like an exercise in overfitting via multiple comparisons? Maybe some algorithms with a slightly lower rank are actually more robust?

If it's as bad as you say, it seems like a critical evaluation would be pretty interesting and advance the field.


I wonder how many solutions within the AI field could just be categorized as "Automation"

All of them. That's how AI works. Not by making smarter machines, but by destroying intelligence by smashing it into machine-digestible bits.

That's what caused the first "AI Winter": Rules-based "AI" engines became what we call "Business Rules". AI didn't go away - it just stopped being an over-valued set of if-then logic and slots (value-driven triggers) with a cool set of GUI tools to build rule networks.

Source: Used to work for an 80s-era "AI Company"


>There is no doubt in my mind that an AI with billions of parameters will be excellent at memorizing stuff

We're way past memorization. We're into interpolation and extrapolation in high-dimensional spaces with Bayesian parameters. Sentiment analysis and contextual searches - search by idea, not keyword. Heuristic decision making. Massively accelerated 3D modeling with 99% accuracy. Generative nets for text, music, scientific models...

Sorry, but you're behind the times, and that's ok - one of us will be proven right in the next 1-5 years. Based solely on the work we're doing at the startup I'm working for, we're on the cusp of an ML revolution. Time will tell, but personally I'm pretty excited. And don't worry, I'm not working in adtech or anything useless.

That said, I agree the driving problem seems to be quite far from being solved, though it is outside my expertise; I think the primary issue is that this is an application where error must be unrealistically low, a constraint which does not apply to many other domains. You can get away with a couple percent of uncertainty when people's lives aren't on the line!


Would you be willing to link to some papers and cite some specific algos to play with? The comment above cited specific algos - what are the newer versions of those?

> Granted, we're not quite ready for self driving

And yet it's literally in cars on the road.

I'm not saying you're wrong because of that. I just wonder how far from "ready" we are, and how much of a gamble manufacturers are taking, and how much risk that presents for not just their customers, but everyone else their customers may drive near.


Based on Tesla's safety report [1] it's already less dangerous than letting humans drive (alone). The error rate of human drivers tends to be downplayed, while the perceived risks of automated driving are exaggerated, distorting the picture.

Yes, it's a hard problem, yes we are not nearly there and there is a lot of development/research to do. Yes, accidents will happen during the process. But humans suck at driving and kill themselves and other people daily. It's the least safe form of transportation we have.

[1] https://www.tesla.com/VehicleSafetyReport?redirect=no


The gross human fatal accident rate is ~7 accidents per billion miles in the US, including fatalities caused by incompetent or irresponsible drivers, and substantially lower in Europe. But humans drive a lot of miles.

Based on Tesla's safety report, 'more than 1 billion' miles have been driven using autopilot. Given the small data sample and the fatalities already attributed to autopilot, I think we're some way from proving it's safer than letting drivers drive alone, never mind close to being a driver substitute.
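
(To put a number on "small data sample" - a rough back-of-envelope of my own, assuming fatal crashes are roughly Poisson-distributed and taking the figures above at face value; it also ignores that Autopilot miles skew toward easy highway driving:)

    from scipy.stats import poisson

    human_fatal_rate_per_billion_miles = 7     # US figure quoted above
    autopilot_billion_miles = 1.0              # "more than 1 billion" per Tesla's report

    expected = human_fatal_rate_per_billion_miles * autopilot_billion_miles
    lo, hi = poisson.interval(0.95, expected)  # count range expected if Autopilot merely matched humans
    print(expected, lo, hi)                    # 7 expected, 95% range roughly 2-13

So seeing a handful of fatalities (or none) in a ~1 billion mile sample can't really distinguish "safer than human drivers" from "about the same".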


Marginal Revolution just highlighted an interesting detailed look at US car (driver) fatalities. https://marginalrevolution.com/marginalrevolution/2020/02/wh...

>> After accounting for freeways (18%) and intersections and junctions (20%), we’re still left with more than 60% of drivers killed in automotive accidents unaccounted for.

>> It turns out that drivers killed on rural roads with 2 lanes (i.e., one lane in each direction divided by a double yellow line) accounts for a staggering 38% of total mortality. This number would actually be higher, except to keep the three categories we have mutually exclusive, we backed out any intersection-related driver deaths on these roads and any killed on 2-lane rural roads that were classified as “freeway.”

>> In drivers killed on 2-lane rural roads, 50% involved a driver not wearing a seat belt. Close to 40% have alcohol in their system and nearly 90% of these drivers were over the legal limit of 0.08 g/dL.

I don't think people give enough attention to whether broad statistics actually apply to cases of interest. That's about 40% of all driver fatalities occurring on rural non-freeway roads, of which ~35% (~14% overall) involved a driver over the legal alcohol limit.

People compare various fatality rates associated with riding an airplane vs driving a car all the time, but I've never seen anyone point out that an incredibly simple mitigation you're probably already doing -- not driving on non-freeway rural roads -- lowers your risk of dying in a car accident by more than a third. And it gets even better if you're not driving drunk!
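
(A quick back-of-envelope, just reproducing the share arithmetic from the figures quoted above; as the reply below points out, a share of fatalities only translates into a personal risk reduction if you also know what share of miles is driven on those roads:)

    rural_nonfreeway_share = 0.40   # ~40% of driver fatalities (38% on 2-lane rural roads alone)
    drunk_given_rural      = 0.35   # ~40% with alcohol x ~90% of those over the 0.08 g/dL limit
    print(rural_nonfreeway_share * drunk_given_rural)   # ~0.14 -> the "~14% overall" above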

If you measure driving quality in terms of fatality rate, it is actually the case that almost everyone is better than average. A lot better than average. But public discussion completely misses this, because we prefer to aggregate unlike with unlike.


You’re committing a logical fallacy here. Avoiding driving on those roads is only a mitigation if the accident rate is highly disproportional to their usage.

If half of all driving occurs on highways and half doesn’t, and half of all accidents are on highways, then avoiding highways will have absolutely no effect on your accident rate.

It’s possible that driving on these roads leads to a disproportionate accident rate, but you haven’t actually said that.


True. I do think there's plenty of non-statistical reason to believe you can reduce your risk of death by not being among the 50% of drivers killed on those roads who weren't wearing a seat belt, or the ~35% who were over the drink-drive limit, though.

> You’re committing a logical fallacy here. Avoiding driving on those roads is only a mitigation if the accident rate is highly disproportional to their usage.

You're right in spirit. I actually addressed this in passing in the comment "an incredibly simple mitigation you're probably already doing". Rural roads carry less traffic than non-rural roads for the very obvious reason that most people don't live in rural areas. The disparity is documented: https://www.ncsl.org/research/transportation/traffic-safety-...

We can also note that freeway vehicle-miles (excluded from this rural roads statistic) are going to be an inflated share of driven miles precisely because the purpose of the freeway is to cover long distances.

But as to the specific number I provided ("more than a third"), you're on target in accusing me of a fallacy.


What a surprise that Tesla's report would say that!

Have they released all the data to be analyzed by independent people?

Also autopilot only runs in the best of conditions. Are they comparing apples to apples?


That report is comparing humans driving in all conditions vs autopilot driving in only the best conditions. Humans are deciding when it is safe enough to turn autopilot on. So no, it is not less dangerous.

That's not what the report is comparing at all. The report is comparing all vehicles driving in all conditions vs Teslas driving in all conditions (separate for with and without autopilot).

The numbers show that Teslas experience a lower crash rate than other vehicles. Granted, this can be due to a number of reasons, including the hypothesis that humans who decide to buy Teslas drive more carefully to begin with. And the numbers show that turning on autopilot further reduces crash rates.

This at least tells us that letting vehicles with these automated driving and safety features onto the road doesn't increase the risk for the driver and others, which was the original premise I responded to.


There are a million hidden variables here that could explain the difference:

- The mechanical state of the car (Teslas with autopilot tend to be new/newish vehicles, and thus in excellent mechanical shape)

- The age and competence of the driver - I'm guessing people who make enough to buy a Tesla are usually not senile 80-year-olds or irresponsible 18-year-olds

- Other safety gizmos in Teslas that cheaper cars may lack

Overall, it would be fairer to compare against cars of a similar age and at a similar price point.


And how much have Teslas driven in snowy fog in the mountains on autopilot?

I think the tricky part is that at some level you want to be comparing counterfactuals. That is, accident rates of Teslas on autopilot with a driver of Tesla-driver abilities, in road conditions where the accidents occur, and so forth.

It kinda seems self evident that a car that drives you into a wall randomly is less safe than one that doesn't.

I grant that Teslas might be safer than eg a drunk driver, and so we might be better off replacing all cars with Teslas in some sense, but we'd also be better off if we replaced drunk drivers with sober ones. But would safe, competent drivers be safer, and would that be ethical? At that point are you penalizing safe competent drivers?

Drunk drivers in Teslas are actually interesting for me to think about, because I suspect they'd inappropriately disengage autopilot at nontrivial rates. I'm not sure what that says but it seems significant to me in thinking about issues. To me it maybe suggests autopilot should be used as a feature of last resort, like "turn this on if you're unable to drive safely and comply with driving laws." But then shouldn't you just not be behind the wheel, and find someone who can?


Beware of the No True Scotsman fallacy. A human who drove into a wall could not possibly have been a Safe, Competent Driver, could they? A True Safe, Competent Driver would never drive into a wall.

Unless you're serious about bringing the bar way up for getting a driver's license, I think it's fair to compare self-driving technology with real humans, including the unsafe and incompetent. In most of the world, even those caught red-handed driving drunk are eventually allowed to drive again.


>Based on tesla's safety report [1] it's already less dangerous than letting humans drive (alone)

You mean the company that has staked its future on selling this technology claims the technology is better than any alternative?

This is aside from the fact that the NHTSA says the claim of "safest ever" is nonsense and that there is zero data in that PR blog post.


Is there any chance that Tesla is lying with statistics?

A fun example: someone was selling meat and said it was 50% rabbit and 50% horse, because he used 1 rabbit and 1 horse. The conclusion is that when you read statistics you want to find the actual data and check whether the statistics are used correctly; most of the time, as in this case, the people doing the statistics are manipulating you.

There was an article about a city in Norway with 0 traffic deaths in 2019. If I limited my statistics to that city only, and to that year only, I would get a figure of 0 people killed by human drivers.


I believe every car manufacturer has a disclaimer that the autopilot can only be used as an assist. That the driver needs to keep his eyes on the road, and be ready to intervene at any given time.

We're not at the self-driving level of kicking back the seat and watching Netflix on your phone yet.

I doubt we will ever get there; there will always be edge cases which are difficult for a computer to grasp: faded lane markings, some non-self-driving car doing something totally unexpected, extreme weather conditions limiting visibility for the cameras, etc.


> I believe every car manufacturer has a disclaimer that the autopilot can only be used as an assist. That the driver needs to keep his eyes on the road, and be ready to intervene at any given time.

-This is the scariest bit, IMHO. Basically, autopilot is well enough developed to mostly work under normal conditions; humans aren't very good at staying alert for extended periods of time just monitoring something which mostly minds its own business.

Result being that the 'assist' runs the show until it suddenly veers off the road or into a concrete barrier, bicyclist, whatever. 'Driver' then blames autopilot; autopilot supplier blames driver, stating autopilot is just an aid, not a proper autopilot.

This is the worst of both worlds. Driver aids should either be just that - aids, in that they ease the cognitive burden, but still require you to pay attention and intervene at all times - or you shouldn't be a driver anymore, but a passenger. Today's 'It mostly works, except occasionally when it doesn't' is terrifying.


This "driver aid" model itself is starting to sound like a problem to me. You either have safe, autonomous driving or you don't.

A model where a driver is assumed to disengage attention, etc., but is then expected to re-engage in a fraction of a second to respond to an anomalous event is, I think, fundamentally flawed at its core. It's like asking a human to drive and not drive at the same time. Most driving laws assume a driver should be alert and at the wheel; this is what...? Assuming you're not alert and at the wheel?

As you're pointing out, this leads to a convenient out legally for the manufacturer, who can just say "you weren't using it correctly."

I fail to see the point of autopilot at all if you're supposed to be able to correct it at any instant in real-world driving conditions.


It's only a problem if you believe in driverless cars; then it becomes a Hard Problem: "it works in situations where it's irrelevant" - but so does plain old not holding the wheel: look, it's self-driving!* (*in ideal conditions)

Reminds me of Bart Simpson engaging cruise control assuming it's something like an autopilot. Goes well for a little while, haha.

> I fail to see the point of autopilot at all if you're supposed to be able to correct it at any instant in real-world driving conditions.

-The cynic in me suggests we need autopilot as a testbed on the way to the holy grail of Level 5 autonomous vehicles.

The engineer in me fears that problem may be a tad too difficult to solve given existing infrastructure - that is, we'd probably need to retrofit all sorts of sensors and beacons and whatnot to roads in order to help the vehicles travelling on them.


Road sensors ain't gonna fix the long tail of L5. We can't even maintain roads as is - crash attenuators, for instance, which would have mitigated the fatality in the OP article.

Also, highway lane splits are very dangerous in general. It's a concrete spear with 70mph cars whizzing right towards it. Around here, they just use barrels of material, sand I believe. Somebody crashes into one, they clean the wreck, and lug out some more sand barrels. Easy and quick.


It isn't the SOLE action for L5 to be feasible, but I believe it is a REQUIRED action. (Emphasis added not to insinuate you'd need it, but rather to show, well, my emphasis. :))

For the foreseeable future, there are simply too many variables outside autopilot manufacturers' control; I cannot see how car-borne sensors alone will be able to provide the level of confidence needed to do L5 safely.

Oh, and a mix of self-driving cars and ones with bipedal, carbon-based drivers on the roads does not do anything to make it simpler, as those bipedal, carbon-based drivers tend to do unpredictable things every now and then. It'll probably be easier when (if) all cars are L5.


I see this stated often, that humans are unpredictable drivers. What's the proof that automated systems will be predictable? They too will be dealing with a huge number of variables, and trying to interpret things like intent etc.

Yes, automated systems will also do unpredictable things - the point I was (poorly, as it were) trying to make was that the mix of autopilots and humans is likely to create new problems. Without being able to dig it out now, I remember a study which found that humans had problems interacting with autonomous vehicles because the latter never fudged their way through traffic like a human would - say, approaching a traffic light, noting it turned yellow, then coming to a hard stop, whereas a human driver would likely just scoot through the intersection on yellow. Result: autonomous vehicles got rear-ended much more frequently than normal ones.

So - humans need to adapt to new behaviour from other vehicles on the road.

When ALL vehicles are L5, though, they (hopefully) will all obey the same rules and be able to communicate intent and negotiate who goes where when /prior/ to occupying the same space at the same time...


I think that unless a single form of AI is dictated for all vehicles, we can't safely make the assumption that autonomous vehicles will obey the same rules. Hell, we can't even get computers to obey the same rules now, either programmatically or at a physical level.

-That is a very valid point.

And, of course, they should all obey the same rules - traffic regulations being one part, but also how they handle the unexpected. It would be a tough sell for a manufacturer whose cars would rather damage the vehicle than other objects in the vicinity in the event of a pending collision, if other manufacturers didn't follow suit...

Autonomous Mad Max-style vehicles probably isn't a good thing. :/


Which is why most car companies long ago said they wanted to skip level 3 and go directly to level 4. With level 4, when the car can't drive it will stop and give the human plenty of time to take over.

The weird thing is that there seems to be a discrepancy between these publicized figures of millions of miles of autopilot on the roads, and the general feeling you get when you turn on the system yourself. I've used it on a Model 3 and it at least feels horribly insecure: the lines used to show detection of the curbs are far from stable and jitter around often. Maybe it's safer than it seems, but the feeling is I would absolutely not put my life in the hands of such a system... Just looking at all the YouTube videos of enthusiasts driving around the countryside with autopilot, it's like watching a game of Russian roulette: suddenly the car starts driving along the other side of the road or veers off toward a house. I would categorize it as a glorified lane-assist system, in its current state.

Even Tesla's marketing copy describes it that way, so I don't think you are too far off.

>Autopilot enables your car to steer, accelerate and brake automatically within its lane.

>Current Autopilot features require active driver supervision and do not make the vehicle autonomous.


> And yet it's literally in cars on the road.

It is not. There is no real self-driving on the road, at least not in conventional vehicles. Tesla's Autopilot is basically a collection of conventional assistive systems that work under specific circumstances. Granted, the circumstances where it works are much broader than the ones defined by the competition, but for a practical use case it's still very restricted. Self-driving systems can be affected by very minor changes in lighting, weather and other circumstances. While Tesla's stats on x million miles driven under Autopilot are impressive, they do not show the real capabilities of the self-driving system. For example, you can only enable the Autopilot under specific circumstances, such as while driving on an Autobahn in clear weather. In circumstances with limited visibility, for example, the Autopilot won't turn on or will hand over to the driver, simply because it would fail. Of course, this is for passenger safety, but these are situations real self-driving vehicles need to handle. Other leading projects like Waymo also test their vehicles under ideal circumstances with clear weather etc.

We'll most likely see fully self-driving vehicles in the future, but this future is probably not as close as Tesla PR makes us think.


> There is no real self-driving on the road

Emphasis on real. There is definitely something that most people would refer to as "self driving" in cars on the road.

I'm not saying what is there is specifically good at what it does - I'm saying someone put it into use regardless of how fit for purpose it is.

> but this future is probably not as close as Tesla PR makes us think

Unless you're suggesting the PR team decided to make shit like "Summon" available to the public, then it's not just "PR spin".


> Emphasis on real. There is definitely something that most people would refer to as "self-driving" in cars on the road.

Then you'd have to define what a self-driving car actually means. At least for me, self-driving means level 4 upwards. Everything below I'd consider assisted driving.

> Unless you're suggesting the PR team decided to make shit like "Summon" available to the public, then it's not just "PR spin".

As I said, this Smart Summon feature also only works under very specific circumstances with multiple restrictions (and from what I've seen on Twitter it received mixed feedback).

Just because the car manages to navigate a parking lot at 5 km/h relatively reliably, that doesn't mean that at the end of the year it'll be able to drive the car at 150 km/h on the Autobahn.

Edit: Fixed my formatting


> Then you'd have to define what a self-driving car actually means.

I said "for most people". For most people I know, a car that will change lanes, navigate freeways and even exit freeways is "self driving". It may be completely unreliable but even a toaster that always burns the bread is called a toaster: no one says "you need to define what a toaster means to you".

> At least for me, self-driving means level 4 upwards.

I have literally zero clue what the first three "levels" are or what "level 4" means, and I'd wager 99% of people who buy cars wouldn't either.

> that doesn't mean that at the end of the year it'll be able to drive the car with 150 km/h on the Autobahn.

30 seconds found me a video on youtube of some British guy using autopilot at 150kph on a German autobahn last June.

Again: I'm not suggesting that it is a reliable "self driving car". I'm suggesting that it is sold and perceived as a car that can drive itself, in spite of a laundry list of caveats and restrictions.


>It may be completely unreliable but even a toaster that always burns the bread is called a toaster

This argument is leaning toward the ridiculous.

I think only you and Elon Musk consider a "greater than zero chance of making it to your destination without intervention" to be self-driving.


And I think you're being ridiculously pedantic if you think that a list of caveats and asterisks in the fine print means that average Joe B. Motorist doesn't view the Autopilot/Summon/etc features as some degree of "self driving".

Musk has good reason -- he's been selling an expensive "full self driving" package for a couple years and in order to deliver he needs to redefine the term. He's already working hard on that.

Car engineers know what levels 1-5 are. Levels 1 and 2 are basic assists - cruise control and the like. Level 3 means the car drives but the driver monitors everything for issues the car can't detect. Levels 4 and 5 mean you can go to sleep: level 4 means there are some situations where the car will wake you up and, after you get a coffee (i.e. there is no need for instant takeover), you drive, while level 5 means the car will drive in anything.

BTW, I would very much like to see progress in optical flow because I could really use it for one of my projects.

If you know any good paper that tries a novel approach and doesn't just recycle the old SSIM+pyramid loss, please post the title or DOI :)
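
(For anyone unfamiliar, by "the old SSIM+pyramid loss" I mean the standard unsupervised recipe: warp the second frame back using the predicted flow, penalise photometric error plus a smoothness term, and repeat at each pyramid scale. A minimal single-scale sketch in PyTorch, with plain L1 instead of SSIM for brevity - names and weights are illustrative, not taken from any particular paper:)

    import torch
    import torch.nn.functional as F

    def warp(img, flow):
        """Backward-warp img (B,C,H,W) by flow (B,2,H,W), flow given in pixels."""
        _, _, h, w = img.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        base = torch.stack((xs, ys), dim=0).float().to(img.device)    # (2,H,W), x then y
        coords = base.unsqueeze(0) + flow                              # (B,2,H,W)
        gx = 2.0 * coords[:, 0] / (w - 1) - 1.0                        # normalise to [-1,1]
        gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
        grid = torch.stack((gx, gy), dim=-1)                           # (B,H,W,2)
        return F.grid_sample(img, grid, align_corners=True)

    def flow_loss(img1, img2, flow, smooth_weight=0.1):
        warped = warp(img2, flow)
        photometric = (img1 - warped).abs().mean()                     # L1 photometric term
        # first-order smoothness: penalise gradients of the flow field
        dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
        dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
        return photometric + smooth_weight * (dx + dy)

Most recent papers stack this over several pyramid levels and swap the L1 term for SSIM or a Charbonnier penalty, which is pretty much the recycling I'm grumbling about.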


There is a 40-50% drop in precision with state-of-the-art results if test images are ill-formed. The ImageNet dataset used is far from ready for real-world use cases. A bunch of IBM and MIT researchers are trying to fix this - https://objectnet.dev/

As in, "only works in great visibility on a perfectly spherical road"? That does seem an appropriate summary.

> I don't know where you got these papers but you clearly got an awful sampling and I don't think you're giving computer vision a fair evaluation.

I disagree. I've seen such an awful number of bugs in the ML code accompanying papers that I now take for granted that there is a bug somewhere, and just hope that it does not impact the concept being evaluated.

(Here, having everyone use Python, a language that will try its best to run whatever you throw at it, feels like a very bad idea.)


> Granted, we're not quite ready for self driving

If that were the case, self-driving cars wouldn't be on the road. I don't think we should aim for perfection; perfection will come. We should be looking for cars that make fewer errors on average than humans. Once you have that, you can start putting cars on the road and use data from the fleet to correct the remaining errors.


Humans have an average of 1 fatality per 100 million miles driven. No one is anywhere close to 100 million miles without intervention.

Are there any fully autonomous cars on public roads with no driver that can intervene? Seems like only maybe in tightly constrained situations are we ready.

I don't think I mentioned FULLY autonomous cars. My point was: something doesn't have to be perfect before we start using it, but I probably didn't express myself correctly.

I think that the necessity of intervening drivers at the moment indicates that we aren't at that point yet, even if that point is far from perfection, and also that the reason any self-driving cars are on the road at all is the fairly loose but significant requirements from regulation. We might be at that point in otherwise very dangerous situations, like if I were very tired or drunk, but otherwise I don't know that I'd have so much faith in software engineers to completely control my car.
