
Google has been implementing "fairness" models for probably going on a decade in their self-driving cars. The first example I can remember is them talking about intentionally nosing into a 4-way stop to signal to other drivers that the vehicle understood it was its turn to go. Not to say it's perfect, but the car is "thinking" along those terms.
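
To make that concrete, here's a toy sketch of what an intent-signaling "creep" policy at a four-way stop could look like. Everything in it (the state fields, thresholds, and action names) is my own guess for illustration, not Google's/Waymo's actual code:

    # Toy sketch of intent signaling at a four-way stop.
    # Purely illustrative -- thresholds, state names, and the "creep"
    # maneuver are assumptions, not anyone's production logic.

    from dataclasses import dataclass

    @dataclass
    class StopSignState:
        arrival_order: int      # 0 = we arrived at the intersection first
        others_stopped: bool    # all other vehicles at the junction are stationary
        seconds_waiting: float  # how long we've been fully stopped

    CREEP_SPEED_MPS = 0.5       # slow nose-in to signal intent (assumed value)
    PATIENCE_S = 2.0            # how long to creep before asserting our turn

    def choose_action(state: StopSignState) -> str:
        """Decide whether to hold, creep forward to signal intent, or go."""
        if state.arrival_order > 0:
            return "hold"       # not our turn yet
        if not state.others_stopped:
            return "hold"       # someone is still rolling through
        if state.seconds_waiting < PATIENCE_S:
            return "creep"      # nose in: "I know it's my turn"
        return "go"             # intersection has yielded to us, proceed

    if __name__ == "__main__":
        s = StopSignState(arrival_order=0, others_stopped=True, seconds_waiting=1.0)
        print(choose_action(s))  # -> "creep"

The point of the creep state is exactly the behavior described above: a human-readable signal to other drivers, not a safety-critical maneuver.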



I think there was some discussion about the ethics of the software in Google's cars because they would solve driving problems with a view to minimizing harm to all involved, not merely the car's occupants. So I think this kind of thing is very much in development, and indeed more sophisticated than that (e.g. the car might swerve into another vehicle to avoid hitting pedestrians).

On a related note, here's Chris Urmson's long talk at SXSW on how Google does automatic driving.[1] This is worth watching. He shows what Google cars sense and gives an overview of how they analyze the data. This is the first time Google has released this level of detail. If you're going to comment on automatic driving, you need to see this.

Urmson goes into great detail on Google's 2mph collision with a bus. The sensor data from the Google car is shown. The video from the bus (it had cameras, too) is shown. Exactly what the software was trying to do is discussed. The assumptions the software made about what the bus driver would do are discussed. What they did to prevent this from happening again is mentioned.

Most of the talk is about the hard cases. In the beginning, Google developed a highway driving system, but that was years ago. Now they're working hard on dealing with everything that can happen on a road, including someone in an electric wheelchair chasing ducks with a broom.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik


Google had to program their car to stop treating 4-way stops exactly as the law requires and instead behave a little more aggressively: https://www.nytimes.com/2015/09/02/technology/personaltech/g...

Google is pretty much working on this: telling a car to go where I want it to go.

"you need to make a product people can use as fast as possible."

Isn't this what many car companies already do? You have individual features, such as anti-lock braking (decades old), auto parking, lane departure warning and its improvement, lane assist, brake assist, etc., that get put into cars that aren't fully robotic yet.

Except for the PR angle, I still don't see what would make Google's approach a fundamentally better one. That may be because of lack of objective evaluations, though. Anybody know of any?


I think Google as a whole understands this concept. Isn't that why they've held off on the release of their self-driving car? I can't remember where I read this, but I recall an interview where someone said they didn't want to release their self-driving car to the world until it was X times better than the average human driver and extremely competitive on price.

You're right! Google could also, in the meantime, start providing ridesharing services with their self-driving vehicles (note 1: they test them in Austin, in addition to Mountain View; note 2: Google Ventures invested in Uber, not Google proper, and Google and Uber are competitors in the self-driving vehicle space). But we're just speculating at this point. Anything could happen!

> However, it isn't particularly "fair" but they voted against their own interests (if they wanted ridesharing).

I argue this is subjective (that they voted "against their own interests"). They voted for more knowledge about their drivers (something they can control). One should not need to forfeit additional safety regulations to a tech company because irresponsible intoxicated/impaired drivers are being used as the proverbial bogeyman.


That's all good when you're the first to do it, like Google was back in the day. These days everyone and their dog is doing a self-driving car, so they have to fall in line with the rules.

Yeah, I recall Waymo bragging they could do this years ago... a few months before a Google engineer told a Slate reporter that a Google self-driving car would happily run any red light that wasn't already in its map...

The long tail of edge cases in self-driving is vast, and each one needs to be individually addressed.


It's interesting that they portray self-driving capabilities as something that can be turned on or off, unlike Google's approach, where it's just always on.

I think in the long run Google might be building the correct solution for a greater number of people.


More details would be helpful. If this goes to court, the video from the self-driving vehicle would be interesting.

An important question is whether their system is smart enough to take evasive action.

It's becoming clear that there are two ways to approach self-driving. The first stems from the DARPA Grand Challenge, which was about off-road driving. For that, the vehicles had to profile the terrain, plotting a path around obstacles, potholes, and cliff edges. The GPS route was just general guidance on where to go. That's the approach Google took, as can be seen from their SXSW videos. Google also identifies moving objects and tries to classify them. With all that capability, it's possible to take evasive action if some other road user is a threat. The control system has situational awareness and knows where there's clear space for escape.

The other approach is to start with lane following and automatic cruise control, and try to build them up into self-driving. This can be done entirely with vision systems. That's the Cruise Automation and Tesla approach. This puts the car on a track defined by lines on the pavement, with lane changes and intersections handled as a special case. There usually isn't a full terrain profile; that requires LIDAR. So there isn't enough info to plan an emergency maneuver for collision avoidance.

This distinction is not well understood, and it should be.
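
To make the distinction concrete, here's a toy sketch of the kind of check the first approach enables: with a full occupancy/terrain profile you can measure the clear space around the vehicle and pick an escape path, whereas a lane-lines-only model has nothing to search. The grid layout, cell size, and maneuver names are all made up for illustration, not any vendor's actual stack.

    # Toy illustration: evasive-action planning over an occupancy grid.
    # Grid dimensions, cell size, and candidate maneuvers are assumptions.

    import numpy as np

    CELL_M = 0.5  # each occupancy cell covers 0.5 m x 0.5 m (assumed)

    def free_space_ahead(occupancy: np.ndarray, lane_cols: slice) -> dict:
        """Clear distance ahead in the own lane and on each side.

        occupancy: 2D bool grid, True = obstacle, row 0 is just ahead of the car.
        lane_cols: grid columns covered by the current lane.
        """
        def clear_rows(cols: slice) -> int:
            region = occupancy[:, cols]
            blocked = np.where(region.any(axis=1))[0]
            return int(blocked[0]) if blocked.size else region.shape[0]

        left = slice(0, lane_cols.start)
        right = slice(lane_cols.stop, occupancy.shape[1])
        return {
            "own_lane_m": clear_rows(lane_cols) * CELL_M,
            "left_m": clear_rows(left) * CELL_M,
            "right_m": clear_rows(right) * CELL_M,
        }

    def pick_escape(space: dict, stopping_distance_m: float) -> str:
        """If we can't stop within the own lane, swerve toward the larger gap."""
        if space["own_lane_m"] >= stopping_distance_m:
            return "brake in lane"
        return "swerve left" if space["left_m"] > space["right_m"] else "swerve right"

    if __name__ == "__main__":
        grid = np.zeros((40, 9), dtype=bool)  # 20 m ahead, ~4.5 m wide
        grid[10, 3:6] = True                  # obstacle 5 m ahead, in our lane
        space = free_space_ahead(grid, lane_cols=slice(3, 6))
        print(space, "->", pick_escape(space, stopping_distance_m=8.0))

A vision-plus-lane-lines stack without a terrain/occupancy profile simply doesn't have the `left_m`/`right_m` information to evaluate, which is why it can't plan this kind of emergency maneuver.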


These principles seem to have already been applied wholeheartedly to Google's, and then Waymo's, approach to self-driving cars. This is in stark contrast to the approaches used by competitors such as Uber and Tesla, who appear to favor capturing the market first and foremost.

It seems a narrow view to assume Google's only AI project with mortal consequences for humans is Maven, and then to use that narrow view to confirm your own negative bias about profit, perception, and disingenuousness.


Google has taken the approach they have very deliberately. They believe in full self-driving only. It's not that they couldn't do lane changing and autopilot; it's that they believed it was the wrong approach. No steering wheel and no dependence on a human driver is what they want.

Google self-driving car : Tesla self-driving :: Google AI : OpenAI AI.

Google is taking a very conservative approach, but their work is years ahead of groups getting far more press.


I think Google is doing a lot on automated driving, just as Tesla is.

I think they're currently concerned with making sure the vehicle drives safely. Many humans apply evasive maneuvers, only to end up killing someone else or hurting themselves in other ways.

All this shows is that the Google car was driving well, and you weren't. Though I'm sure as the tech progresses they'll look into this sort of thing, and will implement what makes sense.


Google has a completely different strategy. They're aiming for 100% automation via a more gradual testing process before making any sort of automation available to consumers.

I believe their thought is that the closer we get to complete automation, the less likely a driver is to remain aware. We're only at roughly 10% automation right now and drivers are already taking their eyes off the road. When we get to 90%, humans will have an even tougher time retaining attentiveness. Watch their talks if you want to confirm this.

Google's self driving car group gives some awesome transparency reports every month, including details about every accident [1]. It's like they're ready to become their own company.

Fortune had a good article critiquing Tesla's strategy vs. Google's and other car companies' [2]. The author says Tesla is being defensive and resistant to public critique. Other companies expect pushback from the public and incorporate that into their product offerings.

[1] https://www.google.com/selfdrivingcar/reports/

[2] http://fortune.com/2016/07/11/elon-musk-tesla-self-driving-c...


Google already has cars that drive themselves in stop-and-go traffic with pedestrians present. Next generation? Closer than you think.

Did Google demo anything that car companies haven't been working on for many years, and didn't themselves demonstrate years ago?
