The run part: I think they're a young overseas resident who panicked. They returned and handed themselves in to police at the scene.
A lot of younger foreigners have no idea what to do in an accident and, in their home country, can face retribution or extortion, sometimes at the hands of police. In Australia, you stay, you help the victim, you report the accident. That's the law. In theory all drivers should know this, but you get a year on your homeland licence before having to sit a local road rules exam.
I have nothing to add regarding self driving. Well I do: I think it's been misnamed and oversold.
Yep. I marvel at the "Machine Learning AI ad algorithms" that try to sell me junk. I get 3 (three!) emails a day from AliExpress about stuff I already bought ... from AliExpress. Same thing with Amazon. I bought a, let's say, "toaster" from Amazon, and for the next 6 months every weird Amazon ad on the wide internet, and every visit to the Amazon home page, was trying to sell me toasters.
I don't even see it as "it's like a child, it finds something interesting and repeats it", because even a child gives up after a week. Maybe a lot of people don't see "AI" as "old tech", even though some of the first real "advances" - and I say that with a wry smile - were documented 40 years ago or more.
It's complete and utter snake oil. There are some cute toys that AI/ML can do: I can clone certain voices passably, for instance. There are "deepfakes", where a screenshot is enough to maybe fool someone not wearing their glasses at 3 meters, on a smartphone screen. There's obviously the AI in video games, etc.
But we're a good 25 years out from a truly autonomous, road-worthy, self-driving human transport.
Someone will always have to "look over the shoulder" until then.
The day I can hire a private self-driving taxi/limo to pick me up at my house and drive me to an arbitrary location, at any time of day or year, is the day I start to tinfoil my house to prevent the grey goo from coming inside.
edit: I should note that I would ab-so-lutely love a car service that was cheaper than hiring a limo for a week, if you can even do that. If the car is driving by itself, I can sleep while it gets from one interesting place in the US to another, and spend most of my time awake documenting and photographing and interviewing people, rather than hating Interstates 10, 20, and 40, which is how most of my "road trips" for the past 20 years have gone. I've traveled this country "a lot", and I rarely have cute stories about the places I've been to, because I've either been completely wiped and just slept there somewhere, or I was in a hurry to beat the sun/state police/whatever through the state. I'd love to spend a few weeks in Wyoming and Montana and Wisconsin, but if I have to drive it, the only thing I'll publish is a couple of shots of mountains and the kids posing in front of some tourist trap, and that's about it.
I passively consume, yet nearly universally ignore and eschew all "AI" news.
Where AI has made really great advancements is all the stuff that you can memorize.
Like GitHub Copilot regurgitating John Carmack's source code. Also, stochastic language models have reached the point where they seem like magic, e.g. automated YouTube subtitles and translations, or text-to-speech. But that all works by memorizing word fragments (CTC) and/or memorizing sentence fragments (n-grams).
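To make the "memorizing sentence fragments" point concrete, here's a toy bigram model (my own illustration, nothing to do with how YouTube's or Copilot's systems are actually built) that can only ever emit word sequences it has already seen in its training text:

    import random
    from collections import defaultdict

    # Toy bigram "language model": memorize which word followed which in
    # the training text, then sample continuations from those counts.
    corpus = "the cat sat on the mat and the cat slept".split()

    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=6):
        word, out = start, [start]
        for _ in range(length):
            if word not in follows:
                break  # dead end: never seen this word lead anywhere
            word = random.choice(follows[word])
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the mat and"

Real systems use vastly bigger tables and neural smoothing, but the "seems like magic, is actually recall" flavour the parent describes is the same.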
> But we're a good 25 years out from a truly autonomous, road-worthy, self-driving human transport.
It sounds like your reasoning goes like this: my interactions with things which are supposedly AI are bad; self-driving cars are also some form of AI; therefore self-driving cars must be bad and/or far away too.
Did I understand you right?
The problem with that logic is that there is nothing in common in implementation/architecture/incentives between the things you mention and self-driving technology.
Nobody, well, nearly nobody, tries to implement self-driving cars in a black-box “AI” fashion. What I mean is that we don’t just throw sensor data at a neural network, then squint and say “I reckon it is going to drive well now”. That would be madness.
Most approaches break the problem down into sub-problems covered by sub-systems. The sub-systems are fed information with known error properties and engineered to specification. The failure modes are painstakingly traced through and documented. Then, in turn, assemblies of these sub-systems, and the whole, are reasoned about similarly. Fault trees are drawn and the operational domain is considered. The reasoning for why the engineers think the system is safe and has the right redundancies in place is more complicated than the code itself.
Some of these sub-systems are implemented using what one would call “AI”. Particularly in the object recognition domain that seems to be the state of the art. But the failure modes and shortcomings of these systems are considered and reasoned about the same way you would with a good old-fashioned Kalman-filter-based sub-system. It is known that they are going to fail in various situations in various ways. The trick is to engineer the whole system such that it remains safe despite these sub-systems having these characteristics.
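A minimal sketch of that idea, with invented names and thresholds (this is not any real vendor's architecture): a camera detector that reports its own confidence is cross-checked against an independent radar reading, and the supervisor falls back to a safe stop whenever the parts disagree beyond their documented error bounds.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        distance_m: float   # camera sub-system's distance to nearest obstacle
        confidence: float   # its self-reported confidence, 0..1

    def radar_range_m() -> float:
        # Stand-in for an independent radar sub-system with known error bounds.
        return 24.8

    def plan_speed(det: Detection) -> float:
        # Supervisor: trusts neither sub-system alone. Low confidence or
        # cross-sensor disagreement triggers a conservative fallback
        # instead of "believing the AI".
        radar = radar_range_m()
        if det.confidence < 0.5 or abs(det.distance_m - radar) > 2.0:
            return 0.0                    # degraded mode: brake to a stop
        gap = min(det.distance_m, radar)  # take the more pessimistic reading
        return min(27.0, gap / 2.0)       # crude headway-based speed cap, m/s

    print(plan_speed(Detection(25.0, 0.9)))  # sensors agree -> 12.4 m/s
    print(plan_speed(Detection(60.0, 0.9)))  # disagreement -> 0.0, safe stop

The point isn't the numbers; it's that the envelope of the whole is engineered around the known failure modes of the parts.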
I’m not saying that we will have safe self-driving soon. What I’m saying is that you can’t reason about self-driving cars by saying “commercial entity X is spamming me with bad marketing crap. People talk about AI behind said marketing crap. Therefore self-driving cars are far away.”
> But the failure modes and shortcomings of these systems are considered and reasoned about the same way you would with a good old-fashioned Kalman-filter-based sub-system.
My personal definition of what constitutes "AI" includes emergent behavior as a prerequisite. To lack emergence is to be a programmable logic controller.
Do these systems you're talking about exhibit emergence?
If so, how can you possibly reason about their failure modes?
They don't; that's why they came up with "general AI" (good luck with that) vs "specific AI" (machine learning), so they can keep milking VCs for money via buzzwords.
The self-driving car won't be human-like; merely a set of algorithms using trained networks to solve problems.
> Do these systems you're talking about exhibit emergence?
What is your definition of emergence? I found this one on wikipedia:
“In philosophy, systems theory, science, and art, emergence occurs when an entity is observed to have properties its parts do not have on their own, properties or behaviors which emerge only when the parts interact in a wider whole.”
If I go by this, then yes, such a system exhibits emergence. The whole has the property of driving the car while the parts alone don’t. But so does an elevator: neither the winch, the cable, nor the shaft lifts people up alone, but the whole of it together does.
> If so, how can you possibly reason about their failure modes?
Same way you do it with an elevator? It’s “just” much more complex, because the operational domain is much more complex.
> I think they're a young overseas resident who panicked.
Ok, I need to respond. Unless I missed something, there's nothing to suggest that the person involved is an overseas resident.
> but you get a year on your homeland licence before having to sit a local road rules exam.
P-plates suggest she's going for a local driving licence. If you are a foreigner from quite a few countries, you just show your existing driving licence and get a Vic one without any exams and without P-plates.
They're likely going on this, which in local media usually implies they're fleeing back to a home country: "Sakshi Agrawal has been granted conditional bail, despite police protesting she poses a flight risk."
And/or from another local article:
"The 23-year-old, who is on a bridging visa, was released on bail despite concerns from police prosecutors she could flee the country."
"The court heard that Ms Agrawal, whose driver’s licence had previously been suspended..."
You're right, the linked article doesn't mention it.
If you search the driver's name and Tesla, other articles do claim that she is not a citizen or permanent resident of Australia and is there on a bridging visa.
Which is exactly what I did: I had read other local news stories about her initial court appearance, where police argued she might be a flight risk.
> Ok, I need to call this out. Unless I missed something, there's nothing to suggest that the person involved is an overseas resident.
Just based on the picture I'd be pretty confident she is. I find it fairly easy to pick out young people from Asia who are recent immigrants based on their dress and hairstyles.
I'm with you on the fashion bit, but I'm a recent immigrant (to Australia, as it happens) and the driving culture here feels a lot saner than where I'm from: being asked for bribes, ignoring pedestrian lights and crosswalks, queuing across and blocking intersections, etc.
No, that's not controversial. The controversial thought chain is: she looks foreign, therefore she is foreign, therefore she learned the bad foreign driving culture, therefore she ran from the accident. It may well be true! But those are assumptions posted only because of the photo / her name. If it was my photo, we wouldn't end up with this thread.
I hope you don't write software with that kind of logic. You cannot follow a law that you are ignorant of. The phrase you're looking for is "ignorance is no defence". Even that is debatable, subject to the reading of the relevant applicable laws. Let's leave laws to lawyers and judges, shall we?
> In Australia, you stay, you help the victim, you report the accident. That's the law.
I don't think the law is much different in any country; maybe in some countries it is not respected, due to the possibilities of retribution/extortion you mention.
It's easy to make low-effort quips about the letter of the law, but we all know that even in the first world, customary practice can be far more nuanced than the letter of the law. And the gap only gets wider from there.
> A hit-and-run incident in Melbourne yesterday could set a legal precedent for the use of autonomous driving technologies in Australia.
There is no precedent to be set. If you are behind the wheel of a car, you are responsible for its operation, 'Autopilot' or not. And it's hard to say it was the autopilot's fault that the driver drove off and only went to the police station two hours later.
Don't believe everything marketers tell you. If you read more carefully, you'll see they say they will only take legal responsibility "while the system is active."
The catch is the Mercedes system will deactivate and hand control (and liability) right back to the human just in the nick of time if it encounters any marginal conditions (stormy weather, presence of emergency vehicles, etc…).
So to the non-gullible who see through the marketing BS, it just looks like an attempt to one-up Tesla with a gimmick.
We shall see whether that was BS or not. The unbreakable windows part, yeah probably a bit of BS, but it doesn't rise nearly to the level of what other car companies put out. I mean… VW…
And Mercedes promised self-driving taxis "in three years from now" back in 2017, so… Tesla is not alone in delivering this later than expected.
The second one has been debunked repeatedly. Elon has predicted self-driving dates (wrongly), not promised them. A prediction is not a promise.
Mercedes is two years late on this "promise" too if we're doing that. Sure, Elon has been making his wrong predictions for more years, but now Mercedes is copying him:
So your claim is that Teslas had "the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver" five years ago, conveniently ignoring the fact that the hardware has changed multiple times since then and even more conveniently ignoring the fact that Tesla still doesn't have a full self-driving product. Sorry, but that's bullshit.
> The second one has been debunked repeatedly
So your claim is that there are 1 million Tesla robotaxis operating on the roads right now. This too is plainly bullshit.
Clearly Tesla's advertising has worked on you. The advertising has worked so well that you're fully bought into the bullshit to the point of irrationality.
It’s not about me, it’s more about overinterpreting statements. News flash: Elon can be wrong, has been wrong, will be wrong. This is not news, even to his fans.
Taxi timeline was a prediction, not a promise. Elon. Can. Be. Wrong.
Hardware is not software. The hardware working depends on software that is not done yet.
Some hardware has been upgraded, to be sure. The jury is still out on whether that old claim will pan out, pending future software updates. I doubt it will, but I think it was made in earnest, not as “BS”.
> News flash: Elon can be wrong, has been wrong, will be wrong.
So you agree: it's bullshit.
> I doubt it will, but I think it was made in earnest, not as “BS”.
No, it's just bullshit. Musk's strategy is to keep telling bigger lies because there are always more suckers to fleece. His latest bullshit is the Tesla Bot.
Fair enough on the distinction between paid advertising and their own YouTube ad pieces. I don't see how this channel is "BS", per the OP's accusation, but I take your point on it being advertising.
That’s meaningless if the law specifically says the driver is responsible. If you’re behind the wheel you’re responsible for where the car goes. It’s that simple.
If you choose to use some kind of driver assist to get you there that’s a decision you as the driver made and you are responsible for the consequences of that.
First of all, Tesla Autopilot is not full self-driving. Autopilot is an assistance system combining distance-aware cruise control with lane keeping (Autosteer) and lane changing. It always requires full driver attention.
The Mercedes system has been legally recognized as a Level 3 system, which allows the driver to stop paying attention while the system is active, and for that Mercedes takes legal responsibility. There is even a European law allowing for this. However, the Mercedes system has one big catch: it can only be activated on selected, mapped roads (interstate equivalents with lane dividers) and only up to 40 mph.
Tesla FSD (Full Self-Driving) is a completely new system for autonomous driving. It is currently in a beta-testing phase: it also still requires full driver attention and is available only to a subset of customers as part of the testing. It is, however, designed to handle any traffic situation on any road.
The precedent the article discusses is that use of autonomous driving may void comprehensive insurance, making the driver fully liable for any compensation. Not mentioned is whether this might also affect compulsory third-party insurance, which would make this tech effectively illegal in Australia.
Quoting the article:
"David McCarthy – a former journalist and current consultant for automotive manufacturers – told Drive the incident and upcoming court case will be significant for two reasons: It will set a precedent for driver culpability in the eyes of the law, and will set a precedent for insurance companies paying out claims based on 'driver responsibility'.
“The comprehensive insurance on the car is unlikely to pay for the damage if she says she wasn’t in control of the vehicle … Further, to my knowledge there hasn't been a case in Australia involving the use of autonomous technology in an accident and it will be up to the court to determine liability here."
In Melbourne there are quite often four-lane roads (two lanes in each direction) with tram lines in the inner lane of each direction. At tram stops, passengers need to walk across the outer lane to get to/from the sidewalk, so cars need to stop behind the tram, even if they are in the outer lane and not directly behind the tram, so they can make way for passengers.
Having moved here from Sydney a few years ago, where there are no trams in this configuration, this took a lot of getting used to: if I was in the outer lane, my natural instinct was to keep driving, as the road ahead seemed clear at first.
I'm not sure how common this is globally, but I wonder if Tesla's Autopilot is configured for this edge case. I imagine it would be difficult to program, since it would detect the tram as simply another vehicle and the road ahead as clear. Then again, I suppose it should have detected the pedestrian in any case?
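For what it's worth, the rule itself is easy to state in code; here's a toy sketch (hypothetical types, nothing to do with Tesla's actual stack) that suggests the hard part is the perception layer recognising a tram as a tram, not the planning logic:

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        kind: str      # "car", "tram", ... -- requires perception to actually
                       # classify trams rather than lumping them in with trucks
        stopped: bool

    def must_stop_for_tram(objects) -> bool:
        # Victorian road rule, simplified: stop behind a stopped tram whichever
        # same-direction lane you're in, because passengers cross the outer lane.
        return any(o.kind == "tram" and o.stopped for o in objects)

    print(must_stop_for_tram([TrackedObject("tram", True)]))  # True: wait
    print(must_stop_for_tram([TrackedObject("car", True)]))   # False: proceed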
Tesla Autopilot isn’t available in Australia, so probably not. Also, isn’t Autopilot just for controlled-access highways, unless they were using a beta?
If you have FSD, your car (essentially Enhanced Autopilot here) will sometimes detect the little stop sign that pops out of the old Melbourne trams. With the new trams I haven’t seen it do that. But regular Autopilot wouldn’t do anything.
I didn’t realize Autopilot disabled the vehicle’s brakes. This poor driver couldn’t stop even after the collision. Seems like a terrible design choice.
Autopilot does not disable the vehicle's brakes. In fact, pressing the brakes immediately disables Autopilot. If they're actually claiming that they had no brakes, then it's likely they're lying about using Autopilot at all.
The key word here is "blamed" as this has likely nothing to do with Autopilot. The person panicked and fled the scene and then made up stories to pretend they were not the cause of the accident.
Melbourne doesn't generally have what Autopilot would consider "normal" roads, though. The incident involved someone getting off a tram, so the road would have looked something like this:
In the future, perhaps Teslas will handle this well, but you only need to watch a few videos of them encountering similar markings on YouTube to know that this type of road currently results in massive confusion for a Tesla.
Elon just posted that even the FSD beta doesn’t handle streetcars (trams?) in Canada very well. So it’s probably a big gap in their training set… I guess SF trolleys are quite different, or mapped, or something.
Yeah, it probably views them as a big truck or something. In any case, I would be really surprised if regular (non-FSD) Autopilot would be happy with the markings on a Melbourne street that has tram lines.
It’s hit-and-miss for sure. Sometimes it sees the tram track as the lane line, or the centre line, or the yellow line. The car basically starts steering erratically as it adjusts to what it thinks is the centre.
No excuse to be using Autopilot past a stopped tram… As a normal driver you’d never go full speed past a stopped tram even if its doors were closed, as it doesn’t take long for them to open.