(comments)

Original link: https://news.ycombinator.com/item?id=43487231

A Hacker News discussion centers on a report claiming that Waymo's autonomous vehicles crash far less often per mile driven than human drivers. Commenters raise concerns about Waymo's reliance on pre-mapped routes, and about the correlated-failure risk that large-scale deployment and fleet-wide software updates could introduce. Some argue that Waymo operates in controlled environments, drives under safer conditions, and avoids challenging routes, which makes a direct comparison with human drivers difficult. Others point out that human drivers likely under-report minor collisions that Waymo's comprehensive tracking systems would capture. The discussion also touches on whether Waymo's crash rate is genuinely better than human drivers' once statistical error is accounted for, and on how often Waymo is at fault in its crashes, with some asserting that, per Waymo's own reports, it rarely is. Finally, the thread considers the differences between the regulatory standards applied to autonomous vehicles and those applied to human drivers.


Original text
After 50 million miles, Waymos crash a lot less than human drivers (understandingai.org)
50 points by rbanffy 30 minutes ago | 23 comments

Waymos choose the routes, right?

The issue with self-driving is (1) how it generalises across novel environments without "highly-available route data" and provider-chosen routes; (2) how failures are correlated across machines.

In safe driving, failures are uncorrelated and safety procedures generalise. We do not yet know whether deploying self-driving very widely will lead to conditions in which a few correlated incidents kill more people than the technology ever hypothetically saved.

Here, without any confidence intervals, we're told we've saved ~70 airbag incidents in 20 million miles. A bad update to the fleet could easily eclipse that impact.



I wonder if you can decrease the impact of (2) with a policy of phased rollouts for updates, i.e., you never update the whole fleet simultaneously; you update a small percentage first and confirm no significant anomalies are observed before distributing the update more widely.
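
A minimal sketch of what such a staged-rollout gate might look like. All of it is illustrative: the stage fractions, the incident metric, the 1.5x halt threshold, and the telemetry function are invented stand-ins, not anything Waymo has described.

    import random

    # Hypothetical staged ("canary") rollout gate. Stage sizes, the
    # anomaly metric, and the halt threshold are invented for illustration.
    STAGES = [0.01, 0.05, 0.25, 1.00]   # fraction of fleet on the new build
    BASELINE_RATE = 0.002               # assumed incidents per vehicle-day
    MAX_RATIO = 1.5                     # halt if canary exceeds 1.5x baseline

    def canary_incident_rate(fraction):
        """Stand-in for real fleet telemetry from the canary cohort."""
        return BASELINE_RATE * random.uniform(0.8, 1.2)

    def rollout():
        for fraction in STAGES:
            rate = canary_incident_rate(fraction)
            if rate > BASELINE_RATE * MAX_RATIO:
                print(f"halting at {fraction:.0%}: rate {rate:.4f} is anomalous")
                return False
            print(f"stage {fraction:.0%} ok (rate {rate:.4f}); expanding")
        return True

    rollout()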


Ideally you'd selectively enable the updated policy on unoccupied trips on the way to pick someone up, or returning after a drop-off, such that errors (and resultant crashes) can be caught when the car is not occupied.


Does Waymo also choose the times and conditions of driving? Or do the cars always drive, even at night and in heavy rain?


> The issue with self-driving is (1) how it generalises across novel environments without "highly-available route data" and provider-chosen routes; (2) how failures are correlated across machines.

Why is (1) an issue? Route data never gets worse.



Consider London: a series of randomly moving construction sites connected by patches of city.

Waymo, as far as I recall, relies on pretty active route mapping and data sharing -- i.e., the cars aren't "driving themselves" in the sense of discovering the environment the way a truly autonomous system would.



Route data gets worse all the time.

Any time there is a detour, a construction zone, a traffic accident, a flooded road, or whatever else, your route data is not just "worse"; it is completely wrong.



> Route data never gets worse.

Construction? Parade? Giant tire-crunching pothole in the middle of the freeway?



> Using human crash data, Waymo estimated that human drivers on the same roads would get into 78 crashes serious enough to trigger an airbag. By comparison, Waymo’s driverless vehicles only got into 13 airbag crashes. That represents an 83 percent reduction in airbag crashes relative to typical human drivers.

> This is slightly worse than last September, when Waymo estimated an 84 percent reduction in airbag crashes over Waymo’s first 21 million miles.

nitpick: Is it really slightly worse, or is it "effectively unchanged" with such sparse numbers? At a glance, the sentence is misleading even though it might be correct on paper. Could've said: "This improvement holds from last September..."



Of course it's not worse; these numbers have huge error bars. Statistically, the two estimates are not significantly different. But trying to explain that to people with no knowledge of statistics is tough.
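
For what it's worth, those error bars can be sketched from just the two figures quoted above (13 Waymo crashes vs. 78 modeled human crashes), treating the Waymo count as Poisson. This is a simplification; it ignores uncertainty in the 78 figure, so the real error bars are wider still.

    from scipy.stats import chi2

    # Exact (Garwood) 95% Poisson interval on the 13 observed crashes,
    # converted to a reduction relative to the 78 modeled human crashes.
    observed, expected_human, alpha = 13, 78, 0.05

    lo = 0.5 * chi2.ppf(alpha / 2, 2 * observed)
    hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (observed + 1))

    print(f"95% CI for the crash count: {lo:.1f} to {hi:.1f}")   # ~6.9 to ~22.2
    print(f"point reduction: {1 - observed / expected_human:.0%}; "
          f"interval: {1 - hi / expected_human:.0%} to {1 - lo / expected_human:.0%}")

With an interval spanning roughly a 72 to 91 percent reduction, a one-point move from 84 to 83 percent is well inside the noise.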


I wonder how many of those airbag crashes were the fault, or shared fault, of the Waymo AVs.


Considering that there's a >1000:1 ratio of regular cars to Waymo AVs, Waymo would have to be EXTREMELY terrible at driving to move the numbers for the other group meaningfully, and that would show up in Waymo's own crash data.

There's also historical data. So if you saw a spike in crashes for regular vehicles after Waymo arrives, it would be sus. But there is no such spike, which is further evidence Waymo isn't causing problems for non-AVs.

Of course anything is possible. But it's unlikely.
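
Back-of-envelope on that ratio (all numbers invented for illustration): even a hypothetical Waymo fleet crashing ten times as often as human drivers would move the aggregate numbers by only about one percent.

    # Invented figures: 1,000,000 regular cars vs. 1,000 Waymos, with a
    # hypothetical Waymo that crashes 10x as often as a human-driven car.
    regular_cars, waymos = 1_000_000, 1_000
    human_rate, bad_waymo_rate = 1.0, 10.0   # crashes per car-year, arbitrary units

    baseline = regular_cars * human_rate
    with_waymo = baseline + waymos * bad_waymo_rate
    print(f"aggregate increase: {with_waymo / baseline - 1:.2%}")   # 1.00%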



I'm confused by your comment. We shouldn't expect Waymo accidents to budge overall accident numbers (which seems to be what you're talking about), but it wouldn't be crazy for Waymo, even if it were much safer overall, to be responsible for a non-trivial share of the accidents it is involved in.

For example, imagine that Waymo is (somehow) far, far superhuman in its ability to avoid other cars doing dumb/bad things. It achieves a dramatic reduction in overall accidents because it magically eliminates accidents where the other driver is at fault. But in some very specific circumstance, it can't figure out the proper rate to slow down at intersections, and it consistently rear-ends vehicles in front of it. This specific situation is very rare, so overall accidents remain low (much lower than for human drivers), but in our made-up, constructed (and extremely nonsensical) hypothetical, nearly 100% of Waymo accidents are Waymo's fault.

So I don't think it's ridiculous to ask how many of the accidents Waymo has been involved in are the fault of the Waymo vehicle. It turns out that (assuming Waymo's side of the story is to be trusted), almost none of them are their fault, but it didn't have to be that way, even in the case where Waymo accidents were more rare than human accidents.



Assuming you trust Waymo's account, the article details them, saying the following:

>So that’s a total of 34 crashes. I don’t want to make categorical statements about these crashes because in most cases I only have Waymo’s side of the story. But it doesn’t seem like Waymo was at fault in any of them.



OT, but does anyone know the shape of the distribution of automobile crashes per human driver? Is it a uniform distribution, where everyone is more or less equally likely to get into, say, 1.2 fender-benders per lifetime? Or is there a cluster of people who are much more likely to be involved in crashes? I suppose automobile insurance companies would have this kind of information.
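
One way to see what the two hypotheses would look like in data is a toy simulation (all numbers invented): model "clustered" risk as gamma-distributed per-driver rates, which makes the population's crash counts negative binomial rather than Poisson.

    import numpy as np

    rng = np.random.default_rng(0)
    n_drivers, mean_rate = 100_000, 0.2   # invented: 0.2 crashes per driver on average

    # Hypothesis A: every driver has the same risk -> Poisson counts.
    uniform_counts = rng.poisson(mean_rate, n_drivers)

    # Hypothesis B: per-driver risk varies (gamma-distributed, same mean)
    # -> negative binomial counts, i.e. a cluster of high-risk drivers.
    risk = rng.gamma(shape=0.5, scale=mean_rate / 0.5, size=n_drivers)
    clustered_counts = rng.poisson(risk)

    for name, counts in [("uniform", uniform_counts), ("clustered", clustered_counts)]:
        top_share = np.sort(counts)[-n_drivers // 100:].sum() / counts.sum()
        print(f"{name}: share of all crashes from the 1% of drivers "
              f"with the most crashes = {top_share:.0%}")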


I would hope so! Are these rides actually cheaper? Presumably this is orders of magnitude less expensive than hiring a human, and the driver is what you pay for.

I don't see myself using any of these any time soon; I tend to drive and walk everywhere and don't see much point in paying someone else to drive barring extenuating circumstances. But assuming actual cost benefits are delivered to customers, this could be pretty exciting.



This statistic could be misleading, because not all miles are equally dangerous. Google is very careful about selecting where it deploys and tests Waymo, preferring flat, safe, well-designed areas. Routing is also closely monitored, and I would imagine that problematic roadways are avoided. The article says they compared it to human accident rates "on the same roads" but doesn't clarify the methodology for "sameness". It also doesn't factor in driver experience. A taxi driver who has memorized a particular route is likely to drive more safely than a tourist who has never been on that road before. Waymo may be safer than the average driver on road X, but that doesn't mean it will have the same comparative performance if you drop it onto a random road it has never driven before, with no assistance from human support staff.


> preferring flat, safe, well-designed areas.

Like downtown San Francisco?



One of the more interesting things Waymo discovered early in the project is that the actual incidence of vehicle collisions was under-counted by about a factor of 3. This is because NHTSA was using accident reports and insurance data for its tracking statistics, but only 1/3 of collisions were bad enough for either first responders or insurance to get involved; the rest were "Well, that'll buff out and I don't want my rates to go up, so..." fender-taps.

But Waymo vehicles were recording and tracking all the traffic around them, so they came out of the starting gate with more accurate collision numbers, by running a panopticon on drivers on the road.



Another point of view: if CVC violations were enforced against human drivers as robustly as they are against Waymos, and if human drivers were held to the same standards of liability as Waymos, human drivers in California would be way safer too. To me, the overall safety of all the driverless programs should be interpreted as a huge victory for regulators.

That said, I don't know why California puts up with the immense danger of Teslas. You'd think that if regulators could figure out that Cruise failed to share a crash video properly, then after hundreds of incidents with the state's most fatality-prone car they could also figure out that Teslas disengage Autopilot moments before collisions and that drivers delete video data on purpose. Especially when even I, a nobody, know about it.



Another point of view is: does the comparison involve similar vehicles, with similar maintenance, and under similar conditions?


False. Humans drive in a much wider range of weather, road conditions, car conditions, passenger conditions, routes, unknown destinations, etc.


Something like a third of traffic fatalities are due to one of the drivers being drunk.