Waymo reported to the NHTSA that one of its driverless vehicles struck a child near an elementary school in Santa Monica, California, on January 23rd, prompting an NHTSA investigation. The collision occurred during school drop-off hours within two blocks of the school, and the child sustained minor injuries. The Waymo vehicle, operating without a human safety supervisor, was running the company’s 5th Generation Automated Driving System. The NHTSA will evaluate the vehicle’s caution, its behavior in school zones, and Waymo’s post-impact response.
According to the official documentation, the child ran across the street toward the school from behind a double-parked SUV, and the Waymo autonomous vehicle struck them. The National Highway Traffic Safety Administration (NHTSA) has opened a preliminary evaluation to understand exactly what happened.
The Waymo Driver, the AI system controlling the car, immediately detected the child, according to the company. The system braked, slowing the vehicle from approximately 17 mph to under 6 mph before contact. Waymo asserts that, in the same situation, a human driver would likely have hit the child at a higher speed, possibly 14 mph, given typical reaction times. This is an important detail: it highlights a potential safety benefit of the technology, namely reducing the severity of the impact.
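A rough way to see why that speed difference matters: kinetic energy, a common proxy for impact severity, scales with the square of speed. A minimal back-of-envelope sketch using the article’s figures (the function name is illustrative, not from any official analysis):

```python
# Impact severity comparison, assuming kinetic energy E = 1/2 * m * v^2
# and the same vehicle mass in both scenarios, so the ratio reduces
# to (v1 / v2) ** 2. Speeds are from the article: ~6 mph at contact
# for the Waymo vs. an estimated ~14 mph for a human driver.

def relative_impact_energy(v_impact_mph: float, v_reference_mph: float) -> float:
    """Fraction of the reference impact's kinetic energy, mass held constant."""
    return (v_impact_mph / v_reference_mph) ** 2

ratio = relative_impact_energy(6, 14)
print(f"{ratio:.0%}")  # ~18% of the energy of a 14 mph impact
```

Under these assumptions, a 6 mph impact carries less than a fifth of the energy of a 14 mph one, which is consistent with the article’s framing that the braking substantially reduced the harm.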
Considering this information, the incident could have been far worse. The child darted out unexpectedly, and the Waymo’s braking appears to have mitigated the potential for serious injury. It’s easy to see why the initial reaction might be fear, especially given the headlines, but the reality is more nuanced. The key takeaway is the comparison: even a fully attentive human driver might have caused greater harm.
It’s worth noting the role of the environment in this event. The child’s actions, running into the street from behind a parked SUV, played a part, and the presence of large vehicles double-parked near schools creates a dangerous situation by blocking visibility. It’s a recurring problem at schools everywhere. Parents are told not to park this way, but they still do.
This raises a crucial question: is autonomous technology safer than human drivers? This incident, while concerning, shows how an automated system can react faster than a human; as noted above, the Waymo likely hit the child at a lower speed than a human driver would have, given reaction times. The data suggests that, on the whole, autonomous vehicles may be statistically safer, which would mean fewer accidents and fewer fatalities.
The fearmongering surrounding autonomous vehicles is often misplaced. It’s easy to get caught up in the horror of a situation like this one, but the technology has the potential to significantly improve road safety, and the data so far suggests it already does.
There’s also a broader point about societal perceptions. If the public remains wary of the risks of autonomous vehicles, that pressure could push developers to prioritize safety even further, leading to vehicles that are remarkably safe. It’s easy to imagine a future where these technologies significantly reduce traffic accidents and save lives.
The focus should be on how the Waymo performed compared to what a human would have done. In this case, it appears to have performed admirably, reacting quickly and mitigating potential damage.
It is easy to get distracted by the sensationalism of the event, and unfortunately the child’s parents are likely about to get paid handsomely in a settlement. Instead, consider the broader context of the incident and the potential benefits of autonomous driving.
The investigation by the NHTSA is crucial. We must investigate these events to fully understand the circumstances and how the technology behaved; that is how the technology gets safer. We can’t get to fully self-driving vehicles, saving tens of thousands of lives, if every accident is treated as an OMG moment. The goal is to build a legal and regulatory framework that protects us as these technologies are developed and rolled out.
The goal isn’t to eliminate all risk. The goal is fewer deaths, while maintaining the same legal protections we all have right now.
