This article is produced by NetEase Smart Studio (WeChat official account: smartman163). Focus on AI, read the next big era!
NetEase Smart News, December 20th – In early November, a driverless bus and a cargo truck collided in Las Vegas. Fortunately, no one was injured and there was no serious property damage. However, the incident sparked widespread media and public interest, partly because one of the vehicles was fully autonomous. In fact, the bus had been operating without a human driver for an hour before the crash.
This isn’t the first accident involving self-driving vehicles. Uber has had incidents in Arizona and California, Tesla faced problems in Florida, and several other companies have also reported similar events. In most cases, these accidents were caused by human error rather than the autonomous systems themselves.
In this particular case, the driverless bus detected a truck reversing toward it and stopped to wait. However, the truck's human driver did not see the bus and continued to back up. As the truck closed in, the bus remained stationary, neither moving forward nor backing up, and the truck struck its front bumper.
As someone who has studied autonomous systems for over a decade, I find this incident raises many important questions. Why didn’t the bus honk or attempt to avoid the truck? Why did it stop without moving to the safest position? If driverless cars are supposed to make roads safer, what should they do to prevent such accidents?
In my lab, we are developing driverless cars and buses. Our goal is to tackle a fundamental safety challenge: even when an autonomous vehicle performs perfectly, the human drivers around it can still make mistakes.
How Did the Accident Happen?
There are two main reasons behind collisions involving driverless cars. First, sensors may fail to detect their surroundings. Each type of sensor has limitations: GPS works best with a clear view of the sky, cameras require good lighting, and lidar performs poorly in fog. No single sensor is perfect, and combining their readings effectively is still a work in progress, as the sketch below illustrates. Cost and onboard computing power also limit how many sensors a vehicle can carry.
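To see why fusing imperfect sensors is tricky, consider a minimal sketch of inverse-variance weighted fusion, one common way to combine noisy measurements. This is purely illustrative, not the shuttle's actual software; the sensor list, noise figures, and readings are hypothetical.

```python
# Minimal sketch of inverse-variance sensor fusion (illustrative only).
# Sensor names, noise figures, and readings below are hypothetical.

def fuse_estimates(readings):
    """Fuse (value, variance) pairs into a single estimate.

    Each sensor's weight is the inverse of its variance, so noisier
    sensors (a camera in low light, lidar in fog) count for less.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    return value, 1.0 / total  # fused value and fused variance

# Distance to an obstacle (meters) as seen by three imperfect sensors.
sensors = [
    (10.2, 0.04),  # lidar: precise in clear air, degrades in fog
    (10.8, 0.25),  # camera depth estimate: needs good lighting
    (9.5, 1.00),   # radar: coarse but robust to weather
]

distance, variance = fuse_estimates(sensors)
print(f"fused distance: {distance:.2f} m (variance {variance:.3f})")
```

The weak point is the variances themselves: if fog degrades the lidar but the software still trusts its nominal precision, the fused estimate is confidently wrong. That is the part that remains a work in progress.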
The second major issue is when autonomous vehicles encounter situations that weren’t programmed into their software. Like human drivers, self-driving systems must make split-second decisions based on new information. When they face unexpected scenarios, they often stop and wait. In the Las Vegas incident, the bus may have waited for the truck to move, but the truck kept approaching. It seems the system wasn’t programmed to honk or reverse, and there wasn’t enough space to do so.
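That "stop and wait" default can be pictured as a tiny lookup policy. The sketch below is hypothetical, not the bus's real control code; the scenario labels and actions are invented to show how an unprogrammed situation falls through to a generic halt.

```python
# Hypothetical fallback policy, illustrating why "stop and wait" is the
# common default for scenarios that were never programmed in.

PROGRAMMED_RESPONSES = {
    "pedestrian_crossing": "yield",
    "vehicle_ahead_stopped": "stop_and_wait",
    "traffic_light_red": "stop_and_wait",
    # Note: no entry for "vehicle_reversing_toward_us" -- the Las Vegas
    # bus apparently had no honk-or-back-up behavior for this case.
}

def decide(scenario: str) -> str:
    # Unknown situations fall through to the safest generic action the
    # system knows: halt and wait for the world to change.
    return PROGRAMMED_RESPONSES.get(scenario, "stop_and_wait")

print(decide("vehicle_ahead_stopped"))        # stop_and_wait
print(decide("vehicle_reversing_toward_us"))  # stop_and_wait (by default)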
The challenge for engineers is to build a reliable computer model of the vehicle's surroundings from all available sensor data. The software then uses this model to interpret the environment and make safe driving decisions. If the perception system is flawed, the car cannot make the right choice: the fatal Tesla accident in Florida in 2016 occurred because the car's sensors failed to distinguish a white truck ahead from the bright sky behind it.
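A toy pipeline makes this garbage-in, garbage-out problem concrete. The classes, labels, and 50-meter threshold below are hypothetical, not any manufacturer's API; the point is that a single perception mislabel silently produces the wrong plan.

```python
# Illustrative sketch of a perception -> world model -> planner pipeline.
# The classes and threshold are hypothetical, not a vendor's real API.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str          # what the perception system believes it sees
    distance_m: float

@dataclass
class WorldModel:
    objects: list

def plan(world: WorldModel) -> str:
    for obj in world.objects:
        if obj.kind == "vehicle" and obj.distance_m < 50:
            return "brake"
    return "maintain_speed"

# If perception mislabels a white truck against a bright sky as "sky",
# the planner never sees a vehicle and makes the wrong choice.
misperceived = WorldModel(objects=[TrackedObject("sky", 40.0)])
print(plan(misperceived))  # maintain_speed -- garbage in, garbage out
```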
If driverless cars are only expected to reduce collisions, they aren’t sufficient for true safety. They need to act as the “ultimate defensive driver.” When humans make unsafe moves, the car should react quickly. An example is the Uber crash in Tempe, Arizona, in March 2017. A Honda driver turned left without seeing the Uber car, which was speeding through the intersection. A human driver would have slowed down, but the autonomous car wasn’t programmed to do so.
Improving Testing
Incidents like the crash in Tempe and the recent one in Las Vegas show that even when a vehicle correctly understands its situation, it may not make the safest choice. These cars follow traffic rules, but they don’t necessarily prioritize safety above all else. This is largely a consequence of how most autonomous vehicles are tested.
While basic compliance with traffic laws is essential, such as obeying signals and signs, it is only the starting point. Before autonomous vehicles take to the road, they must be programmed to handle unpredictable behavior from other drivers. Testers should treat other vehicles as potential threats and simulate extreme scenarios. For instance, if a truck is reversing toward it, what should a self-driving car do? Today it might change lanes, but in some cases it would simply wait, which is not how a human driver would act; one way to encode such cases is as scenario tests, as sketched below.
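Here is one hypothetical form such a test could take: it simulates the reversing truck and fails until the planner does something other than wait. The names are invented, and planner_response() is a toy stand-in for the real system under test.

```python
# Hypothetical scenario test: treat another vehicle as a threat and check
# that the planner does more than wait. Names and logic are illustrative.

def planner_response(gap_m: float, closing_speed_mps: float) -> str:
    """Toy stand-in for a real planner; replace with the system under test."""
    if gap_m <= 0:
        return "collision"
    return "wait"  # today's default behavior in the reversing-truck case

def test_reversing_truck():
    gap, speed, dt = 5.0, 1.0, 0.1  # truck 5 m away, backing up at 1 m/s
    while gap > 0:
        action = planner_response(gap, speed)
        if action in ("honk", "reverse", "change_lane"):
            return "PASS: planner took evasive action"
        gap -= speed * dt  # truck keeps closing; waiting doesn't help
    return "FAIL: planner waited until contact"

print(test_reversing_truck())
```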
Humans often break rules to avoid a crash, like changing lanes without signaling or pulling over. Autonomous systems need to be trained to do the same in critical moments. Only then can they truly enhance road safety.