During the last half-mile of the trip, the Tesla came to a stop at a red light, only to then drive through the intersection before the light turned green. There were no cars or people nearby, but the incident still gave the writers pause.
“At this point,” they wrote, “we thought the winner was clear.”
If a student taking a driving test did this, would they pass?
Why should AI not be graded on the same scale?
Maybe the statement "Some guys have no standards when it comes to things they like" should be what we discuss.
Duce630
(DustinK - Damn it feels good to be a Cougar. -Dwight Davis)
42
A new(?) entity in the race, Zoox by Amazon, is seen as the primary rival to leader Waymo, at least per Forbes. Not so sure of that name choice, Zoox (pronounced "zooks"), but I guess they put their best minds on it.
Zoox intends to launch its commercial robot ride service late this year in Las Vegas, with San Francisco, Austin, Miami, Los Angeles and Atlanta to follow. Rather than loading up existing vehicles with sensors and computers as Waymo has, Zoox has built a purpose-designed vehicle from the ground up.
And here it is: a $243 million judgment against Tesla over a death by FSD in 2019. Needless to say, they are planning an appeal.
Once again it appears these AI-controlled cars fail to comply with posted road signs. I'm just baffled this is still an issue for them six years later. They simply fail to "see" the signs or honor them. Frustrating that this still happens.
A Florida jury on Friday found Tesla (TSLA.O) liable to pay $243 million to victims of a 2019 fatal crash of an Autopilot-equipped Model S, a verdict that could encourage more legal action against Elon Musk’s electric vehicle company.
McGee had reached down to pick up a cellphone he dropped on his car’s floorboard and allegedly received no alerts as he ran a stop sign and stop light before hitting the victims’ SUV.
“This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver - from day one - admitted and accepted responsibility.”
Apples and oranges: two different cases with different standards for liability. OJ was found responsible in the wrongful-death civil suit, and that bankrupted him.
It would be an interesting experiment to run all the data through all the AI "juror models" and see what they say.
Come on, there are plenty of examples of ALL types of juries getting it wrong. It’s so frustrating that you hyper-focus on one little thing and fail to see the bigger picture. Are you an engineer or a lawyer?
The service has to be rock solid and consistently perform better than a human under similar environmental conditions. That is the standard we should agree on. The leading self-driving company uses robust sensor array packages, as do most others (Aurora, etc.). Tesla is the odd man out in its choice of technology for self-driving.
Just judging by the typical Houston driver, it’s 10000% better. A robotaxi won’t cut in line at the end of a backup, stop on the freeway to exit from the left lane, turn left from the right lane, skip its turn signals, get distracted by a phone, tailgate, run red lights, cut people off while making a right on red, drive on the shoulder, or use the HOV lane with just one person aboard. These are things I see done just backing out of my driveway!
No argument from me that many humans exercise poor decision-making while driving. That's one reason I’ve advocated for computer control since before most people even knew what the first AI programming language was called.
The trouble, with ample evidence out there, is that AI STILL has trouble recognizing and thus following road signs and markings. None of them seem to have mastered that aspect. There is something fundamentally wrong when AI fails these simple tests.
Do a Google search, or just read the postings in this thread.
While AI in self-driving cars has made significant strides, observing traffic signs and lights presents unique challenges:
Variability and ambiguities: Real-world traffic signs can be faded, obscured by weather or vegetation, damaged by vandalism or accidents, or simply vary in design across regions.
Sensor limitations: Different sensors have different strengths and weaknesses. Lidar, for instance, cannot read the color of a traffic light, so that task falls to cameras, which can struggle with glare, low light, or distant signals.
Adversarial attacks: Research has shown that slight, even imperceptible, alterations to road signs, like adding a few stickers or blobs of paint, can cause a self-driving car’s AI to misinterpret or completely miss them.
Contextual challenges: AI systems need to not only detect traffic signs and lights but also understand their meaning in the context of the current road conditions and traffic flow.
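The adversarial-attack point above is easy to demonstrate even without a neural network. Here is a toy sketch (the prototypes and the nearest-match rule are invented for illustration, not any real perception stack): a classifier labels a 3x3 binary patch by its closest prototype, and flipping just two pixels, the "stickers," changes the predicted sign.

```python
def hamming(a, b):
    """Count positions where two equal-length patches differ."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 3x3 sign prototypes, flattened row by row.
PROTOTYPES = {
    "stop":  (1, 1, 1, 1, 0, 1, 1, 1, 1),   # ring shape
    "yield": (0, 1, 0, 1, 1, 1, 1, 1, 1),   # filled triangle
}

def classify(patch):
    """Nearest-prototype classifier: pick the label with minimal Hamming distance."""
    return min(PROTOTYPES, key=lambda name: hamming(patch, PROTOTYPES[name]))

# A clean "stop" patch is classified correctly.
clean = (1, 1, 1, 1, 0, 1, 1, 1, 1)
print(classify(clean))        # stop

# Flip two of nine pixels (the "stickers") and the label changes.
stickered = list(clean)
stickered[0] = 0
stickered[4] = 1
print(classify(tuple(stickered)))   # yield
```

The same mechanism scales up: real attacks exploit the fact that a high-dimensional decision boundary can sit surprisingly close to a correctly classified input, so a perturbation far too small for a human to notice can cross it.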
Consequences of these failures
These failures can have severe consequences, including:
Accidents: Misinterpreting a stop sign or a traffic light can lead to collisions with other vehicles, pedestrians, or objects.
Traffic delays and disruptions: A stalled self-driving car due to confusion about traffic signs can block intersections and impede traffic flow, affecting other road users.
Safety risks: Inability to accurately detect and react to signs and signals can jeopardize the safety of passengers, other drivers, and pedestrians.