Tesla Can't Perfect Autopilot Without a Few Deadly Crashes
This article by Zachary Mider for Bloomberg may be of interest to subscribers. Here is a section:
Key to her argument is an insight about how cars learn. We’re accustomed to thinking of code as a series of instructions written by a human programmer. That’s how most computers work, but not the ones that Tesla and other driverless-car developers are using. Recognizing a bicycle and then anticipating which way it’s going to go is just too complicated to boil down to a series of instructions. Instead, programmers use machine learning to train their software. They might show it thousands of photographs of different bikes, from various angles and in many contexts. They might also show it some motorcycles or unicycles, so it learns the difference. Over time, the machine works out its own rules for interpreting what it sees.
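For readers curious what that training process looks like in practice, here is a minimal, hypothetical sketch in Python using PyTorch. The folder layout, class names and model choice are my own illustrative assumptions, not Tesla's actual pipeline:

```python
# Minimal sketch of supervised image classification, the kind of
# machine learning the excerpt describes: the network is shown many
# labeled photos and works out its own rules for telling them apart.
# Directory names and classes here are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Label photos by folder name, e.g. data/train/bicycle/,
# data/train/motorcycle/, data/train/unicycle/ -- the "bikes from
# various angles" and the counter-examples in the excerpt.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A small off-the-shelf convolutional network; its final layer is
# sized to the number of classes found in the training folders.
model = models.resnet18(num_classes=len(train_set.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Each pass over the photos nudges the weights so the network's own
# internal rules separate the classes a little better.
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point is the one the article makes: nobody writes a rule saying "this is a bicycle"; the network adjusts its own parameters as it is shown more labeled photographs.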
The more experiences they have, the smarter these machines get. That’s part of the problem, Kalra argues, with keeping autonomous cars in a lab until they’re perfect. If we really wanted to maximize total lives saved, she says, we might even put autonomous cars on the road while they’re still more dangerous than humans, to speed up their education.
Even if we build a perfect driverless car, how will we know it? The only way to be certain would be to put it on the road. But since fatal accidents are statistically rare—in the U.S., about one for every 86 million miles traveled—the amount of necessary testing would be mind-boggling. In another Rand paper, Kalra estimates an autonomous car would have to travel 275 million failure-free miles to prove itself no more deadly than a human driver, a distance that would take 100 test cars more than 12 years of nonstop driving to cover.
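The arithmetic behind that Rand figure is easy to verify. A back-of-the-envelope check, assuming (my assumption, for illustration) the test fleet averages 25 mph around the clock:

```python
# Back-of-the-envelope check of the Rand estimate quoted above.
fleet_size = 100         # test cars (from the article)
target_miles = 275e6     # failure-free miles required (from the article)
avg_speed_mph = 25       # assumed average speed, driving nonstop

miles_per_year = fleet_size * avg_speed_mph * 24 * 365
years_needed = target_miles / miles_per_year
print(f"{years_needed:.1f} years")  # ~12.6 years, i.e. "more than 12 years"
```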
At a Berkshire Hathaway annual meeting a few years ago, Warren Buffett stated that the relentless upward trajectory of insurance premiums was partly driven by texting while driving. As the pungent aroma of cannabis wafts through the streets of California and other US states, I would opine that driving while high is another hazard likely to push actuarial calculations of risk higher still.
That’s not great news for the automotive business because insurance rates are already at levels that pressure family budgets. For example, a RAV4 costs about $300 a month to lease but $179 a month to insure. Paying almost 60% of the lease rate ($179 / $300 ≈ 60%) just to insure the car is economically insane, but it is the reality of car ownership.
Therefore, the true promise of autonomous vehicles is to change that arithmetic. People are dangerous drivers and we are liable to be distracted. On top of that, there is a bull market in licentiousness which shows no sign of slowing down, and that further increases risk. Outsourcing responsibility for driving to an artificial intelligence is a powerfully attractive sales pitch when people start to think about how much wasted time they spend in their cars. That promise is one of the most abiding stories of this bull market. Like all manias, the promises are eventually delivered upon, but we need to monitor the price action for evidence of an imbalance between supply and demand that suggests people are willing to put their money where Elon Musk’s mouth is.
Tesla continues to firm from the $200 area but will need to sustain a move above the trend mean to confirm a return to demand dominance.
Alphabet/Google has been ranging for nearly two years and is currently testing the upper boundary. It needs to hold the $1,000 level if the medium-term bullish environment is to continue to be given the benefit of the doubt.
The S&P 500 Insurance Index is back testing the region of the trend mean. The sector has been a major beneficiary of the bull market in bonds, and the risk of a further slowdown is potentially supportive for insurers’ bond portfolios but negative for demand for their products. The Index needs to hold the 370 area if the decade-long bull market is to continue to be given the benefit of the doubt.