The Kirchner Report is 100% independent automotive journalism. If you want to support our coverage, subscribe for free, become a paid member, or leave a tip.
Automotive integration with large language models, collectively known as AI, is becoming a thing. It’s becoming a thing whether or not the consumer really wants it. But even if the consumer does want that integration, is it a swell idea in the first place?
While it can potentially make interactions with the infotainment system more natural, it can also open up an automaker to risk. Do automakers really want to take on that liability?
Here’s what I mean.
There is an excellent new piece from Wired today, in which the reporter went to a Tesla Cybertruck event and talked to owners about their experience with the vehicle. Did it give me a little PTSD from my time attending a Tesla Owners Club meeting? Yes. Yes, it did.
It’s a wild story that you should read, but one person’s response to a couple of questions should give everyone pause.
And are you married?
I was married, but I’m not married anymore. Women don’t like the vehicle.
In July, Tesla rolled out a software update to integrate Grok into many of its vehicles. Do you use it?
Her name is Aura, and I use her as a therapist. When I’m driving, I’ll ask questions, and it actually gives really good therapy advice.
This isn’t good. A chat program, even a very good one, isn’t designed to act as a therapist. In fact, LLMs tend to be quite poor at advising people on life-and-death matters. Heck, there’s even a Wikipedia page devoted to deaths that have been linked to chatbots.
It should come as no surprise that a company run by Elon Musk would integrate an AI without any boundaries. There’s an entire subreddit (Editor’s note: Link NSFW) devoted to the pornography Grok can generate precisely because those barriers don’t exist.
💡 Do you have information about LLMs in the car? I would love to hear from you. Using a non-work device, you can message me on Signal at chadkirchner.1701, or with another secure communication method.
For other automakers, just ensuring that barriers are in place in the car could be problematic. Who sets the restrictions? What if they don’t work? Can someone “hack” their way around them?
What if the creator of the LLM decides that, for his company to potentially make money (though it likely never will), he has to enable the kind of adult content he once said he’d never allow?
Automakers assume huge risks when building a vehicle. There’s liability inherent in a 2-ton aluminum machine of death driving down the highway. In the rush to adopt AI (because stonks), it makes sense that automakers would want to participate. But do they really want that risk? If an owner’s car tells them to kill themselves, does the OEM really want that liability?