Our cars are increasingly smart and connected, with terrible privacy protections, and now we can look forward to automated sanctions when our speech patterns show we are driving while intoxicated. It all depends on how the research, and the solutions built on it, are put to use. For a teenage driver, a car that refused to start until their speech clarity returned to normal would be useful. Maybe ping the parents to make them aware of their teen's whereabouts while it's at it.
Even a speech clarity test before driving at night or in adverse weather could have some benefit. Some may not like the idea of their car acting like a nanny, but there is still some value in it. The risk of automating such things is that people will find workarounds, and the nanny technology will need to stay ahead in the cat-and-mouse game these things tend to become. Even the abstract of this research gives one pause about fairness to non-native English speakers: maybe a person sounds garbled and drunk to the model even when stone-cold sober.
Devices such as mobile phones and smart speakers could be useful for remotely identifying voice alterations associated with alcohol intoxication, which could be used to deliver just-in-time interventions, but data to support such approaches for the English language are lacking. In this controlled laboratory study, we examine how well spectrographic features of English speech identify alcohol intoxication.
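The abstract does not say which spectrographic features the authors used or how they classified them, but the general idea is straightforward: slice the audio into short frames, compute a power spectrum per frame, and summarize it into numbers a classifier can use. A minimal sketch of that kind of pipeline, assuming a hypothetical `spectrogram_features` helper and simple summary statistics (not the study's actual feature set):

```python
import numpy as np

def spectrogram_features(signal, sample_rate=16000, frame=512, hop=256):
    """Crude spectrographic summary of a speech signal.

    Illustrative only: frames the signal, takes a windowed power
    spectrum per frame, and reduces it to two summary numbers.
    """
    n_frames = 1 + (len(signal) - frame) // hop
    window = np.hanning(frame)
    frames = np.stack(
        [signal[i * hop : i * hop + frame] * window for i in range(n_frames)]
    )
    # Power spectrogram: one row per frame, one column per frequency bin.
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(frame, d=1.0 / sample_rate)
    # Spectral centroid per frame -- a rough proxy for where the
    # spectral "mass" sits; slurred speech could plausibly shift it.
    centroid = (spec * freqs).sum(axis=1) / (spec.sum(axis=1) + 1e-12)
    return {
        "mean_centroid_hz": float(centroid.mean()),
        "centroid_std_hz": float(centroid.std()),
    }

# Usage with a synthetic 440 Hz tone standing in for recorded speech;
# a real system would feed these features into a trained classifier.
t = np.linspace(0, 1, 16000, endpoint=False)
feats = spectrogram_features(np.sin(2 * np.pi * 440 * t))
```

For a pure tone the mean centroid lands near the tone's frequency; for real speech these summaries would be one small part of a much larger feature vector.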