There’s long been talk in medicine about the need for doctors to listen to patients. Now that advice is being taken literally to diagnose concussions and other hard-to-diagnose brain disorders.
Whether caused by a blow to the head during a football game or an accident, concussions are particularly hard to identify.
A Utah-based startup is easing that process using AI and the patient’s voice to detect telltale shifts in vocal patterns — shifts human ears can’t pick up — to help doctors make the right call.
The company, Canary Speech, is building voice tests that use GPU-accelerated deep learning to pick up the subtle voice tremors, slower speech and gaps between words that may reveal brain injuries, or warn of diseases such as Parkinson’s or Alzheimer’s.
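The article doesn't disclose how Canary Speech's models work internally, but one of the cues it names, gaps between words, can be illustrated with a simple energy-based silence detector. The sketch below is a minimal, hypothetical example (the function name, threshold, and frame size are all assumptions, not the company's method): it slices a waveform into short frames and reports the fraction whose RMS energy falls below a silence threshold.

```python
import numpy as np

def pause_ratio(signal, sr, frame_ms=25, energy_thresh=0.01):
    """Fraction of frames whose RMS energy is below a silence threshold --
    a crude proxy for the gaps between words mentioned in the article.
    Thresholds here are illustrative, not clinically validated."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return float((rms < energy_thresh).mean())

# Synthetic example: one second of tone followed by one second of silence.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
speech = 0.5 * np.sin(2 * np.pi * 220 * t)
silence = np.zeros(sr)
ratio = pause_ratio(np.concatenate([speech, silence]), sr)
print(ratio)  # half the clip is silent, so this prints 0.5
```

A production system would of course work on real recordings and far richer features (pitch, tremor, articulation rate), but the frame-and-threshold pattern above is the usual starting point for pause analysis.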
What Happens on the Sidelines
“When a kid gets knocked down on the field and they’re woozy, there’s no practical and objective way to determine if they have a concussion,” said Jeff Adams, co-founder of Canary Speech.
Currently, coaches or trainers look for signs of concussion by checking injured players’ balance, memory and concentration. The standard test asks players to recall a list of words, recite numbers backwards and answer questions like “What day is it?” or “Where are we playing?” Answers are recorded on a sheet of paper.
“Our idea is to use data rather than observation to make an assessment,” said Henry O’Connell, Canary Speech CEO.
Voice Test for Concussion
Canary Speech is one of a growing number of companies and universities using patients’ voices to diagnose and predict everything from depression to heart disease to ADHD.
“The goal is to pick up warning signs earlier and treat diseases early enough to make a difference,” Adams said. Before starting Canary Speech with O’Connell, a former National Institutes of Health researcher, Adams led the team that developed technology used for Amazon’s voice-activated Echo speaker.
Later this year, the company plans to roll out a deep learning tool that coaches and trainers can use to diagnose concussions on the sidelines. It’s also planning tests of a concussion product with several NFL teams. Neither, of course, substitutes for a doctor’s diagnosis.
The company trained its neural network to look for patterns indicating concussion by recording people with and without concussions reading a script. It uses NVIDIA Tesla K80 GPU accelerators in the Amazon cloud and a speech recognition tool it developed.
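The article gives only the outline of that training setup: labeled recordings of concussed and healthy speakers, features extracted from them, and a model fit on GPUs. As a rough, hypothetical sketch of the supervised-learning step (the feature values, class means, and logistic-regression model are my assumptions, not Canary Speech's pipeline), here is a toy classifier trained on two made-up acoustic features per recording:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features per recording: [pause_ratio, words_per_second].
# Labels: 0 = healthy speaker, 1 = concussed speaker. The class means
# are invented for illustration only.
healthy = rng.normal([0.2, 4.0], 0.05, size=(50, 2))
concussed = rng.normal([0.4, 2.5], 0.05, size=(50, 2))
X = np.vstack([healthy, concussed])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    # Clip logits to keep np.exp well-behaved.
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

# Logistic regression fit with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(accuracy)  # the toy classes are well separated, so this reaches 1.0
```

A real deployment would replace the toy features with learned representations and the logistic regression with a deep network running on GPUs, but the train-on-labeled-recordings loop is the same in shape.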
Early Warning for Alzheimer’s
Canary Speech is also tackling Alzheimer’s disease. No test can definitively diagnose Alzheimer’s, but O’Connell said AI could analyze how patients talk to identify warning signs of the disease. For example, Alzheimer’s patients often use fewer and simpler words as the disease progresses.
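The "fewer and simpler words" cue is easy to make concrete. Below is a minimal sketch, using two standard lexical measures (type-token ratio and mean word length); the function and sample sentences are illustrative assumptions, not Canary Speech's actual test:

```python
def lexical_markers(transcript):
    """Type-token ratio (vocabulary diversity) and mean word length --
    two simple proxies for the lexical simplification the article
    describes in progressing Alzheimer's."""
    words = transcript.lower().split()
    ttr = len(set(words)) / len(words)
    avg_len = sum(len(w) for w in words) / len(words)
    return ttr, avg_len

rich = "the intricate mechanism gradually deteriorated beyond repair"
plain = "the thing broke and the thing broke again and again"
print(lexical_markers(rich))   # higher on both measures
print(lexical_markers(plain))  # lower diversity, shorter words
```

Tracking such measures across visits, rather than at a single point, is what would make them useful as an early-warning signal.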
Diagnosing concussions and Alzheimer’s is just the beginning, Adams said.
“In the future, you’ll give a speech sample along with a blood sample as part of your checkup,” he said. “This is going to change the way we do medicine.”
Image courtesy of Keith Allison via Flickr.
To learn more about how AI computing is changing healthcare and other industries, join us for the GPU Technology Conference, May 8-11 in Silicon Valley. Register early and save.
The post AI Checks Your Head By Listening to What You Said appeared first on The Official NVIDIA Blog.