Computers are already better than doctors at diagnosing some diseases

Machine-assisted medicine is so good at figuring out what ails you that frail patients need not undergo arduous tests and Alzheimer’s disease no longer comes as a surprise

By Jonathan Kay

To understand the revolutionary innovations in medical technology being unleashed by artificial intelligence, it’s useful to start with the lives — and deaths — of two legendary actors: Paul Newman and Gene Wilder.

They both died, eight years apart, at the age of 83. But while Newman kept up an active public life until shortly before his death, Wilder spent his final years in seclusion. Only later did his family reveal that he had suffered from Alzheimer’s disease.

Wilder was able to keep that fact private, but AI-powered technology developed in Canada now makes it possible to analyze interviews he recorded, from the early 1970s onward, and chart linguistic symptoms associated with cognitive impairment. For instance, as the years went by, he tended to use shorter noun phrases and fewer clauses per sentence. He also swapped out nouns for pronouns with greater frequency.

By contrast, analysis of Newman’s interviews over a similar period does not reveal such a pattern. He died in 2008, with no signs of cognitive impairment.

The comparison of the two screen stars was conducted using software created by WinterLight Labs Inc., a startup based out of Johnson & Johnson’s JLABS incubator in Toronto that brings together experts in speech, dementia, neurology and computer science.

Using machine learning — a subset of AI in which computers construct their own decision-making algorithms through the iterative analysis of banked data — WinterLight’s software parses recordings for hundreds of characteristics, including the length of pauses, the types of verbs used, irregularities in frequency and loudness, changes in vowel acoustics, reduced syntactic complexity and instances of repetition.
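To make the idea concrete, here is a minimal sketch of what the text side of such feature extraction could look like. This is not WinterLight’s code: the word-list used to spot pronouns is a crude stand-in for a real part-of-speech tagger, and the three features shown (sentence length, vocabulary diversity, pronoun rate) are just illustrative examples of the hundreds the article describes.

```python
import re

# Crude stand-in for a part-of-speech tagger: a small pronoun word list.
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
            "him", "her", "them", "this", "that"}

def transcript_features(text: str) -> dict:
    """Compute a few illustrative linguistic features from a transcript."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    pronoun_count = sum(w in PRONOUNS for w in words)
    return {
        # Shorter sentences can signal reduced syntactic complexity.
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        # Lower type-token ratio = less diverse vocabulary.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        # Swapping nouns for pronouns raises this rate.
        "pronoun_rate": pronoun_count / max(len(words), 1),
    }

feats = transcript_features("He took it there. Then he gave it to them. It was that.")
```

A production system would add acoustic features (pause length, loudness, vowel acoustics) extracted directly from the audio signal, which plain text cannot capture.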

AI-powered systems could give advance warning of certain illnesses

Each variable, taken in isolation, may have little or nothing to say about a patient’s cognitive state. But when all are processed through a matrix that incorporates a mathematical model of their inter-relationship, the system can alert caregivers to markers for Alzheimer’s or Parkinson’s disease, as well as depression, multiple sclerosis and schizophrenia — sometimes years before a patient exhibits overt symptoms.
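The combining step the paragraph describes can be sketched with a simple logistic model. The weights below are invented for illustration — a real system would learn them from banked patient data — but the sketch shows the key point: each input nudges the score only slightly, and it is the weighted combination that separates low-risk from high-risk profiles.

```python
import math

# Illustrative, made-up weights: positive values push the risk score up,
# negative values push it down. A real model would be trained, not hand-set.
WEIGHTS = {"pronoun_rate": 3.0, "mean_pause_sec": 1.5, "clauses_per_sentence": -2.0}
BIAS = -1.0

def risk_score(features: dict) -> float:
    """Combine weak individual markers into one probability-like score."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps z to (0, 1)

# Two hypothetical speakers: similar on any single feature scale,
# but clearly separated once the features are combined.
low  = risk_score({"pronoun_rate": 0.1, "mean_pause_sec": 0.3, "clauses_per_sentence": 2.0})
high = risk_score({"pronoun_rate": 0.5, "mean_pause_sec": 1.2, "clauses_per_sentence": 0.8})
```

In practice such a score would not be a diagnosis; it would flag a patient for a clinician’s attention, ideally with the contributing features listed — which is exactly the explainability point Kaufman raises below.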

In one 2016 study, co-authored by WinterLight founder Frank Rudzicz, the approach achieved more than 80-per-cent accuracy in distinguishing individuals with Alzheimer’s from healthy controls.

Computer scientist Kathleen Fraser, also a WinterLight founder, has applied similar tools to achieve almost perfect accuracy in detecting patients with primary progressive aphasia, another degenerative neurological disease.

WinterLight has been enlisted in pilot projects at senior-care homes operated by Revera, VHA Home HealthCare and Shannex, as well as in clinical trials conducted by two large pharmaceutical companies. In so doing, its scientists have come to understand that accurate results aren’t enough: The science has to be presented in a way that patients and caregivers find useful.

“People want to know why our software produced a certain result,” says chief executive officer Liam Kaufman. “You have to explain what behaviours were measured. It isn’t enough just to provide the numerical output.”

WinterLight has yet to receive approval from regulators to market its software as a diagnostic medical device — a process that can move much more slowly than the creation of new software technology. But Kaufman and Rudzicz are optimistic.

On one hand, “the results from our tests could help doctors determine the optimum level of staffing at a senior facility,” Kaufman explains.

“Or in other cases,” he adds, “you might see a situation where a son or daughter is told that their parent is deteriorating — but they don’t believe it. There’s a sense of denial, and so the patient doesn’t get the best care until the situation really deteriorates.”

How can a computer program defuse this denial? “Because our system is simple, objective and quick, it can be used to provide quantitative data that can be discussed with family members.”

Shyam Ramchandani, vice-president of clinical affairs for Analytics 4 Life, holds the company’s CorVista diagnostic device. CorVista is limited by federal law to investigational use; it is not available for commercial distribution.

Computers already outperform medical professionals in some kinds of diagnostic tests

The use of machine-learning technology to assist in medical investigations isn’t new. Computers now outperform dermatologists in scanning lesions for skin cancer. And a Stanford-led group has created an algorithm that trumps cardiologists in detecting heart arrhythmias on the basis of electrocardiograms. But these applications represent an extrapolation of existing consumer-oriented technologies — such as facial recognition and photo classification — that focus on a single diagnostic artifact. WinterLight exemplifies the ongoing expansion of this machine-learning approach to broader and more complex types of inputs.

A product developed by Analytics 4 Life Inc., another startup based out of the JLABS incubator in Toronto, illustrates the same ambitious approach. The company’s CorVista device and software package applies machine-learning techniques and three-dimensional imaging to detect coronary artery disease (CAD) on the basis of skin-surface electrode measurements and other physiological data. The technology is still being tested in clinical studies. But the ultimate goal, says Shyam Ramchandani, vice-president of clinical affairs, is to allow doctors to investigate the presence of CAD without subjecting patients to radiation, injections or exhausting clinical procedures.

“Existing diagnostic techniques in this area typically require the injection of radioactive dyes and other contrast elements into a patient’s bloodstream,” Ramchandani explains. “Then the patient might get on a treadmill to get the heart rate up, and you take images to track the blood flow. Some of the people who need this test can’t do it ‒ they’re not healthy enough — so they have to take drugs that artificially boost their heart rate. The whole process is uncomfortable, and can take many hours. And only a small percentage of these people even need treatment. We’re creating a better alternative.”

Machine-learning diagnostics could sometimes be the kinder, gentler option

In pediatric medicine, in particular, the less invasive and less arduous diagnostic strategies facilitated by machine learning will offer significant improvements in the way patients receive care.

During a presentation at the Elevate Toronto innovation conference this past summer, Anna Goldenberg, an assistant professor of computer science in the University of Toronto’s computational biology group, described the rigorous oncological monitoring regime required by sufferers of Li-Fraumeni syndrome, a hereditary condition that compromises the body’s ability to suppress tumours. This regime includes regular full-body MRI scans, which are stressful for adults, and sometimes almost impossible for young children, who cannot lie still for the duration of the test.

Responding to this challenge, Goldenberg and her colleagues used machine-learning software to identify those Li-Fraumeni patients most likely to be diagnosed with cancer before age 6.

As in all areas of AI, the algorithms are only as good as the numbers that inform them. And so, Goldenberg is always on the hunt for bigger and more demographically diverse data sets. But already, her non-invasive AI-powered surveillance model is approaching the accuracy level of traditional diagnostic methods, all without the associated cost and trauma.

Goldenberg’s presentation was titled Will Dr. Robot Ever See You? It’s an apt question: Like all modern workers, health providers and researchers are grappling with how much of their professional role in our society will migrate from human agency to computer algorithm. While doctors once were assumed to be largely protected from the trend toward automation, that is changing: As machine-learning technology is used to automate the search for symptoms of diabetic retinopathy in eye scans, for instance, we may one day need fewer ophthalmologists.

Some of the new technologies also may change the relationship between patients and caregivers — sometimes in unsettling ways.

For example, WinterLight’s software may lead some patients to fear that every word they utter will be scrutinized for evidence of mental deterioration. Consider Gene Wilder’s decision not to go public with his cognitive decline in his final years. His privacy was clearly precious to him during that period — and it might well have unnerved him to know that even stray utterances could be used to uncover his secret.

“I wonder about the dynamic between the nurse and the patient,” says Samir Sinha, director of geriatrics at Mount Sinai Hospital in Toronto. “When people are losing their memory, they get paranoid, they get anxious. They get upset and depressed — because they know that this can be used as evidence to take away their freedoms. And if you start co-opting human communication, a fundamental way that people get pleasure and companionship, they might just keep their mouths shut.”

Liam Kaufman, WinterLight’s CEO, has thought about such issues. “White coat syndrome is a real problem, and not just with cognitive assessment,” he acknowledges. “Just going to the hospital, it turns out, can make your blood pressure go up.”

But in his view, much of the stress arises from the irregular, high-stakes nature of medical visits. “Right now, someone may go to the hospital once per year — and they stress out and clam up. Our theory is that, by doing your assessment in the comfort of your own home, and by doing it frequently — every few months — it becomes a habit. And the stress actually goes down.”

Because this technology is new, both patient-response theories are untested. But these issues will need to be explored, as AI-enabled tools take a more prominent place in our health systems.

In geriatrics, as in most areas of medicine, the future likely will take the form of a partnership between the old and the new, humans and machines — with doctors informing their judgment on the basis of improved diagnostic analysis, without alienating patients in the process.