By Daniel Yang, M.D.

It’s undeniable that artificial intelligence and machine learning have captured the public imagination in recent years. Powered by an exponential increase in computer processing power (as predicted by Moore’s Law), applications of AI and machine learning are leading to breakthroughs in almost every field. Self-driving cars, voice-powered personal assistants, product recommendations and credit card fraud detection are just a few examples.

It’s no surprise, then, that these technologies are being applied to the medical field broadly and to medical diagnosis in particular. In fact, medical diagnosis has long been a target of AI tools. In the 1970s, a researcher at Stanford University developed an early AI system called Mycin that attempted to capture the expertise of physicians and automate their decision-making through a computer program that could diagnose infectious diseases and recommend antibiotics. The results of this expert system compared favorably with those of human physicians, but the significant investment of human expertise required, and the narrow scope of expert systems, eventually led to disappointment and disillusionment.

However, recent advances in machine learning algorithms that loosely mimic the human brain, known as deep neural networks, are demonstrating impressive progress in automated medical image recognition. Studies published in the last few years have shown that these algorithms can perform as well as, and sometimes better than, human physicians in classifying pictures of skin lesions as cancerous or benign and in identifying early damage to the retina from long-standing diabetes. These remarkable use cases are driven by the same underlying technology that helps Facebook auto-tag pictures of your friends or allows researchers at Google to spot cats in YouTube videos.
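
For readers curious what such a model looks like in code, here is a minimal sketch in Python using the PyTorch library. It is purely illustrative: the network is far smaller than the published diagnostic models, it is untrained, and the random tensor simply stands in for a real skin image.

```python
# A toy convolutional network illustrating the basic shape of automated
# image classification. Real diagnostic systems are much larger, typically
# pretrained deep networks fine-tuned on tens of thousands of labeled images.
import torch
import torch.nn as nn

class TinyLesionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Two convolutional blocks extract visual features (edges, textures).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A small classifier head maps those features to two classes:
        # benign vs. malignant.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 2),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyLesionClassifier()
# A random tensor stands in for a batch of one 224x224 RGB skin image.
fake_image = torch.randn(1, 3, 224, 224)
logits = model(fake_image)
probabilities = torch.softmax(logits, dim=1)
print(probabilities)  # meaningless until the model is trained on real data
```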

While we expect to see applications of these image recognition technologies in clinical practice in the near term, many unanswered questions and obstacles remain.

Some are technical, such as how to apply these algorithms to make sense of the reams of medical data that are written down in unstructured ways (full of medical abbreviations and mental shortcuts) and siloed in different record systems.
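
To make that challenge concrete, here is a toy Python sketch of one tiny piece of the problem: expanding clinical shorthand into plain language. The abbreviation list and the sample note are invented for illustration; real notes contain thousands of ambiguous, context-dependent abbreviations, which is why simple dictionary lookups fall short and machine learning approaches are being explored.

```python
import re

# A hand-built (and hypothetical) dictionary of common clinical shorthand.
ABBREVIATIONS = {
    "pt": "patient",
    "c/o": "complains of",
    "sob": "shortness of breath",
    "hx": "history",
    "htn": "hypertension",
    "r/o": "rule out",
}

def expand_note(note: str) -> str:
    """Replace known abbreviations (matched as whole tokens, case-insensitively)."""
    def replace(match):
        token = match.group(0).lower()
        return ABBREVIATIONS.get(token, match.group(0))
    return re.sub(r"[A-Za-z/]+", replace, note)

note = "Pt c/o SOB, hx of HTN, r/o pneumonia"
print(expand_note(note))
# -> "patient complains of shortness of breath, history of hypertension, rule out pneumonia"
```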

Other barriers are social and interpersonal. How do these technologies affect the doctor-patient relationship? How would you feel, as a patient, watching your doctor type your symptoms into a computer program and receive its recommendations? As we’ve seen in other fields, such as predictive policing and credit scoring, these technologies can unintentionally exacerbate social inequalities and codify bias. On the other hand, they can also narrow social inequalities by improving access to care and democratizing health care expertise.

And from a legal perspective, how do we regulate these new technologies, striking a balance that ensures safety and data privacy without a heavy-handedness that stifles innovation?

These are all questions that we’re exploring at the Moore Foundation as part of our early work in diagnostic excellence. In the next few years, we hope to advance the field of AI-supported medical diagnosis by encouraging experts in health care, academia, civil society and the private sector to work together on these thorny issues. In April of this year, the Moore Foundation is supporting a conference at Stanford University to help lay out a roadmap for where research, investment and implementation of new AI technologies should be directed to advance diagnostic safety and efficiency. We’ve also funded an evaluation of a promising technology called the Human Diagnosis Project, which leverages the collective intelligence of doctors to provide specialist expertise for patients and physicians in under-resourced settings; that evaluation is taking place in public health clinics around San Francisco.

After multiple cycles of hype followed by troughs of disappointment (known as “AI winters”), we’re cautiously optimistic that thoughtfully deployed AI tools will augment, not replace, human clinicians in medical diagnosis, functioning as cognitive prosthetics. Stay tuned as we continue to explore this fascinating field!

Daniel Yang, M.D. is a program fellow in the Patient Care Program at the Gordon and Betty Moore Foundation.

Help us spread the word.

If you know someone who is interested in this field or what we are doing at the foundation, pass it along.
