The Hidden Hazards of Smart Device Medical Advice
August 30, 2021
By Boris Babic
Diagnostic mobile medical apps call for increased regulatory intervention, even if they do not dispense advice or treatment.
For many of us, an electronic device can be a communications lifeline, an entertainment system and a professional networking hub. If trends continue, it may become our health advisor as well.
Direct-to-consumer (DTC) medical apps are a growing segment of the USD 10 billion market for healthcare solutions incorporating machine learning (ML) and artificial intelligence (AI). Most are designed to flag symptoms that may require attention from a healthcare professional. For instance, the Apple Watch’s heartbeat sensor periodically checks for irregular rhythms associated with atrial fibrillation (AFib), a disorder that can cause strokes and hospitalisation.
Despite their increasing accessibility to consumers, these apps have yet to generate much interest from regulators. At first glance, this may seem sensible: the apps do not claim to dispense advice or treatment, only to flag possible early warning signs.
It is short-sighted, however, to let DTC medical apps slip under the regulatory radar. As we describe in a recent article for Nature, they could turn out to have costs for which insurers or taxpayers might ultimately be responsible.
From a standard medical regulatory perspective, DTC medical apps are attractive because they cheaply reduce the risk of false negatives – people who unknowingly carry illnesses requiring treatment. But from the standpoint of safeguarding healthcare infrastructure, false positives – people who unnecessarily seek treatment – are also a problem to be reckoned with. The manifold benefits of identifying disease in its early stages, when it can be easily treated, should be weighed against the costs incurred by skittish patients booking needless clinical appointments on the advice of their smartphone or other device.
Decision theory suggests that the risk of false positives is far from negligible here. For example, a famous 1998 study found that patients believed positive diagnostic test results were much more indicative of disease than they actually were, often ignoring the associated base rate in the population. The flexibility and ease of use of DTC medical apps further heighten the probability of false positives.
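To see why base rates matter so much, consider a back-of-the-envelope Bayes’ rule calculation. The prevalence, sensitivity and specificity figures in this minimal sketch are purely illustrative assumptions, not drawn from any particular app:

```python
# Illustrative Bayes' rule sketch: how likely is disease given a positive
# flag? All numbers below are hypothetical, not taken from any real app.

def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(disease | positive result)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Even a test that is 99% sensitive and 95% specific produces mostly
# false alarms once the condition becomes rare.
for prevalence in (0.05, 0.01, 0.001):
    print(f"prevalence {prevalence:.1%}: "
          f"P(disease | positive) = {ppv(prevalence, 0.99, 0.95):.1%}")
```

Under these assumed numbers, at a prevalence of 0.1 percent fewer than 2 percent of positive flags would reflect actual disease – yet the base rate neglect documented in the 1998 study suggests users will treat each flag as close to a diagnosis.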
Consider an app that purports to scan one’s skin lesions for signs of cancer based on photos taken with a smartphone camera. Without a limit on the number of times a single lesion can be checked, there is a greater likelihood that one of the images will be flagged as requiring medical attention. When that occurs, people are likely to anchor on the one positive result. A 2010 study on genetic risk information revealed that people grossly overestimate their risk of contracting a severe illness such as oesophageal cancer once they learn they are susceptible to it.
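The arithmetic of repeated checking is straightforward. Assuming, for illustration only, that each scan of a healthy lesion carries an independent 5 percent chance of a false flag (a made-up figure), the chance of at least one flag grows quickly with the number of scans:

```python
# Illustrative only: chance that a healthy lesion is flagged at least once,
# assuming each scan is an independent check with a hypothetical 5% false
# positive rate.
FALSE_POSITIVE_RATE = 0.05

for n_scans in (1, 5, 10, 20):
    p_any_flag = 1 - (1 - FALSE_POSITIVE_RATE) ** n_scans
    print(f"{n_scans:>2} scans: P(at least one flag) = {p_any_flag:.1%}")
```

Successive photos of the same lesion are of course correlated, so these figures overstate the effect, but the direction is clear: unlimited re-checking inflates the effective false positive rate.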
Moreover, DTC medical apps are often marketed to a generally young and healthy demographic yet target relatively rare diseases such as AFib. As the base rate arithmetic above suggests, this is an ideal combination for generating false positives.
What regulators can and should do
To prevent the potentially significant costs of the false positive judgments generated by large-scale use of DTC medical apps, regulators should intervene early.
We identify three specific ways they could take action. First, they should encourage developers to perform behavioural research on how consumers respond to DTC medical apps in the real world. While medical device developers already work hard on improving the sensitivity and specificity of their diagnostic systems, without clinical trials or field research we cannot sufficiently understand how such technology will fare in the hands of imperfectly rational users.
Second, regulators could mitigate the cost of false positive verdicts by requiring that positive predictions be verified through a virtual appointment with a healthcare professional. Developers could further be required to bear a portion of the consultation costs. Such a requirement could be tied to experimental government initiatives such as Singapore’s recent telemedicine regulatory sandbox.
Third, regulators could give doctors the right to “prescribe” mobile medical apps to patients who may be at higher risk, thus keeping these apps out of the hands of the general public. In the case of AFib, the app could be activated only for patients of a certain age, or with a family history of the disorder. Something like this already exists in Germany, where healthcare costs incurred by certain medical apps are not covered unless, among other things, a doctor or insurer has prescribed their use. Our recommendation would be to rely on doctors’ judgement rather than insurers’, because medical professionals are best equipped to adjust the availability of the app in accordance with existing risks, which can significantly reduce the rate of false positives.
In sum, we aim to highlight that, absent regulatory intervention, free or cheap diagnostic medical information can generate significant social costs that have been underappreciated by policymakers. As always, there is no such thing as a free lunch.
Boris Babic is an INSEAD Assistant Professor of Decision Sciences.
Sara Gerke is Research Fellow, Medicine, Artificial Intelligence, and Law at The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and The Project on Precision Medicine, Artificial Intelligence, and the Law (PMAIL).
Theodoros Evgeniou is a Professor of Decision Sciences and Technology Management at INSEAD. He has been working on machine learning and AI for almost 25 years.
I. Glenn Cohen is the Faculty Director, Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics at Harvard Law School where he is also the James A. Attwood and Leslie Williams Professor of Law and Deputy Dean.