Amazon has reportedly filed a patent that would allow its artificially intelligent virtual assistant, Alexa, “to notice a user’s illness by detecting a change in their voice.” The most obvious use for this is, of course, advertising. The Next Web speculates, “For instance, if the user has a sore throat, it [Alexa] might play a cough syrup ad or suggest a restaurant to order chicken soup from.”
According to published reports, Alexa could help not only with coughs and colds but also with emotional and mental well-being and even simple boredom.
And this is all coming sooner than we might think. Matt Hancock, the United Kingdom’s Health Secretary, announced in a July speech that the National Health Service (NHS) is working with Amazon to provide accurate and reliable answers to health questions: “Alexa, my back hurts, what should I do?” However, as the patent filing indicates, Amazon is looking toward proactively diagnosing illnesses, not simply offering advice when asked.
And, of course, this won’t be unique to Alexa for long. You know that Siri, Cortana, Bixby, and all the rest will soon be making similar offerings. In fact, the New York Times reports that tech start-up Kinsa has been selling data from its internet-connected thermometers to Clorox.
The data showed Clorox which ZIP codes around the country had increases in fevers. The company then directed more ads to those areas, assuming that households there may be in the market for products like its disinfecting wipes.
Welcome to the Internet of Things (IoT). We see that several people in your area are running a fever. Wouldn’t you like some disinfecting wipes?
The immediate push-back falls under the heading of “growing concerns about privacy.” Assurances are made that all of the data is aggregated and anonymous, but we know that assurances and reality do not always match up. Privacy concerns are valid.
To me, however, all this AI diagnosis of and (suggestions for) treatment raises significant questions regarding the doctor-patient relationship. Is it a good thing that AI is being added into the mix? How much should we allow AI to do? What do we gain with AI? What do we lose?
To put it another way, is there something uniquely human about the physician-patient relationship that is irreducible?
While it’s true that there is in all of this an element of trying to harness or leverage the latest technological developments in order to improve health and healthcare, we have also been going through a large cultural shift with respect to medicine. We are moving from (and perhaps have nearly finished moving from) a covenantal view of medicine to a contractual or transactional view of medicine.
Under the covenantal view, the physician, other medical providers at all levels, patients, and patient families view themselves in an ongoing relationship of both giving and receiving, of mutual respect, of seeking understanding, and of looking at the larger place of the practice of medicine in the overall scheme of life and community.
These days, however, society at large tends to take a more contractual or transactional view: I’m arranging for (contracting with) you to help me get well. Or worse, take my money and make me exactly the way I want!
My suggestion is that this shift from a covenantal to a contractual to a transactional view of medicine is unhelpful and unhealthy all on its own. Adding AI into the mix will not help things at all. And yet, perhaps the prospect of AI in medicine will give us pause to think about both AI and our view of medicine.
Can we, first of all, reclaim a covenantal view of medicine? I certainly hope we can!
Second, is it possible to view AI as an aid to the covenantal view of medicine? Might AI help free the doctor up to concentrate on the relational aspects of medicine? By this I do not mean simply the ability to examine the patient’s body, posture, and general demeanor. Certainly, AI that is equipped not just with microphones but also with cameras and other sensors can take into account these physical elements. (See, for example, “The Walabot Home is like a smoke detector for senior falls.”)
Rather, I mean that the medical provider would be not only able but would indeed be eager to practice the virtue of presence with his or her patients. The virtue of presence recognizes and responds to the importance of “being there for and with the other . . . being one’s self for someone else . . . refusing the temptation to withdraw mentally and emotionally; [and] on occasion putting our own body’s weight and warmth alongside the neighbor, the friend, [the patient,] the lover in need.”
In what ways might AI help such an encounter? In what ways might it hinder, or even threaten to replace, such an encounter? These are the kinds of questions we need to consider on a societal level as well as on an individual level. How much of my medical care am I willing to turn over to AI? What can I do to maintain that face-to-face, body-to-body interaction with my physician and other healthcare providers?
The time is coming, and coming soon. Now is the time to consider and make decisions about just these things.
Image: Piyush maru [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)]