
Ethical Prospects for Digital Medicine: A Catholic Appraisal

November 20, 2020
Feature Article
Editor’s Note: A version of this article was presented at CHA’s Theology and Ethics Colloquium in St. Louis, March 11-13, 2020.

I. THE WIZARD OF OZ, OR GOD AND THE SOUL

The story of how Netflix disrupted the movie rental industry is well known. Brick-and-mortar stores had a limited selection of films, finite rental periods and punitive late fees. Netflix allowed customers to rent from a much larger library and to keep films as long as they liked, without penalty, for a monthly subscription fee. The early version of Netflix used an algorithm called ‘Cinematch’ to recommend new movies to subscribers. It used viewer ratings to predict future rentals by comparing a user’s rental history with those of other subscribers.
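
Since Netflix has never published Cinematch’s internals, the mechanics are worth making concrete. Below is a minimal, hypothetical sketch of the general technique the description above implies (user-based collaborative filtering). Every user, title, and rating is invented, and a real system would add normalization, regularization, and far more data.

```python
# A hypothetical ratings matrix. Cinematch's real inputs were millions of
# one-to-five-star ratings; three users and four titles stand in for them.
from math import sqrt

ratings = {
    "ann":  {"Vertigo": 5, "Clueless": 2, "Alien": 4},
    "ben":  {"Vertigo": 4, "Clueless": 1, "Alien": 5, "Heat": 4},
    "cara": {"Vertigo": 1, "Clueless": 5, "Heat": 2},
}

def similarity(a, b):
    """Cosine similarity over the titles both users have rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][t] * ratings[b][t] for t in shared)
    norm_a = sqrt(sum(ratings[a][t] ** 2 for t in shared))
    norm_b = sqrt(sum(ratings[b][t] ** 2 for t in shared))
    return dot / (norm_a * norm_b)

def predict(user, title):
    """Similarity-weighted average of other users' ratings for `title`."""
    votes = [(similarity(user, other), theirs[title])
             for other, theirs in ratings.items()
             if other != user and title in theirs]
    total = sum(w for w, _ in votes)
    return sum(w * r for w, r in votes) / total if total else None

print(predict("ann", "Heat"))  # ~3.3; recommend if above some threshold
```

Note that the model sees only the numbers; it knows nothing about why a film earned its stars, a limitation we return to below.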

For simplicity’s sake, let’s call this an ‘analog’ model of data analysis. Analog models have three characteristics. First, the model’s data is derived from the observation of discrete events. In a grocery store, you would examine receipts, whereas Cinematch tracked rentals and ratings. Second, the collected data is public. This means there is an epistemic symmetry between the customer and retailer: both are equally aware of when the receipt is made, what it records, where it can be accessed, and so on. Public data therefore tends to be limited in its depth and scope, and it is often a proxy for what we’re actually trying to represent. In Cinematch, one person’s five-star rating might indicate appreciation of a complex plot, while another’s might reflect the number of knock-knock jokes in the film. This is why we need qualitative reviews to help us label and understand the meaning of quantitative ratings.

Analog data can lead to interesting generalizations and predictions — e.g., people tend to rent more rom-coms around Valentine’s Day — but it doesn’t answer counterfactual questions, such as, “Would the contents of your queue be different if you browsed Netflix’s selection on Sunday after church rather than before your Friday night off?” We can only answer such questions using randomized controlled trials, which help us isolate variables, identify causes, and generate explanations — the mark of science proper.1 Only then, with an explanation in hand, can we deliberately manipulate variables to affect specific outcomes. This goal is the third mark of analog data analysis.
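
To make the contrast concrete, here is a toy randomized experiment in the spirit of the A/B tests Netflix is known to run (see note 1). Both the assignment and the outcome are simulated with invented numbers; in a real trial only the assignment would be under our control, and the outcome would be observed.

```python
# A simulated randomized trial: the browsing day is assigned by coin flip,
# so any difference in queue contents can be attributed to the day itself.
import random

random.seed(0)
users = list(range(1000))
assignment = {u: random.choice(["friday", "sunday"]) for u in users}

def simulated_queue_size(arm):
    # Invented outcome model: Sunday browsers queue one more title on
    # average. In a real trial this would be measured, not simulated.
    return random.gauss(6.0 if arm == "sunday" else 5.0, 2.0)

outcomes = {u: simulated_queue_size(arm) for u, arm in assignment.items()}

def arm_mean(arm):
    values = [outcomes[u] for u in users if assignment[u] == arm]
    return sum(values) / len(values)

effect = arm_mean("sunday") - arm_mean("friday")
print(f"estimated effect: {effect:+.2f} titles")  # close to +1 by design
```

Because the coin flip, not the users’ habits, determines who browses when, the difference in means can be read causally, which no amount of passively collected rental data allows.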

In 2007, however, Netflix started its streaming service and inaugurated a new frontier of data analysis for the company. In a 2012 blog post, senior Netflix engineer Xavier Amatriain described the change this way:

Streaming has not only changed the way our members interact with the service, but also the type of data available to use in our algorithms. For DVDs, our goal is to help people fill their queue with titles to receive in the mail over the coming days and weeks; selection is distant in time from viewing, people select carefully because exchanging a DVD for another takes more than a day, and we get no feedback during viewing. For streaming, members are looking for something great to watch right now; they can sample a few videos before settling on one, they can consume several in one session, and we can observe viewing statistics such as whether a video was watched fully or only partially.2

According to Amatriain, what Netflix now observes and measures is no longer a decision, a discrete event or behavior, but rather the process of deliberation, selection, and consumption — what you hovered over but didn’t choose, what you clicked on a whim, which websites you visited before arriving at Netflix, and so on. Netflix’s new model continuously analyzes live user behavior, generating reams of contextual data with thousands of data points. Let’s call this digital data analysis.

Digital data analysis differs from its analog predecessor in three ways. First, as we’ve indicated, it relies on contemporaneous surveillance of deliberative processes rather than ex post tracking of discrete events. Second, it surveils private processes, meaning that there is an epistemic asymmetry between observers and users, who are usually unaware of the quality, quantity or scope of the data being collected, where and how it is being stored, whether it can be accessed, how it is being used, and so on. Our behavior feels private because we are unaware of the enormous surplus of data we’re generating for unknown persons, processes, and purposes. Third, given tighter correlations between circumstantial data points and human behavior, the goal of the research shifts as well. Its aim is no longer to affect behavioral outcomes or decisions, but to manipulate the deliberative process itself.
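
The structural difference between the two models can be seen in the records each one stores. A schematic sketch follows; the field and event names are illustrative, not Netflix’s actual schema.

```python
# Schematic records for the two models. The analog model keeps one row per
# completed, publicly visible event; the digital model logs the private
# deliberative process itself. Field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AnalogRecord:
    user_id: int
    title: str
    rating: int       # the ambiguous five-star proxy discussed above

@dataclass
class DigitalEvent:
    user_id: int
    timestamp: datetime
    action: str       # "hover", "preview", "abandon", "pause", ...
    context: dict     # device, referrer, time of day, session history, ...

# One viewing session emits hundreds of such events; the viewer sees none
# of them. That invisibility is the epistemic asymmetry described above.
session = [
    DigitalEvent(7, datetime.now(), "hover",
                 {"title": "Hamlet", "row": 2, "referrer": "search"}),
    DigitalEvent(7, datetime.now(), "abandon",
                 {"title": "Hamlet", "seconds_hovered": 4}),
]
print(len(session), "events captured without a single 'decision'")
```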

Now if you’ll allow me to wax grandly philosophical, what seems to attract us to the digital age is the same desire that drove Augustine to theology: the desire to know “God and the soul” (Soliloquies, I.7). As ubiquitous surveillance increasingly renders everything as data, algorithms increasingly embody Stephen Wolfram’s vision that all complex systems are fundamentally computational and therefore knowable, comprehensible using the right method.3 They herald the end of mystery. As Ed Finn writes, “The concept of universal computation encodes at its heart an intuitive notion of ‘effective’: achievable in a finite number of steps, and reaching some kind of desired result. From the beginning, then, algorithms have encoded a particular kind of abstraction, the abstraction of the desire for an answer.”4 In other words, Netflix’s radical expansion of the ‘calculable’ contains in germ Augustine’s desire to render transparent both the cosmos and the human psyche. As The Atlantic’s Alexis Madrigal writes, if Netflix’s taxonomies have identified nearly 77,000 genres of film, it has the power not only to “show you things you might like, but [also to] tell you what kinds of things those are. It is, in its own weird way, a tool for introspection.”5

Yet, there are several reasons to doubt these algorithmic promises. An algorithm is by nature a nominalistic mathematical function which simplifies and abstracts from richer, qualitative realities. That’s one source of its power. Yet as Netflix discovered, the intelligibility and usefulness of algorithmic outputs often require qualitative ‘tagging’ by human beings. This in turn encodes contingent cultural scripts into data in order to translate “the mathematical substrate of reality in culturally readable ways.”6 As in the land of Oz, there are Wizards behind our machines, and these Wizards are neither disinterested nor objective. We know, with only a little reflection, that Netflix is not an innocent recommendation engine: its goal is to “optimize consumption” within unstated parameters, such as the need for a ‘critical mass’ of available content that fits a genre output, the limitations of the user interface, and of course, maximizing profitability.7

More importantly for our reflections on digital health, however, is a second algorithmic simplification; namely, how algorithmic implementation elides meaningful differences between our desires and thereby renders our psyches opaque. Reflect on what is signified about your soul by the presence of several Shakespearean art films in your queue, and by your hovering over but not selecting the preview of a racy episode of Alien Babes from Mars. As Amatriain admits, Netflix’s algorithms must distinguish between what he calls our “aspirational” and “real” desires.8 If it recommends many Shakespearean films that you queue but never watch, you’re less likely to engage with the service; if it recommends many films like Alien Babes from Mars, your engagement is again likely to decrease because the recommendations violate your self-image. Its algorithms must enable enough self-deception to be attractive while simultaneously outputting recommendations that engross their users because they were explicitly or implicitly desired.

So let us ask the Netflix Wizard: What exactly is an implicit desire? Is it an effective-but-repressed desire uncovered by an algorithm that knows us better than we know ourselves? Or is our implicit desire more like the Augustinian yearning of a restless heart for an immortal Beauty which transcends our concupiscent infatuation with the world’s transient pleasures? Until we can answer those questions — deep philosophical and theological questions, all — the problem of equivocal five-star ratings has not been eliminated by the digital algorithm. Or we could ask: What if reflecting on libidinous ambiguity is essential to living a moral life in a fallen world, and algorithmic clarity displaces the ambiguity?

It is with these questions in mind that we turn to consider the implications of the change from analog to digital medicine for Catholic medical ethics.

II. FROM ANALOG TO DIGITAL MEDICINE

Traditional medicine is analog. It is largely based on the observation and measurement of discrete events in highly artificial settings, such as clinical visits. We occasionally perform continuous monitoring of biological processes, but typically only in intensive care. Traditional medicine also satisfies our description of publicity. Not only is the nature, depth and scope of data gathering limited by diagnostic and therapeutic goals which we share with our physicians, but our consent is explicitly sought and given. Finally, traditional medicine typically focuses on outcomes; the purpose of the clinical encounter is not prevention but the diagnosis and treatment of disease states.

Today, growing alongside (though not yet displacing) traditional medicine is digital medicine. Unlike the discrete observations of analog medicine, digital medicine is based on continuous, real-time biomonitoring in everyday settings. Furthermore, the nature, depth and scope of the data being surveilled tends to satisfy our characterization of privacy. We are often unaware of the depth or volume of data points being gathered about us, and surveillance aims at generating a behavioral/biological/genomic and environmental surplus of information.9 It is often unlimited by any single goal or set of goals, let alone goals shared by observers and those being observed.

Moreover, we do not typically know or understand how all of this data is being shared, collated and analyzed to create our various digital personae, nor would many of us find the algorithms intelligible even if we understood that such processes were taking place and how they impact our lives. Finally, the third characteristic of digital medicine is its potential focus on disease prediction, risk-management, and behavioral optimization over and above diagnostic and treatment outcomes.10

If these characterizations are roughly correct — or at least practically useful — then we are on the cusp of, but have not yet entered into, an era of digital medicine. (Many current applications of AI to health care are in fact supplemental to analog techniques.) Now is the time to consider how the shift from analog to digital medicine raises new ethical concerns. I believe these concerns can be divided into two categories: those we can articulate and address with relevant ethical principles (those which are novel per accidens), and those for which we do not yet have such principles (those which are novel per se).11

This is not a neat division. To say that a principle is relevant to a new concern does not mean that it has been adequately specified to address those concerns, but only that the types of harms and wrongs under consideration fall into known typologies. For instance, as I’ll illustrate in a moment, we already have nuanced ways of thinking about autonomy sufficient for identifying and articulating many concerns about ethical AI that didn’t exist 20 years ago for the simple reason that there was no AI significant enough to have concerns about. Likewise, we usually know how to divide our concerns about autonomy into those best addressed by technical solutions during the design process, those best addressed by cultural solutions, and those best addressed by juridical and regulatory structures. Indeed, these conversations are already taking place. The Church needs to do a better job of constructively participating in the policy conversations happening around these issues.

As a Catholic philosopher, however, I’m particularly fascinated by the second category of concerns, those for which we lack adequate principles. This type of novelty arises when technologies themselves create genuinely new types of harms or wrongs. The possibility of these genuinely new developments holds the potential to spur entirely new developments in the Catholic moral tradition, just as the advent of industrial capitalism, Jacques Maritain argued, once spurred the Church to discover what we now call Catholic Social Teaching latent in the dignity of human work.12 So after briefly surveying some issues in the first category, I wish to consider several reasons for thinking that digital medicine is capable of generating ethical issues and questions for which our current approaches to medical ethics are not just unprepared, but in principle, inadequate.

Thanks in part to our Principlist friends, the concept of autonomy is fairly well specified. We have multiple ways to drill down into various aspects of human agency, such as differences between capacity and competency, coercion and manipulation, rights and capabilities, formal and material cooperation, and so on. This in turn allows us to identify and articulate key ethical issues as they arise in the course of implementing AI systems in health care. For instance, we know what it means to ask and evaluate whether patients have given informed consent to medical testing and interventions.

We know that genuine consent is informed, competent and free; that acts of deception and withholding information undermine consent; and that unless a patient has access to decent choices, the educated skill to act on those choices, and the support necessary for those choices to be effective, negative freedom is worthless.

So consider biomonitoring again, focusing on what informed consent looks like in the context of fitness sensors and their accompanying apps. In a 2016 report, the Future of Privacy Forum (FPF) found that 30% of health and fitness apps had no privacy policy at all. Those that do are of varying quality, so FPF developed a useful document describing best practices for consumer wearables and wellness apps — a ten-page checklist one can use to evaluate privacy policies that are often two or three times as long as the checklist, and whose implementation disables the functionality of the app! One concern our current understanding of autonomy attunes us to (among many), then, is the sheer amount of legal verbiage one must read in order to understand a privacy policy.13 The predictable result, even among those who know enough to care about their privacy, is seemingly justified rational ignorance on the part of app users, who judge that the cost of educating themselves sufficiently to make an informed decision outweighs any potential benefits of that decision.
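
Back-of-the-envelope arithmetic makes the rational-ignorance judgment vivid. The numbers below are hypothetical, and if anything conservative:

```python
# Invented but plausible inputs for the cost side of the judgment.
POLICY_WORDS = 6_000       # length of one typical privacy policy
READING_WPM = 240          # adult reading speed, words per minute
POLICIES_PER_YEAR = 40     # apps, sites, and devices newly accepted
VALUE_PER_HOUR = 25.00     # dollar value the user places on an hour

hours = POLICY_WORDS / READING_WPM / 60 * POLICIES_PER_YEAR
print(f"{hours:.0f} hours/year, worth ${hours * VALUE_PER_HOUR:,.0f}")
# ~17 hours, ~$417 per year, weighed against diffuse, hard-to-price benefits
```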

What is novel here are the new ways in which the potential costs of losing privacy can be significant. In an analog world, the primary costs of losing privacy include exposure (a breach of confidentiality), compromised decision-making, and the loss of self-determination — all costs the informed consent process is designed to minimize. In a digital world, however, informed consent processes often fail to constrain or avert these costs. For example, Fitbit passively tracks personal identifiers, demographic information, commercial information, biometric information, internet and other electronic network activity information, geolocation data, photographic information, professional and employment information, information about your friends, and “inferences drawn from any of the above.”14 This is in addition to the information shared, aggregated and analyzed using third-party cookies from Facebook, Pinterest and Google.15 What are the costs? In 2018, a sliver of similar information published by a competing app (Strava) was sufficient for Nathan Ruser, a student at the Australian National University, to identify and map U.S. forward operating bases in Afghanistan, and for journalist Alec Muffett to identify CIA black sites in Djibouti.16 Women’s health apps like Ovia share menstrual information with employers, “allowing [an employer] to see how many of its employees are pregnant, trying to get pregnant or facing high-risk pregnancies.”17 Given that “many of the country’s largest and most prestigious companies still systematically sideline pregnant women,” the actual, potential, and counterfactual costs of losing reproductive privacy should be a significant cause for concern.18 To these dangers we should add the potential for fraudulent or malicious adversarial attacks on users’ health data for fun or profit.19

In light of plentiful examples like these, we can ask some obvious questions: Is the average person’s consent to digital privacy policies fully informed and competent to judge the risks and benefits of ubiquitous biomonitoring, analysis, sharing, and research? Can it be free when one’s voluntary submission to biosurveillance is incentivized by employers? Given corporate-commercial alliances that share or sell privacy-protected data to train corporate AI models, such as occurred in Project Nightingale (and many other places), and the increasing pressure on health care institutions to share such information in order to have access to the most useful and competitive programs; given the undisclosed use of AI-assisted diagnostic and treatment recommendations by physicians; given the potential for illicit cooperation created by sharing personal data to train programs for malicious, self-interested, or immoral purposes, including the manipulation of clinical decisions in service to institutional quality metric requirements and increasing profits; given the relative ease of de-anonymizing health data — given all of these circumstances and others, are our current practices of providing users with verbiage to read and sign sufficient for protecting and providing privacy, confidentiality, and data security?

Call me skeptical, but it seems quite impossible to inform individuals about the true scope and implications of the information being collected about them, or even to inform the public about the growing scope of biosurveillance, the good-faith efforts of the press notwithstanding. Perhaps we need new techniques of providing adequate information for consent, regulatory oversight of health care app design processes by Institutional AI Boards, new juridical solutions which punish egregious offenders, new institutional policies requiring alliances with preferred and vetted vendors, and regular algorithmic audits, or even new AIs that help us tell the difference between good and bad AIs (we could call one ‘Deckard’).

Although I don’t have definite answers to any of these questions, that’s not the point. The point is that we already know how to raise questions about autonomy (and beneficence, and justice), and that the digital revolution in health care is therefore not making ethics obsolete but just the opposite: the need to develop, specify and apply ethics to digital health care is acute. Unfortunately, the number of Catholic academics pursuing these issues is currently dwarfed by the number of secular and non-academic thinkers working on them. Given the Church’s highly developed moral tradition, Catholic bioethicists could — and should — make major contributions to these conversations.

III. CLINICAL ETHICS IN A DIGITAL AGE

Now let us turn to the second category of novel concerns, those that are novel per se. I suggested earlier that there are good reasons to believe we are not prepared to deal with the ethical problems presented by digital medicine. Consider the following …

We like to think of disease as having a single cause: one bacterium causing only one kind of infection, or one culprit like cholesterol causing the arteries to harden. This assumption is not wholly mistaken. Many diseases do have a single cause, and those diseases were the low-hanging fruit which fell easily to antibiotics and vaccines. The diseases that remain, however, including heart disease, cancer, stroke and diabetes — all in the top ten causes of death — have complex causes. “They arise from a complicated web of factors,” Thomas Hager explains, “some genetic, some environmental, some general, some personal, that add up to disease in ways that we are still struggling to understand. Because of the complexity of these diseases and the number of unknowns involved, we talk about risk factors — habits and exposures that might shift the chance of our getting a disease one way or another — more than root causes.”20

Treatments for these risk factors precipitated some of the more intractable controversies in 20th century medicine, and will continue to do so. The first medication prescribed for the prevention of disease in asymptomatic persons was Diuril in 1958 (for hypertension), followed by Orinase in 1967 (for pre-diabetes), followed by statins like Mevacor in 1987 (for heart disease). These drugs represented a shifting definition of disease, one no longer based on symptoms or clinically observed signs, but on quanta. As Charles Bardes writes,

Defining disease by a number, such as the blood pressure, the blood glucose, or the blood cholesterol, shifts the focus of medicine from what a patient feels to what a doctor measures. As a consequence, doctors will recommend drugs to many people who feel perfectly well, in essence transforming a person from well to sick. Since the phenomenon occurs across the country and even the world, a huge segment of the population undergoes the transformation from putatively well to putatively sick. The body is essentially sick, and the population is essentially sick. Both need medication. In this “preventive medicine paradox,” organized medicine reduces the overt manifestations of disease by expanding the number of people assigned to disease categories.21

The last decade of debate over the benefits of statins encapsulates this trend.22 Although cumulative statin sales are expected to top $1 trillion by 2020, statins are only marginally beneficial for moderate-risk, primary prevention targets for whom poor diet and a sedentary lifestyle remain the biggest predictors of heart disease. Although statins are among the safest and most researched drugs in history, and demonstrably beneficial for the secondary prevention of heart disease, many social and scientific studies argue that their massive overprescription is a key contributor to a culture of overdiagnosis, patient anxiety, and consumer hazard, as well as to ever-expanding notions of malpractice and provider liability.

More importantly, to recognize diet and lifestyle as asymptomatic predictors for disease is to acquiesce to the medicalization of daily life, to redefine disease as a normal bodily state we are always already treating. No longer does pathology presuppose teleology (normative functioning); in a culture of effective preventative medicine, pathology is normal functioning, since bodies are something to be managed rather than powers we enjoy. Stated hyperbolically, life becomes a risk factor for death, something appropriately regulated with pills and devices for its entire duration. The Baconian roots of this shift have not been lost on Hippocratic thinkers like Leon Kass and Hans Jonas: New technologies increasingly transform states like fertility, obesity, aging, teenage body anxieties, and childhood energy into disease categories to be medically controlled rather than aspects of the human condition to be navigated through virtuous habits, social support and prudent living.

I suspect these issues and trends are going to be compounded and exacerbated by digital medicine. With the advent of powerful AI, the rate at which we can discover and identify risk factors is limited only by the scope and granularity of our data sets. As we cover ourselves in biosensors capable of surveilling us from behavior to genome, we can increasingly focus on disease prediction and risk-management through medical management and behavior optimization. I believe these changes will generate ethical concerns that are per se novel, concerns we are presently unprepared to deal with at the theoretical, social, or clinical levels.

Consider, for example, the recent report by McKinsey & Co. called “Insurers Need to Plug Into the Internet of Things — or Risk Falling Behind.”23 The “internet of things” (IoT) refers to a world in which sensors and actuators are networked to computing systems in order to monitor and intervene in each other’s behavior and in the natural world. The IoT excites McKinsey because it makes possible a “whole new world of business models.”24 In the analog world there are contracts that can be verified (e.g., when we visibly exchange money for coffee) and contracts that cannot, the latter being typical of insurance contracts based on informed guesses about, say, one’s driving habits, which insurers could not observe in order to verify whether one was in fact minimizing their liability. In an analog age, insurers had to base their predictions on events and outcomes. What happens when we add surveillance sensors to cars to detect, quantify and analyze qualities like one’s average speed, braking distance, sobriety, or even mood and level of distraction?25 Hal Varian draws the obvious conclusion: “We can observe behavior that was previously unobservable and write contracts on it.”26 Varian argues that digital age contracts can be both contemporaneous and active. In the IoT, insurers monitoring their drivers can directly intervene to nudge good behavior, e.g., by alerting us when to stop drinking (given the expected time we will depart for home, when and how much we’ve eaten and exercised that day, and the quality of last night’s sleep). They can also intervene to punish bad behavior, e.g., by shutting off the car when we are too sleepy, inebriated, or distracted to drive safely. Finally, they can intervene through microtransactions in which our insurance rates respond to our behaviors in real time. In the digital age, McKinsey suggests, insurers will manage risk not by managing uncertainty but by managing individuals.
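
It is easy to sketch what such a ‘contemporaneous and active’ contract might look like in code. The sensor fields, weights, and thresholds below are invented for illustration; no insurer’s actual model is implied.

```python
# A Varian-style contract: the premium is a microtransaction recomputed
# from live sensor readings, and the contract can intervene directly.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    speed_over_limit: float   # mph above the posted limit
    hard_brakes: int          # count in the last hour
    drowsiness: float         # 0.0 (alert) to 1.0 (asleep)

BASE_RATE_PER_MILE = 0.05     # hypothetical per-mile premium, in dollars

def premium_per_mile(r: SensorReading) -> float:
    """Microtransaction: reprice risk continuously instead of annually."""
    surcharge = 0.01 * r.speed_over_limit + 0.02 * r.hard_brakes
    return BASE_RATE_PER_MILE + surcharge

def intervene(r: SensorReading) -> Optional[str]:
    """The 'active' side of the contract: nudge, then prevent."""
    if r.drowsiness > 0.8:
        return "disable_ignition"   # punish or prevent bad behavior
    if r.drowsiness > 0.5:
        return "alert_driver"       # nudge good behavior
    return None

reading = SensorReading(speed_over_limit=7.0, hard_brakes=2, drowsiness=0.6)
print(premium_per_mile(reading))    # 0.05 + 0.07 + 0.04 = 0.16 per mile
print(intervene(reading))           # alert_driver
```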

This third characteristic of digital data analysis may take several forms in digital medicine. With Abilify MyCite, a sensor-enabled pill, we already have the ability to manage short-tail risks by increasing prescription compliance; noncompliance is a massive problem that costs the U.S. health care system an estimated $100-$289 billion per year.27 FDA approvals of sensor-enabled pills like Abilify MyCite have the potential to significantly improve prescription compliance, lower insurance rates and reduce hospitalizations through patient surveillance and behavior management. Of course, they also have the potential to increase insurance rates for non-compliant patients through punishing microtransactions that price them out of the insurance market, thereby denying them access to health care.

The implications of managing long-tail risks with digital medicine are even more significant. These risks are defined by a significant temporal distance between an exposure to a risk factor and the manifestation of damage, and thus between the action that creates a liability in an insurer or employer, say, and recognition of that liability. A classic example of long-tail risk was the exposure of shipping and construction workers to asbestos during World War II. These exposures induced latent processes whose harm would only manifest decades later, and not only in the workers, but also in genetic harms passed on to their children. We can reasonably forecast that AI-driven predictive analytics will significantly change how we identify and manage long-tail risks. It will mine our behavioral and genetic surplus to identify scores of presently unknown correlations between exposures and disease, thereby enabling individuals to better manage their risks and institutions to limit their liabilities. Combined with virtual RCTs, these correlations could be used to develop lifesaving safety regulations and risk-minimizing best practices, perhaps, eventually, in something approximating real time. (If life has become a risk factor, it is because the world has become an ‘exposure.’)

On the other hand, such regulations and practices may well become imperatives. McKinsey’s report emphasizes that greater predictive accuracy reduces uncertainty but increases long-tail risk, leading “to a demutualization and a focus on predicting and managing individual risks rather than communities.”28 They suggest that current insurance business models will have changed so greatly in five to ten years that “players’ models will either have to become IoT focused or will decline,” where ‘IoT focus’ means the ever-increasing surveillance, analysis, and manipulation of consumer behavior.29 McKinsey thus recommends “more frequent customer interactions” through sensors and wearables, “enhancing pricing and risk accumulation control” through Varian microtransactions, “driving efficiency through sensor-based automation (e.g., trigger-based claims payments, apps),” the automatic management of user behaviors (e.g., driver- regulating cars), and widespread data-sharing and data partnering to improve analytics and increase monetization by “offering proprietary data and analytics solutions to third parties.”30

We are not prepared for clinical bioethics issues raised by the AI-driven future I just described, for several reasons.

ILL-DEFINED VALUES

First, putting aside its potential to improve patient outcomes, the enthusiasm of digital medicine advocates often rests on new, ill-defined values like ‘humanization’ and ‘personalization.’ That is not to say these are the wrong ideas to invoke, or that they cannot be defined, but they are certainly not as well-defined and operationalized as, say, ‘respect for autonomy.’ Nor am I confident that medicine will tend in the direction of these values. Companies like GE and thought leaders like Eric Topol are arguing that AI will “make healthcare human again” by freeing health care teams from repetitive data-entry in order to “work with their patients more closely,” returning doctors to the bedside, “and with more insight.”31 I am more skeptical. Thomas Hager tells a story about the day he received an unsolicited letter from an unknown physician offering him a statin prescription because an algorithm had identified him as a primary-prevention candidate. That’s about as far from the bedside as it’s possible to get, and it’s likely going to be typical of risk-management-based medicine in the age of Big Pharma. Or consider the common frustration of clinical ethicists with providers who decline beneficial but risky surgeries for patients on (usually unstated) grounds of malpractice risk. When the power of AI to predict and manage risks dictates micro-transacted insurance costs, I suspect providers will be even less willing to talk to ethicists and patients rather than follow profit-maximizing algorithms that formalize and justify such decisions. And we haven’t even mentioned the lifestyle drugs and surgeries that can be marketed and asymptomatically prescribed due to patient anxieties and lifestyle preferences, let alone potential disease states. If the cultural history of statins, tranquilizers and Viagra provides any clues about what AI-driven health care will look like, it has the real potential to be less ‘humanistic’ rather than more.

ARCHAIC VALUES

Second, there are good reasons to think that our existing ethical values are not fit for purpose in the digital age. For instance, many ethicists mention the need to balance autonomy and beneficence in order to develop human-centered AI. Yet these values count for too much and for too little when long-tail risks arise from unchosen genomic features of our bodies and environments, or from strong evaluations and basic projects defining our individual and communal identities, such as our religiosity or family engagement. When ubiquitous surveillance and the medical management of life are taken for granted — when one’s fertility or ethnicity or zip code can be regarded as risk factors, much as we presently treat smoking, refusing to vaccinate, or being an energetic boy in an elementary school classroom as societal risk factors — values like autonomy and beneficence are either trivial or totalizing by turn.

MODERN MORAL PHILOSOPHY

Third, the rise of predictive medicine will split patients into two demographics: older patients with acute or chronic illnesses, and younger patients who desire medical management of their lifestyle, environmental, and genomic risk factors. The fundamental questions posed by the latter category are these: What habits and lifestyle choices are most conducive to living a long, satisfying, and flourishing life? What values am I confident I will affirm in the next two, three, or five decades of my life, such that they should guide my decisions in the present? Which habits and skills do I wish to define my life, and what tasks must I perform, and what sacrifices must I make, to realize them over the course of a lifetime? Post-Enlightenment liberal culture does not consider these to be moral questions, because lifestyle questions are seen as matters of preference. Ethical questions, by contrast, are answered through politics, particularly through the use of a rationalist grammar of rights and consequentialist overrides. This legalistic approach to ethics is fundamentally unable to ask or answer the kinds of questions raised by digital medicine. AI-driven preventative medicine requires a different mode of ethics, namely, an applied virtue ethics of medicine.

Unfortunately, deep divisions in modern culture prevent this. Aside from its emphasis on protecting innocents from harm, Catholic bioethics differs from non-Hippocratic traditions by understanding positive freedom as a developed capacity to discern and choose what perfects us as persons. Yet liberalism has convincingly generated its own nominalist notion of positive freedom as effective freedom for self-determination. In clinical practice thus far, the difference between the two approaches to freedom has been negligible. We respect autonomy because of the proper and legitimate role of prudence to direct one’s life. Although Catholic bioethics does not recognize an autonomous ‘right’ to choose what is evil, as do many of our Principlist friends, we respect patients’ right to choose among beneficial treatments and to forego futile or burdensome care. Since the list of interventions that are per se evil is fairly short (and concentrated within a few specialties), there are significant overlaps between Catholic and non-Hippocratic approaches to autonomy.

As clinical practice shifts towards risk management, however, these overlaps will begin to disappear. Clinical ethics will increasingly involve counseling ever younger patients about what kind of person they ought to be. As Alasdair MacIntyre argues, these conversations look very different from the standpoint of the two models of practical reason we distinguished earlier. The expressivist (i.e., nominalist) values coherence and authenticity in her desires, while the Neo-Aristotelian (i.e., essentialist) values truthfulness and effectiveness. As MacIntyre writes, both parties ask,

How it is that someone’s desires can be such as to make her or his life go well or go badly, what is involved in resolving conflicts in which either desire is pitted against desire or desire is at odds with rational judgment, and what constitutes a good reason for trying to satisfy this particular desire. … 32

An expressivist answers this question by scrutinizing her desires to discover which of her culturally-imposed desires and attitudes she identifies with and can reflectively endorse.33 Her autobiography is a romantic narrative of discovering and liberating her true affections, one in which moral philosophy and religion figure as “impositions” and “constraints upon [her] thoughts and feelings.”34 For the Neo-Aristotelian, in contrast, moral maturity requires learning to distinguish what one wants to do, be, or have right now from what it is good to do, be, or have over the course of one’s life. Her autobiography is an Augustinian narrative of learning from the failures that arise from inadequately educated judgment and misdirected desires so as to develop the moral and intellectual habits that enable us to flourish as human beings.35 To counsel patients as an expressivist is, therefore, to prompt young people to unmask cultural deceptions and biases in order to discover which attitudes and lifestyles they wish to endorse, whereas to counsel as a Neo-Aristotelian is to educate the young about the final end of human life — which is likely theological in character — and about the various practices whose internal goods are conducive to or constitutive of that end.

The two models of practical reason thus generate contradictory models of virtue, and combined with digital medicine, new opportunities for and kinds of injustice. Consider a brief example. Today, roughly 15% of adolescents have prediabetes and “are expected to lose 15 years from their life expectancy and may experience severe, chronic complications by their forties.”36 In order to “delay or prevent the complications of diabetes,” the ADA recommends “lifestyle management” counseling to (1) “promote and support healthful eating patterns” while (2) maintaining “the pleasure of eating by providing nonjudgmental messages about food choices.”37 Moreover, the ADA argues that (3) because it “can improve outcomes and reduce costs,” this service “should be adequately reimbursed by third-party payers.” These recommendations clearly amalgamate (1) physiological or aspirational goals, (2) individual and cultural preferences, and (3) the financial goals of third parties related to long-tail liability management.

We saw a similar amalgam of aspirational, actual, and third-party financial goals in Netflix’s algorithm. There we were concerned about what Ed Finn calls “corrupt personalization”: that Netflix manipulates user desire and enables self-deception for the sake of profit. We should have similar concerns here. Should prediabetes counseling take an expressivist focus, say, on identifying manipulative marketing and exploitative or oppressive food distribution in minority neighborhoods, or challenging white, bourgeois bodily norms? How do we communicate Neo-Aristotelian concerns that medicalizing young lives encourages lifelong pharmacological dependence and displaces cardinal virtues with technological conveniences? Who sets the risk thresholds that justify medical or behavior interventions, and what are their interests? How will digital surveillance and AI-enabled microtransactions be used to positively or negatively nudge prediabetic risk factors? For which marginalized and vulnerable subgroups will these nudges become imperatives, and what are the consequences if they cannot be satisfied without significant socioeconomic changes individuals cannot control (e.g., greening the food deserts) or cultural loss (e.g., eliminating Louisiana soul food)? Whose conception of the good life will our digital gatekeepers embody, and which virtues will they encourage, require, or enforce? These questions only scratch the surface of virtue ethics in the age of digital medicine.

CONCLUSION

To the extent the AI-driven growth of predictive analytics expands the field of preventative medicine, and various public and private institutions take more active roles and financial interests in the treatment of these risks, we should expect a growing and acrimonious divide between competing models of virtue ethics and medical management. Clinical ethics will increasingly require explicit discussions of what Charles Taylor calls strong evaluations — not the responses we happen to have towards reality (the focus of rule-based ethics), but the responses we ought to have toward various spheres of life, including what it means to live well, and what it means to be healthy.38

The culture, practices and institutions of modernity are decidedly biased in favor of the expressivist account. There is very little room left to develop a culture of chastity in our public schools, for example, because the widespread acceptance and public subsidy of chemically and surgically managed sex has made Neo-Aristotelian approaches to sexual virtue seem not merely archaic and oppressive, but culturally unintelligible. Similar factors count against the attractiveness of genuinely Catholic answers to liability-driven questions about lifestyle management. Yet, I am hopeful. For I also suspect that the rise of digital medicine and the acute need for a relevant and applicable ethics of virtue will give Catholic bioethics powerful new opportunities not merely to oppose the predations of the culture of death, but to propose a spiritually sound and empirically substantiated vision of well-being.

JOSHUA SCHULZ, PH.D.
Associate Professor of Philosophy
DeSales University
Center Valley, Pa.
joshua.sc[email protected]


ENDNOTES

1. See Gopal Krishnan’s technical blog on the extensive A/B testing Netflix performs on images so as to improve attention-capture in “Selecting the best artwork for videos through A/B testing,” May 3, 2016. https://netflixtechblog.com/selecting-the-best-artwork-for-videos-through-a-b-testing-f6155c4595f6.

2. Xavier Amatriain, “Netflix Recommendations: Beyond the 5 Stars (Part 1),” The Netflix Tech Blog, April 6, 2012. http://techblog.netflix.com/2012/04/netflix-recommendations-beyond-5-stars.html.

3. Stephen Wolfram, A New Kind of Science (Champaign, Ill: Wolfram Media, 2002), 5.

4. Ed Finn, What Algorithms Want: Imagination in the Age of Computing (MIT Press, 2018), 25.

5. Alexis Madrigal, “How Netflix Reverse-Engineered Hollywood.” The Atlantic, Jan. 2, 2014. Available https://www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/.

6. Finn, 34.

7. Tom Vanderbilt, “The Science Behind the Netflix Algorithms That Decide What You’ll Watch Next.” Wired Magazine, August 2013. Available https://www.wired.com/2013/08/qq-netflix-algorithm/. Also see Amatriain’s more technical comments at “Netflix Recommendations: Beyond the 5 Stars (Part 1),” The Netflix Tech Blog, April 6, 2012. http://techblog.netflix.com/2012/04/netflix-recommendations-beyond-5-stars.html.

8. Ibid.

9. As the 2019 AI Now Report states, “AI health systems … [have expanded not only] what counts as ‘health data,’ but also the boundaries of what counts as healthcare. The scope and scale of these new ‘algorithmic health infrastructures’ give rise to a number of social, economic, and political concerns” (53). Available at https://ainowinstitute.org/AI_Now_2019_Report.pdf.

10. The potential benefits of digital medicine are vast. See Eric Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (Basic Books, 2019).

11. This is not a neat division. To say that a principle is relevant does not mean that it has been adequately specified to address new concerns, but only that the types of harms and wrongs under consideration fall into known typologies.

12. See Jacques Maritain, Man and the State (Chicago: University of Chicago Press, 1998), 104ff.

13. For example, Fitbit’s “Terms of Service” allows users to actively “post, upload, store, share, send, or display photos, images, video, data, text, music, exercise regimens, food logs, recipes, comments, and other information and content” (https://www.fitbit.com/us/legal/terms-of-service). While the content remains the property of the user, and privacy and sharing settings can be adjusted by drilling down into the app and manipulating individual settings, the default settings grant Fitbit a number of permissions. Among these is a “non-exclusive, transferable, sublicensable, worldwide, royalty-free license to use, copy, modify, publicly display, publicly perform, reproduce, translate, create derivative works from, and distribute Your Content, in whole or in part, including your name and likeness, in any media.” The radically asymmetric understanding between Fitbit and the average user regarding data collection and use generates the feeling of privacy, and so satisfies our earlier characterization of privacy in digital data analysis.

14. See Fitbit’s privacy policy: https://www.fitbit.com/us/legal/privacy-policy#how-info-is-shared.

15. See Fitbit’s data-sharing policy: https://www.fitbit.com/us/legal/cookie-policy.

16. Jeremy Hsu, “The Strava Heat Map and the End of Secrets,” Wired Magazine, https://www.wired.com/story/strava-heat-map-military-bases-fitness-trackers-privacy/; see https://twitter.com/AlecMuffett/status/957615895899238401.

17. Arwa Mahdawi, “There’s a dark side to women’s health apps: ‘Menstrual surveillance’,” The Guardian, April 13, 2019. https://www.theguardian.com/world/2019/apr/13/theres-a-dark-side-to-womens-health-apps-menstrual-surveillance.

18. Natalie Kitroeff and Jessica Silver-Greenberg, “Pregnancy Discrimination Is Rampant Inside America’s Biggest Companies,” New York Times, Feb. 8, 2019. https://www.nytimes.com/interactive/2018/06/15/business/pregnancy-discrimination.html.

19. See Samuel G. Finlayson, John D. Bowers, Joichi Ito, Jonathan L. Zittrain, Andrew L. Beam, and Isaac S. Kohane, “Adversarial Attacks on Medical Machine Learning.” Science 363, no. 6433 (March 22, 2019): 1287–89, https://doi.org/10.1126/science.aaw4399.

20. Thomas Hager, Ten Drugs: How Plants, Powders and Pills Have Shaped the History of Medicine (Harry N. Abrams, 2020), Ch. 9, my emphasis.

21. Charles Bardes, “Review of Jeremy Greene’s Prescribing by the Numbers: Drugs and the Definition of Disease,” Bellevue Literary Journal, Spring 2008. Bardes continues: “This expansion is vast. When insulin was the only treatment for diabetes mellitus, the disease was defined by the presence of elevated glucose and pathological signs or symptoms. This was equally true for hypertension and hyperlipidemia before the advent of palatable pills to treat them. Once these pills arrived, both treatment thresholds and disease categories shifted—always in the direction of increasing the number of people for whom the pill is recommended. Now, physicians are urged (and implicitly required) to treat not only symptomatic diabetes but asymptomatic diabetes and even a newly conceived category of pre-diabetes. One sees, lurking on the clinical horizon, a still newer notion of pre-pre-diabetes, as witnessed by the rightful concern over obesity in children. Each new definition increases, by many millions, the number of people who ‘should’ be treated.”

22. For an opinionated introduction, see Christopher Labos, “The Cholesterol Controversy,” Science Based Medicine, Feb. 15, 2019 at https://sciencebasedmedicine.org/the-cholesterol-controversy/. As Bardes counters, “Might the huge cost of diagnosing and treating people with asymptomatic conditions be better spent on other programs, such as encouraging children to exercise? Does the ease of taking pills divert our attention from the more difficult and possibly more righteous task of improving our dietary and exercise habits? Is it not curious that our diseases of excess consumption, primarily of food and especially unhealthy food, lead us to increase our consumption even more, now in the form of pills?” (Bardes, ibid).

23. Available at https://www.mckinsey.com/industries/financial-services/our-insights/insurers-need-to-plug-into-the-internet-of-things-or-risk-falling-behind.

24. Ibid., 2.

25. Some U.S. states (e.g., California) prohibit the use of contextual information in their calculations, allowing only information drawn from the vehicle itself; most do not. For an example of telemetric insurance algorithms at work, see Allstate’s Drivewise app, at https://www.allstate.com/drive-wise/drivewise-app.aspx.

26. Hal R. Varian, “Beyond Big Data,” Business Economics, 49 (1) 2013: 30.

27. See the summary in Jimmy B, Jose J., “Patient medication adherence: measures in daily practice.” Oman Med J. 2011; 26 (3):155–159.

28. McKinsey, 6.

29. Ibid., 9.

30. Ibid., 6-7.

31. GE Healthcare, “The AI Effect: How AI is Making Healthcare More Human,” MIT Technology Review Insights, 2019. Available online: https://mittrinsights. s3.amazonaws.com/ai-effect.pdf. See Eric Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (New York: Basic Books, 2019).

32. Alasdair MacIntyre, Ethics in the Conflicts of Modernity (Cambridge: Cambridge University Press, 2016).

33. Insofar as the emotivist recognizes no “authoritative standard, external to and independent of the agent’s feelings, concerns, commitments, and attitudes to which appeal may be made,” writes MacIntyre, it is “only in virtue of his or her endorsement of it that it has whatever authority it has for that particular agent” (23).

34. Ibid., 48.

35. Ibid., 49-59. If St. Augustine’s Confessions is the paradigm of the Neo-Aristotelian narrative, perhaps Flaubert’s Madame Bovary is a paradigm expressivist narrative.

36. Matthew A. Haemer et al., “Addressing Prediabetes in Childhood Obesity Treatment Programs: Support from Research and Current Practice,” Childhood Obesity 10, no. 4 (2014): 292-303. doi:10.1089/chi.2013.0158.

37. American Diabetes Association. Lifestyle management. Sec. 4. In Standards of Medical Care in Diabetes — 2017. Diabetes Care 2017;40(Suppl. 1):S33–S43

38. Charles Taylor, “Disenchantment-Reenchantment,” Dilemmas and Connections (Boston: Belknap Press, 2014): 294-297.


DISCUSSION QUESTIONS

  1. In this essay, Schulz argues that one of the significant changes with regard to data analysis is the shift from analog to digital data analysis. What are the benefits and liabilities of both processes?
  2. The article contends that the practice of medicine will be fundamentally changed by digital data analysis. How would such a change manifest itself in both individual physician-patient encounters, as well as in public health?
  3. Schulz writes that “new technologies increasingly transform states like fertility, obesity, aging, teenage body anxieties, and childhood energy into disease categories to be medically controlled rather than aspects of the human condition to be navigated through virtuous habits, social support and prudent living.” Has this shift in perspective already occurred? Is such a view consistent with Catholic thought?
