Ethicists have a critical role to play as artificial intelligence use increases, says webinar presenter

January 2025
 

The use of artificial intelligence is expanding rapidly in the health care sector, and ministry systems and facilities are among the countless providers deploying the technology.

Ethicists have an essential role to play in guiding decision-making around which artificial intelligence tools ministry organizations should use and in what way and for what purposes, says Michael McCarthy, associate professor and graduate program director at the Neiswanger Institute for Bioethics at Loyola University Chicago.


McCarthy shared his thoughts in a webinar in December that was part of an ongoing CHA series called Emerging Topics in Catholic Health Care Ethics.

When it comes to the deployment of AI, McCarthy said, "It's not enough to say, 'Is the technology useful?' … We're also saying, 'What are we using and why, and how does this enhance the patient experience?'"

McCarthy said that when implementing new technology, "figuring out what your values and goals are is really important."

Tools for health care
McCarthy began the webinar by explaining what artificial intelligence is and sharing some examples of how different types of AI are used in health care.

He said that with traditional AI, people program a digital system to perform algorithmic tasks and train that system to improve over time as it processes increasing amounts of data. One example is using AI to read scans and other imaging results to give clinicians insights about patients' medical conditions.

He said generative AI uses machine learning to create new content based on patterns in data. For instance, health care researchers may use AI to analyze large amounts of deidentified patient information to forecast likely outcomes of clinical trials. Or, on an individual patient level, a clinician may run a program during an exam that generates clinical notes based on what the patient and clinician are saying. This type is known as ambient generative AI.

McCarthy explained that the most advanced form of AI used in health care today is agentic AI: interactive, autonomous and adaptable technology directed toward a particular goal. An example is online chatbots that use AI to receive and process questions and provide responses in real time. McCarthy said a potential future use is robots guided by agentic AI that provide companionship, mobility help or transportation for elders.

He said the use of AI and related technologies has brought numerous advancements but also has invited many questions and concerns. He said with a laugh that among the questions is "whether any of these technologies will take over the world."

Return on investment
McCarthy said health care systems, facilities and technology companies are investing billions of dollars in AI, but it is uncertain what everyone's expectations are or what the return on investment will be. He said much of the research done so far on AI in health care has focused on whether the technology was doing what it was programmed to do, not on patient outcomes or on what other benefits the technology afforded.

He said it is important that health care providers have a thorough grasp of the technologies they are considering implementing, why they are implementing the technologies, what the potential risks and rewards are, and what goods are being achieved.

McCarthy said ethicists should be at the forefront of discussions of these concepts in the ministry. He referenced Ron Hamel, a retired CHA ethicist, who has said that questions of identity and integrity must be considered in everything the ministry undertakes — ministry systems and facilities must ask what it means to be Catholic health care providers and how they should act given the answers to that question. It is a matter of character and behavior, McCarthy explained.

McCarthy advised that ethicists prioritize bringing all key stakeholders in technology decisions to the table to discuss what the goals of AI use are and what values are at play. Guided by ethicists, stakeholders should think through who is responsible for the outcomes of technology decisions, who needs to know what about those decisions, and how trustworthy the decisions are. Some questions they may consider related to AI include: What do patients need to know and what consent is needed? What are the patient needs that are being addressed with the technology? How will health care staff be impacted by the use of the AI tools? How might underserved people benefit from or be harmed by the technology? And is information being used in an ethical way?

McCarthy said that when it comes to technology, "it's not about whether on its face it is good or bad but about how we think about it and use it."

He said just because a technology exists does not mean it should be used. Only if its use aligns with the organization's Catholic values and mission should it be deployed.

"It takes intentional effort to determine this," he said. "It's about how we think of the values of Catholic health care and how we move forward based on those values."

 


Trinity Health ethicist says artificial intelligence continues to hold much promise if used responsibly

By LISA EISENHAUER and JULIE MINDA

Artificial intelligence technology has been in wide use in health care for many years, and as its application expands greatly, ethicists at ministry systems and facilities remain heavily involved in decision-making around what technologies are used and in what ways.


At Trinity Health, Alan Sanders, vice president of ethics integration and strategy, says the system uses its ethics discernment process to vet each new technology and its usage. He says that with the proliferation of new AI technologies and applications, ethicists at Trinity Health help guide decisions around whether and how to deploy them.

He said that as long as new technologies are evaluated and shown to improve patient outcomes, can be implemented in a controlled environment with clear goals and a scientific setup, and are aligned with ministry values, they hold the potential to do a lot of good. "You have to be very targeted and strategic to make it work," he said, adding that it is about "how can we, the health system, do it in a way that's always as best as possible (to) enhance patient care."

Evolving technology
Sanders said that, like many other health systems, Trinity Health has long used a form of artificial intelligence called machine learning for functions like reading clinical test results, radiology images and scans, and for administrative tasks like coding and billing. But the use of newer artificial intelligence tools, such as the generative AI behind ChatGPT, is expanding the menu of options for AI use significantly.

Among the many technologies coming to the fore now are tools for augmenting clinical decision-making, such as ambient listening for clinical documentation, improving the revenue cycle and creating chart summaries for clinicians. There are also evolving uses of AI-powered chatbots to research information, either for internal use among health care staff or in patient-facing systems that field incoming questions.

Sanders said Trinity Health has been using its existing discernment process to evaluate the conveyor belt of new technology available to the health system, to determine what makes sense to implement and how, based on Catholic health ministry values.

Discernment
Some of the questions Sanders said he and others across Trinity Health use when helping leaders discern the use of new technology include: What is the purpose of the technology? How would the implementation be done? How is patient privacy being guarded? How is patient care quality impacted? What are the biases of the program and how are they addressed? How is data being used? What are the anticipated outcomes and how will those be assessed? How are staff impacted, including whether their jobs could be made obsolete? How will the technology be adjusted if fixes are needed? And what are the legal liabilities of using the systems?

Sanders noted that Trinity Health is establishing a governance process for the use of AI across the system.

He said one area of particular focus and concern among Trinity Health ethicists is informed consent. He said it's essential that transparency remains a top priority in all technology use. The system must determine in advance from an ethical perspective what patients need to be informed about when it comes to the use of AI, how to inform them, and how to secure their consent when it's needed.

He said that the use of big data systems has involved similar questions and deliberations.

Standards needed
Sanders said formal guidelines for AI usage are needed in health care, and organizations like the National Institute of Standards and Technology have been releasing principles for AI that can be instructive.

He said the Vatican also has been issuing information on how it is looking at AI and the ethical questions that must be addressed as AI use increases. Last spring, speaking at an annual gathering of scientists and experts organized by the Vatican's Dicastery for Education and Culture, Pope Francis said it is critical that artificial intelligence be used responsibly and that human dignity remain the top priority in any technology deployment. Sanders and other Trinity Health ethicists have been keeping abreast of such Vatican statements to inform their own work.

Sanders is cautiously optimistic about the promise of new technology. He said that when it comes to artificial intelligence advances, as with all other technology, "It has its risk, but it certainly has its benefits as well, potentially."

 

Copyright © 2025 by the Catholic Health Association of the United States

For reprint permission, please contact [email protected].