By Olivia Wong
With artificial intelligence (AI) on the rise, students and faculty at Jefferson University have begun to discuss the technology's benefits and implications. In a recent lecture sponsored by the Creativity Core Curriculum, six Jefferson faculty members shared their thoughts on the use of AI in professional settings.
In the past few years, AI has permeated healthcare, making predictions, organizing data, and even assisting in surgery. It has been used to identify polyps during colonoscopies and to serve as a chatbot for patients worried about their symptoms, bridging the gap between a patient's prescribed treatment and their continued compliance at home. With more than three million healthcare vacancies expected in 2026, an industry facing a dwindling workforce is certainly considering the benefits of AI.
The six panelists prepare for the lecture series. (Photo: Olivia Wong)
Dr. Dimitrios Papanagnou, an emergency medicine professor at Sidney Kimmel Medical College, emphasized that AI is not a new concept and that there is a dire need to familiarize healthcare students with it, especially as they transition into the medical field. He noted that computerized knowledge and human interpretation can be married by incorporating computational grounded theory and natural language processing into research and educational settings.
However, professionals are weighing these benefits against one of AI's biggest shortcomings: it is not human. Can humanistic care truly be achieved through something artificial?
"Can humanistic care truly be achieved through something artificial?"
Although AI serves as a powerful tool, Professor John Dwyer stressed that "people are the agents of change." It will be difficult to decide who takes the blame if AI makes a mistake with a diagnosis or malfunctions during surgery. Additionally, the ability to speak and listen to a patient must be preserved to achieve humanistic care. Professor Juan Leon explained that although AI has the power to read an X-ray or CT scan, a radiologist must still sign off on the reading. Delivering test results and diagnoses with empathy and thoughtfulness is likewise part of humanistic care. Emotion may be an obvious difference between machines and humans, but it remains an essential reminder in the face of physician burnout and a complicated healthcare system.
“You are the locus for affect,” said Professor Leon.
Dean Barbara Kimmelman also raised the concern that those receiving the benefits of AI are largely dependent on the powers controlling it. Referencing Elizabeth Tunstall's "Decolonizing Design," Professor Dwyer agreed. "AI does not benefit everyone equally," he stated, pointing to the digital divide: almost three billion people in the world still lack access to technology. Along with this inequity, Dr. Papanagnou highlighted that the data sets AI relies on may carry bias. This is dangerous in a medical context, because groups underrepresented in the data may be negatively impacted by the use of AI. While it is convenient to have knowledge within reach, it will be a challenge to distinguish when and in what context it is to be trusted. "[AI may] widen health disparities unless done in an equitable way," warned Dr. Papanagnou.
Professor Leon displays an AI-generated image of Renaissance painters and cyborgs to demonstrate exciting randomness. (Photo: Olivia Wong)
It is also crucial to discuss plagiarism in the context of a university. Since AI tools like ChatGPT have been used by students to write essays or copy answers to homework questions, some are nervous about using them at all for fear of being flagged. Additionally, AI has a history of producing incorrect information. Professor Kyle Armstrong mentioned the retraction of news stories published on Philadelphia Sheriff Rochelle Bilal's campaign website after news outlets could not verify that the stories were true. This is not just happening in journalism; AI-generated content has also infiltrated the scientific literature. Peer-reviewed journals, including those affiliated with the American Association for Cancer Research, have had to correct articles due to manipulated, incorrect images. It is almost frightening to realize that AI has succeeded in flying under the radar of the peer-review process.
In light of all these concerns, however, Professor Kathryn Gindlesparger wants faculty to prioritize AI literacy instead of avoiding the technology altogether. While integrity and rigor must be preserved, students should not be afraid to engage with AI-provided themes and patterns, using them as a starting point for their studies. The presence of AI in school and the workplace is a new reality, and blanket resistance would benefit neither students nor faculty.
Just as a car needs a driver to operate, the panelists agreed, people are still required to "drive" AI. As AI continues to be incorporated into the lives of students, faculty, and professionals, it will be interesting to see how universities and the healthcare field continue to adapt.