We’re all familiar with cinema’s portrayal of artificial intelligence (AI) – humans battling against human-like machines.
But despite this commonplace portrayal, in which technology mimics all aspects of human consciousness (sometimes even surpassing human intellect), contemporary AI – and what can feasibly be developed in the near future – is a far cry from this Hollywood depiction.
Even though the promise of artificial general intelligence creates the hype that drives further development of machine intelligence, right now AI can only replicate limited human tasks.
Think of the last time you had a question and “Googled it”. Google performed a massive search in less than a second, and was likely able to identify the key resources to answer your query.
Machine learning models, currently the most influential AI development approach, can replicate these limited, but exceptionally executed, tasks with unwavering consistency.
Because of these advantages, healthcare is increasingly turning to AI to make patient care more effective, safe, and efficient, but the question remains: Does the reality match the intentions?
AI in healthcare
In 2019, leading medical education researcher Ken Masters, an associate professor in medical informatics at Sultan Qaboos University, Oman, stated that:
“To be a competent doctor, an AI (artificial intelligence) system does not have to be the best doctor in the world. AI [only] has to be better than the worst graduating student in your class … If AI is better than your average student, it is [already] better than 50% of all doctors.”
While the thought of having AI healthcare providers might seem like a futuristic idea, AI is already integrated across the healthcare sector.
In radiology, AI has the potential to standardise protocols across institutions, increase timeliness of reports, and enhance diagnosis.
Wearable biometric sensors can provide outpatient monitoring, and smartphones may be well-placed to help in the fight against COVID-19, with one proposed app claimed to distinguish COVID-19 coughs from other types of cough.
Given the role of healthcare education in preparing students for their future clinical work, what role does AI have within healthcare education?
The role in education
Back in the 20th century, the author Arthur C Clarke said:
“Any teacher who can be replaced by a machine should be!”
Clarke’s prediction is already reality. YouTube and other digital technologies are being embraced by educators and students alike, and it’s now increasingly commonplace for healthcare educators to work alongside technology to deliver world-class education.
Anatomy, a very human science, is often leading the way in these technology-infused learning approaches.
On any given day, a healthcare student might engage with 3D prints, virtual reality or augmented reality to learn the structures of the human body.
Newer AI technologies are also being used, such as chatbots to help students with the traditional “Googled” questions such as: “What nerve supplies this structure?”
These innovations are being deployed across the university sector, from education to administration, with promises of enhanced “learner-centred” curriculum, decreased educator burnout, and personalised learning.
Coming to light, however, is a range of unintended consequences, particularly for students already marginalised by the higher education system.
Our interdisciplinary team – comprising a medical educator, a public health researcher, a medical ethicist and an expert in educational technology – drew on the wider literature to identify five tensions between the promises and perils of healthcare education infused with AI tools.
A common feature of all five tensions is the inability of AI to detect novelty and tolerate ambiguity.
Given that uncertainty is inherent to healthcare practice – and that uncertainty tolerance is a key attribute of effective healthcare providers – there’s a clear need to focus on uncertainty tolerance in healthcare education. This leaves human educators with the task of developing and deploying AI tools in a way that balances AI’s strengths in precision and consistency against its weaknesses in fostering learner uncertainty tolerance.
Our article, Artificial Intelligence and Clinical Anatomical Education: Promises and Perils, highlights tensions for healthcare educators to consider when integrating AI into the classroom.
These tensions stem from the inflexibility typical of AI against the innate variability, uncertainty and ambiguities typical of healthcare practice and healthcare education.
The human form is variable and diverse
AI depends on a set of assumptions that can be relatively static.
Our previous article on the spectrum of gender and sex within the human population illustrates the natural anatomical diversity humans present. Yet healthcare education technology continues to depict binary representations of sex, and primarily a single phenotype (for example, fit, healthy, young adults, predominantly with light skin tones).
How do we build healthcare education AI that enhances opportunities for students to become aware of the diversity and variability of human anatomy?
Healthcare practice is uncertain
Current healthcare curricula are increasingly focusing on merging essential knowledge with required healthcare practice skills, including professional virtues, people skills, and ethical awareness and reasoning.
While AI may be highly effective at presenting healthcare foundational knowledge, it’s less helpful (and potentially detrimental) in the development of healthcare practice skills.
At what point will AI integration tip the balance away from the humanness of healthcare towards the reproducibility and rigidity of AI?
AI focuses on standardisation and relies on biased data sets
Many AI tools are developed from data sets that reflect existing biases in healthcare. There’s increasing evidence that AI doesn’t just perpetuate these inequities, but exacerbates them.
Depending on the way the AI technology is integrated into the classroom, AI tools can perpetuate bias by, for example, sounding like an affable and friendly “white woman”, which may generate a perception in our learners that “serving others” is the role of the female healthcare provider.
How do we effectively integrate AI technology into healthcare education in a manner that is inclusive and unbiased?
Student support can be highly variable and individualistic
While AI can perform repetitive tasks such as grading, address routine questions about the syllabus, or keep track of absences, these seemingly repetitive tasks can sometimes be critical markers for educators to “step in”.
The challenge here is the extent to which we “trust” the AI to make decisions, in lieu of human educators.
What is the right balance between AI-led education and human-led healthcare education?
The way students engage in learning is variable
AI depends on students’ interactions with digital platforms so that data can be collected for analysis. But the limited variables AI tools rely on can lead to inaccurate reporting of learner engagement.
Many students like to use “offline” approaches to learning, or approaches different from those the AI is programmed to recognise. This can result in skewed reporting of learner engagement by the AI system.
How will AI, dependent on a limited and standardised set of variables for reporting, be effectively used to evaluate student learning?
Balancing the machine with the human
While these tensions might seem insurmountable, we propose some solutions for integrating AI into higher education in a thoughtful and considered manner.
- Increase transparency about when and how AI is used, and the limitations of its use within the given context.
- Ensure diversity in developers’ backgrounds and expertise, so that those who will use the technology in their teaching or healthcare practice, and those affected by the AI tool’s decisions, such as students and patients, are involved.
- Build AI healthcare education tools that embrace and convey the natural (and clinically relevant!) uncertainty, diversity and variability represented in human populations.
- Raise educators’ awareness of AI and build their expertise through professional development, supporting them in knowing when to align with, and when to challenge, AI recommendations.
- Build a curriculum that has purposefully integrated time without AI to allow students to develop their human skills.
- Develop AI educational tools that support, not replace, the human educator.
We encourage all educational institutions to offer courses and support for educators, and to replace the “new and shiny” wholesale adoption of AI educational tools with an approach that is more considered and realistic.
This will ensure all students benefit from the AI educational tools, not just a select group.