Opinions on the value of AI in medicine and its possible influences on medical practice

In this study, we examined future physicians’ perceptions of the possible influence of AI on medicine. In general, their views were favourable, and they had high hopes for AI in medicine. They saw AI as an assistive tool that may improve doctors’ access to information, help physicians make more accurate clinical judgments, minimize medical errors, and improve patients’ access to healthcare. Similarly, two-thirds of the students in a comparable study stated that AI developments would make medicine more exciting [16], and two-thirds in another study had positive attitudes toward the clinical use of AI [18]. Regarding the PAIM scale scores, students’ perceptions were more positive in the “Knowledge and Trust” subdomain than in the “Disadvantages and Risks” and “Informed Self Control” subdomains, which could be interpreted as being captivated by the excitement of a promising new technology while simultaneously harbouring substantial concerns. This is consistent with earlier research; concerns raised about AI in healthcare include confidentiality and privacy, patient safety, the impact on the profession’s humanistic components, and the rise of commercialized medicine [2, 14, 17, 25,26,27]. The results varied according to gender, but not according to students’ year of study. We regarded the gender difference as possibly reflecting men’s greater reported interest in technology. The absence of a difference between study years supports the idea that contemporary medical education imparts little knowledge or understanding of AI.

A great majority of the respondents felt that AI could not replace physicians; instead, they thought it could serve as an assistant or a supporting tool. Likewise, the majority of students surveyed in other studies perceived AI as a partner or a tool rather than a rival [16, 18]. Nevertheless, half of the students in this study were concerned about a reduced need for physicians and subsequent unemployment. This concern about the ways AI might negatively affect professional income and opportunity has been reported in other studies as well, although their participants were less concerned, with rates ranging from 29.3 to 38.6% [15,16,17, 28]. Beyond worries about personal opportunity and job security, it is not difficult to foresee that AI will have important effects on clinical care and therefore raises concerns about professionalism in medicine. Indeed, several scholars rightly argue that AI would be incapable of deep conversation and empathy toward the patient, which would cause distrust [18, 29]. Van der Niet and Bleakley emphasized the intricate structure of clinical care based on clinical intuition and argued that this could not be accomplished by technological care [30]. Mehta defined this amorphous quality of intuition as the “art of care”, which he concluded was not possible with AI [20]. The concern about a reduced need for physicians and unemployment might stem from students’ unpreparedness for AI technology in medicine. Nearly half of the students believed that incorporating AI into medicine would devalue the profession, diminish its humanistic component, and erode confidence in patient-physician interactions. These are not negligible concerns. Rather than being dismissed as unjustified reactions, they must be addressed both by medical education and by regulations aiming to protect those values and the fiduciary nature of the profession.

On the need for education

We also examined the students’ level of education on AI and their thoughts on the specific educational topics that should be integrated into the medical curriculum. The study revealed that the vast majority of participants had not received a structured and consistent education about AI; only 2.8% reported feeling educated about the application of AI in medicine. Likewise, just one-third of respondents answered affirmatively to the question “Can you assess the reliability of a diagnostic application using AI?”. Additionally, despite future physicians’ responsibility to provide understandable and reliable information to their patients about AI applications in medicine [7, 14, 20, 30], just 6.0% of the participants felt qualified to inform patients about the features and hazards associated with AI technologies. Moreover, among the possible influences of AI in medicine, the lowest agreement was with the statement “Violations of professional confidentiality may occur more”. This is a remarkable finding, since protecting confidentiality is among the prominent concerns and one of the most important problem areas of the Big Data age. It is well known that healthcare data is one of the most valuable kinds of data, since its abuse or breach could be very harmful to individuals and society [31]. Failure to be aware of that kind of risk shows a serious need for specific education for medical students. Participants’ overconfidence in protecting professional confidentiality and their feeling of incompetence regarding informing patients both signify a need for education. Since the first study on this subject by Pinto dos Santos et al. was published in 2019, similar results have been reported in investigations undertaken in other nations [15, 17,18,19, 21, 28]. In a recent review, Grunhut et al. wrote that “Students’ knowledge of AI is alarmingly low and insufficient to become future physicians” [14].
As Sapci and Sapci revealed in their systematic review, the integration of AI training into medical and health informatics curricula is an important need for future physicians [32]. A recently developed scale (MAIRS-MS) to measure medical students’ readiness for AI in medicine could provide a starting point in that regard [33].

What to teach and how

In parallel with the lack of education and the feeling of incompetence, the students thought that AI should be part of medical training, as revealed in other studies as well [15,16,17, 19]. Although there are some recommendations in the literature for curricular objectives [7, 8, 11, 14, 32, 34], Lee et al. concluded in their review that almost all curriculum recommendations lacked specific learning outcomes and were not based on a particular educational theory [11]. Taking medical students’ opinions into consideration could be useful for developing a consensus on desirable learning outcomes and an appropriate educational theory. Wood et al.’s study with 117 medical students is, to the best of our knowledge, the only one that investigated the importance of AI topics in the eyes of the students [17]. In that study, medical genetics and genomics, radiology and digital imaging, individualized health data/device monitoring, and disease prediction models were regarded by the students as the most important among the seven topics. As a contribution to the findings of that study, we found that the students expressed a desire to gain specific knowledge and skills on many more topics related to AI, such as applications for assisting clinical decision-making and reducing medical errors, AI-assisted emergency response, and AI-assisted risk analysis for diseases.

In addition, we found that the students would also like to be trained on the ethical issues that may arise from AI applications; they regarded this as one of the most important topics. This expectation echoes Grunhut et al.’s question: “How can a physician untrained in the field of AI expect to navigate ethical scenarios such as if a computer algorithm predicts a high chance of death for a patient?” [14]. AI in medicine will inevitably raise new ethical challenges alongside the traditional ones; therefore, the knowledge and skills needed to prevent and solve ethical problems must be an essential part of any educational endeavour. In that regard, the AMEE guide on artificial intelligence in medical education recommends that “Complex issues already inherent in medical informatics’ ethics need to be built into medical AI as guiding principles. Only by including these ethical principles into AI, can AI move from Artificial Intelligence to Artificial Wisdom” [8]. Lee et al. summarized their review of the recommendations in the literature as follows: “the ethical and legal implications of AI systems were considered essential in ensuring safe and informed use of AI systems, and specific learning objectives should include (1) frameworks to approach AI ethics and (2) facilitating discussions of important AI ethics topics like liability and data privacy” [11].

As for methodology, Wartman and Combs proposed an education model aimed at providing the ability to integrate and use information from ever-increasing sources in an effective way, replacing the current medical education model, which is largely based on rote learning [6]. According to Grunhut et al., the curriculum should be incorporated through methods previously proven in similarly drastic curricular changes [14]. Cross-disciplinary courses, small-group sessions, experiential learning that gives students opportunities to work directly with AI tools, e-modules, interactive case-based workshops, self-learning modules, and student site visits to learn about the creation of AI products are among the suggested methods for teaching students AI basics and improving their understanding of AI ethics [11, 19, 34]. Multidisciplinarity is regarded as crucial when implementing those learning strategies. The AMA Council encourages the review of medical curricula and urges medical school deans to be proactive in recruiting non-clinicians such as data scientists and engineers [14]. Establishing partnerships with institutes across computer science, biomedical engineering, the basic sciences, and public health, and organizing ‘hackathons’ and ‘datathons’ in collaboration with computer science and engineering students, are suggested in the literature in that sense [11, 34].

Besides developing the content and methodology of specific education, adapting to those changes could be one of the greatest challenges for today’s medical educators. Grunhut et al. state that medical school faculty simply have no understanding of how to implement these changes [14]. Therefore, educating the educators seems necessary in order to improve traditional approaches and implement this growing set of recommendations.


Limitations

The online survey method and voluntary participation require consideration of the possibility that the survey was completed only by students who were interested in the subject. However, the fact that the students’ self-evaluations are compatible with other similar studies in the literature suggests that they reported the problems in an impartial way. Although it is not possible to generalize the results to the whole country, we reached a high number of participants from different regions. The 12-item questionnaire finalized at the end of the study performed well in terms of factor analysis and validity. The absence of another measurement tool on this subject and the lack of re-testing in the study design made further validation studies impossible. Studies designed with a more homogeneous sample are needed.

