If a doctor is still responsible for the final judgment, people will be happy to use medical artificial intelligence.

Editor’s note: This article is from the WeChat public account “Harvard Business Review” (ID: hbrchinese), by Chiara Longoni and Carey K. Morewedge, reproduced with permission.

Our recent research shows that even when medical artificial intelligence outperforms human doctors, patients are reluctant to use the care it provides. Why? Because patients believe their medical needs are unique and cannot be adequately addressed by an algorithm. To realize the many benefits and cost savings that medical AI promises, healthcare providers must find ways to overcome these concerns.

Medical artificial intelligence can reach expert-level accuracy and deliver cost-effective care at scale. IBM’s Watson diagnoses heart disease better than cardiologists do. Chatbots are standing in for nurses to provide medical advice for the UK’s National Health Service. Smartphone applications can now detect skin cancer with expert-level accuracy.

Algorithms can identify eye diseases as accurately as specialists. Some predict that medical AI will reach 90% of hospitals and replace 80% of what doctors currently do. But for that to happen, the healthcare system must overcome patients’ distrust of artificial intelligence.

We explored patient acceptance of medical artificial intelligence in a series of experiments with our New York University colleague Andrea Bonezzi. Our findings, to be published in a paper in the Journal of Consumer Research, show that across a variety of medical procedures, from skin cancer screening to pacemaker implantation, people resist medical AI.

We found that when medical services are provided by artificial intelligence rather than by human healthcare providers, patients are less likely to use the service and are willing to pay less for it. They also prefer to have a human provider perform the service, even if that means a higher risk of misdiagnosis or surgical complications.

We found that the reason is not that patients think AI-provided care is worse, or that they think AI is more expensive, less convenient, or less informative. Rather, resistance to medical AI seems to stem from the belief that AI does not account for a person’s individual characteristics and special circumstances. People view themselves as unique, and that belief extends to their health.

Other people may have colds, but “my” cold is a unique illness that afflicts “me” in a unique way. By contrast, medical care provided by AI is seen as rigid and standardized: suitable for treating the average patient, but inadequate for the unique circumstances of an individual.

Consider the results of one study we conducted. We offered more than 200 business school students at Boston University and New York University a free assessment that would diagnose their stress level and recommend an action plan to help them cope with it.

The result: 40% signed up with their names when told a doctor would make the diagnosis, but only 26% did when the diagnosis would be made by a computer. (In both conditions, participants were informed that the service was free and that the provider had made correct diagnoses and recommendations in 82%–85% of previous cases.)

In another study, we surveyed 700 Americans from an online panel to test whether patients would choose an AI provider when its performance was clearly better than that of human healthcare providers.

We asked study participants to review information about the past performance of two healthcare providers (referred to as Provider X and Provider Y): their accuracy in diagnosing skin cancer, their accuracy in emergency triage decisions, or their complication rates in pacemaker implantation.

We then asked participants to indicate their preference between the two providers on a 7-point scale, with endpoints of 1 (prefer Provider X), 4 (no preference), and 7 (prefer Provider Y). When choosing between two human doctors who performed differently, participants uniformly preferred the better-performing doctor.

But when choosing between a human doctor and an AI provider (such as an algorithm, a chatbot, or a robotic arm controlled remotely by a computer program), participants’ preference for the better-performing AI provider was significantly weaker. In other words, participants were willing to forgo better healthcare in order to have a human, rather than an AI, provide their medical services.

Resistance to medical AI also shows up in willingness to pay for the same diagnostic procedure. We recruited 103 Americans from an online panel for a stress-diagnosis test that could be performed by either an AI or a human provider; both had an accuracy rate of 89%, and we set a reference price of $50.

In the AI-default condition, for example, participants were told that a diagnosis by the AI cost $50, and then stated how much they would be willing to pay for a diagnosis by a human provider. When the default provider was an AI, participants were willing to pay a premium for a human provider; when the default provider was human, they were unwilling to pay nearly as much to switch to an AI provider.

This resistance is driven by how strongly people emphasize the importance of their own special circumstances: the more participants considered themselves unique and different, the more pronounced their resistance to AI providers. We asked 243 Americans from an online panel to indicate their preference between two skin cancer screening providers, both with 90% diagnostic accuracy.

Participants’ sense of their own uniqueness predicted their preference for a human over an (equally accurate) AI provider; it had no effect on their preference between two human providers.

Healthcare providers can take many steps to overcome patient resistance to medical AI. For example, providers can act to increase the perceived personalization of AI-provided care, easing patients’ concern about being treated as an average patient or a statistic.

When we explicitly described an AI provider as able to tailor its recommendation for coronary bypass surgery to each patient’s unique characteristics and medical history, study participants reported being as likely to follow the AI provider’s recommendation as a human doctor’s.

For purely AI-based healthcare services (e.g., chatbot diagnosis, algorithm-based predictive modeling, app-based therapy, wearable-device feedback), providers can therefore emphasize that relevant patient information is used to build each patient’s unique personal profile, including lifestyle, family history, genetic and chromosomal information, and relevant details of their environment.

In this way, patients may feel that the AI provider takes into account the same information a human provider (such as a general practitioner with access to their medical history) would consider. This information can also be used to better explain to patients how their care is tailored to their unique circumstances.

Entirely AI-based services can include cues that signal personalization, such as “based on your unique profile.” Medical institutions can also make a special effort to spread the message that AI providers really can deliver personal, personalized healthcare: for example, by sharing evidence with the media, explaining how the algorithms work, and sharing patient reviews of the service.

Letting doctors confirm an AI provider’s recommendations should also make AI-based medical services easier to accept. We found that if a doctor remains responsible for the final judgment, people are happy to use medical AI. In one study discussed in our paper, participants reported that they would be as likely to use a procedure in which an algorithm analyzes their skin cancer scans and makes recommendations to a doctor, who makes the final judgment, as they would a service provided by a doctor from start to finish.

AI-based healthcare technologies are being developed and deployed at an astonishing rate. AI-assisted surgery can guide surgeons’ instruments during operations and use data from past surgeries to inform new surgical techniques. AI-based telemedicine can provide basic medical support to remote areas where access to healthcare is difficult.

Virtual assistant nurses can interact with patients around the clock, monitoring them and answering questions. But to take full advantage of these and other consumer-facing medical AI services, we must first overcome patients’ reluctance to let an algorithm, rather than a person, determine their medical care.