The Pandora’s Box of AI in Healthcare
7/9/25
Where Fear Meets Function
Pierre Hollenbeck

Is artificial intelligence the Pandora’s box of healthcare? Watching the mix of fascination and fear the technology triggers among providers, the parallel with the famous Greek tale is hard to miss.
Rarely has a technology inspired so much fascination, and so much anxiety, at once. Our experience in highly specialised medical customer service is a perfect use case for this paradox. Medicine is, in essence, one of the few fields always treated as a priority for technological advances. Yet with AI the opposite is happening: massive popular adoption, with the technology embedding itself in the everyday life of consumers, while it stays outside the clinic doors. So what exactly fuels this resistance in a field that has always had to be at the forefront of technical evolution? Our use case offers some answers.
Why Such a Strong Backlash?
Let’s start with a simple analogy. When a company hires a new employee, a trial period is systematically granted, for the company as well as for the employee, to assess the learning capacity and the compatibility of the pair. Mistakes are made, lessons are learned, and hopefully performance improves. Trial and error is part of the process; no one panics when a human trainee slips up.
Now introduce AI into the process, and expectations are radically different.
Expectations skyrocket and tolerance drops to zero. An AI mistake is viewed as a threat, a liability, and often a justification to pull the plug on the entire project. Why such a visceral reaction? Why does the mere possibility of an algorithmic error feel so much more dangerous than the human kind?
AI is, and should be treated as, an employee. A super-powerful one, yes. A very fast one, yes. An ever-learning one, yes. A cheaper one, yes. But an omniscient one, no. That is why we speak of AI training. Training means the same thing as with any other employee: errors, correction, and acknowledgment of mistakes and limitations.
AI Fear in Medicine: Will Skynet Kill My Patient?
Let’s look at the fears more closely. The recurring panic: “Will AI replace my job?” The reality is much subtler: “Your job won’t be replaced by AI, but it might be replaced by someone using AI.” The threat isn’t the machine itself, but the shift in skills and adaptability it demands. And that applies to every current position in every field.
Physicians, in particular, approach AI differently. They carry the heavier burden of responsibility for human life. But no one believes, least of all the physicians themselves, that AI is holding the scalpel. Yet. The physician’s worry about AI rests on the core starting point of any treatment: the diagnosis. Their fear is also driven by the same factors that fuelled their fear during the internet revolution.
“Doctor, I checked on the internet, and I read that it could be cancer.” (This is an example, but I’m sure that this is quoting some patient, somewhere…)
Physicians dread being forced to contradict a machine’s recommendation in front of a patient—a dynamic that feels both humiliating and risky. Strangely enough, these same doctors are comfortable prohibiting a new staff member from making diagnoses until they’re ready. But when it comes to AI, they fear losing control, as if, once set loose, the algorithm will become unstoppable.
Will AI become some kind of Dr. Strangelove (yes, Kubrick also explored this with HAL in 2001: A Space Odyssey, but Dr. Strangelove felt more appropriate), making autonomous clinical decisions and holding the surgical scalpel? Hardly. But the very presence of such questions shows just how much our collective imagination is fueled by dystopian scenarios, not by the real-world improvements that AI could offer.
Medical Tourism: Diagnosis at a Distance and the Fear of Discrepancy
Nowhere is this anxiety more pronounced than in medical tourism—a sector built on remote communication and virtual assessment. Every provider knows the reality: the diagnosis made at a distance is often adjusted in person (“This doesn’t look like the pictures!”). This is not a new risk; it’s an old reality of long-distance care. But somehow, when AI is involved, the margin for error becomes intolerable, as explained above.
Let’s flip the script...
Would you allow a brand-new employee, on their very first day, to confirm a patient’s surgery and book their flight? Of course not. Then why would you allow AI to do so?
The real question isn’t about capability, but about control and boundaries. Trial periods and boundaries are an accepted part of employing and working with humans. After all, this is what we have all been doing all our lives.
Here comes this new employee… and people are whispering that it might be the best of us… more intelligent than us…
Well, no matter the intelligence (an artificial one, of course), and just as new staff go through a trial period, AI training is a real thing. Once working side by side with AI has outgrown the fear of the unknown, we may be looking at a multi-sector, fast-paced, horizon-changing shift in habits. Medical field included.
In Homo Deus, Yuval Harari explains that machines will assist mankind to a higher level, just as technology has extended our lifespan as quickly as science has advanced. AI will assist the workplace in every sector and at every level: managerial, workforce, decision-making.
The Real Challenge: Redefining the Human Role
Ultimately, the question is not whether AI will replace humans, but how it can augment them—and under what conditions. The real danger is not in opening Pandora’s box, but in refusing to open it at all, paralyzed by fear and fantasy. AI in healthcare is neither a monster nor a miracle. It is a tool, and it’s up to us to decide how—and when—to use it.
What Does AI Actually Change? The Esthetic Planet Experiment
It’s easy to discuss the risks and fears around AI in healthcare, but what about the reality on the ground—what actually changes when a marketplace like Esthetic Planet embraces artificial intelligence?
Let’s put theory aside and look at a concrete case study. Esthetic Planet is a medical tourism platform operating in three primary languages: English, French, and Italian. Historically, the platform faced the same challenge as any international operation: limited human resources meant limited capacity. There simply weren’t enough multilingual agents available, around the clock, to field questions, nurture leads, and convert inquiries into actual bookings. Inevitably, this led to bottlenecks: delayed responses, missed opportunities, frustrated customers. How do you provide world-class service in multiple languages when your human team is finite—and global demand never sleeps?
Enter Nalys, an AI-driven assistant, deployed on Esthetic Planet’s WhatsApp channel.
Here’s where the paradigm shifted: instead of steering consumers through a traditional account-creation process—complete with forms, passwords, emails, and all the friction that entails—Esthetic Planet began redirecting traffic straight to WhatsApp. Why? Because it’s instant, familiar, and brutally efficient. For the consumer, there’s no need to “set up an account” or jump through administrative hoops; instead, you simply open a chat and ask whatever you want, in your own language, at your own pace.
The impact? Nothing short of a revolution. For the first time in its history, Esthetic Planet began answering leads 24 hours a day, seven days a week. Language barriers fell away: with AI handling the first line of communication, queries in French, Italian, English, Czech, Arabic, Spanish, Russian—all received immediate, competent responses. Clients who once would have given up at the first sign of delay suddenly had a personal assistant in their pocket.
The Out-of-the-Box Promise—What About the Failure?
What about the trial period of AI? Did it magically work straight out of the box?
Of course not! And nothing speaks better than experience. Let’s look at the most noticeable failures:
Failure 1: Missing information on lead generation. The AI would simply “forget” to ask for the client’s email or name.
Fix: The collection of this core data was made compulsory before a customer request can be completed.
Failure 2: The AI sent the wrong webpage link to a customer (a contact page).
Fix: The URL logic of the site was explained to the AI; central pages such as the sales conditions, guarantee, contact, and legal pages were described in detail.
Failure 3: Appointment booking. After a client insisted on a last-minute appointment, the AI confirmed a meeting with the staff.
Fix: Any definitive confirmation of appointments is now forbidden. The AI only collects availability and escalates to a human.
Failure 4: A client insisted on receiving all documents (quotes, appointments) through WhatsApp, which is outside company protocol and against the privacy policy. The AI, instructed to be empathetic and to always seek solutions with the customer, accommodated the request.
Fix: The company’s strict document-exchange policy through the client account was spelled out to the AI, along with its advantages and clear boundaries.
In each of these examples, the machine failed, learned, and did not repeat the mistake. Just like a new employee. Notice that all the failures are data-driven: feeding the AI the right data is the pillar of AI training, just as it is with a new employee.
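The fixes above boil down to a small set of guardrails: core lead data must be collected before a request is completed, appointments are never confirmed autonomously, and document exchange is redirected to the client account. A minimal sketch of how such rules might be enforced around an AI assistant (all names and return values here are hypothetical illustrations, not Nalys’s actual implementation):

```python
# Hypothetical guardrail layer, illustrating the fixes described above.
# Field names, intents, and routing strings are assumptions for the sketch.

REQUIRED_LEAD_FIELDS = ("name", "email")  # Fix 1: compulsory lead data


def missing_lead_fields(lead: dict) -> list:
    """Return the required fields the assistant still has to collect."""
    return [f for f in REQUIRED_LEAD_FIELDS if not lead.get(f)]


def handle_request(intent: str, lead: dict) -> str:
    """Route a customer request according to the guardrail rules."""
    missing = missing_lead_fields(lead)
    if missing:
        # Fix 1: never complete a request before the core data is collected.
        return "collect:" + ",".join(missing)
    if intent == "book_appointment":
        # Fix 3: record availability, but confirmation is a human decision.
        return "escalate:human_scheduler"
    if intent == "send_documents":
        # Fix 4: documents go through the client account, never the chat.
        return "redirect:client_account"
    # Everything else the assistant may answer directly.
    return "answer"
```

For example, `handle_request("book_appointment", {"name": "Ana", "email": "ana@example.com"})` escalates to a human scheduler, while the same intent with a missing email first asks the assistant to collect it. The point of the sketch is that the “training” applied here is not mysterious: it is explicit, testable policy layered around the model.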
What Happens After the Trial Period?
The numbers speak for themselves. Within two months, incoming leads jumped by 250%. Conversations that once languished in email inboxes or got lost in translation now happened in real time, with no lost momentum. Market share began to climb. The diversity of English- and Italian-speaking clients soared, quadrupling in volume. Italian leads, in fact, are now nearly as numerous as Esthetic Planet’s historically dominant French-speaking clientele—a demographic shift the company had never seen before. Even more telling: bookings began arriving from as far as Australia, a market previously out of reach.
So what does this mean for the industry? It’s more than just statistics. It’s a direct challenge to the idea that AI is a cold, impersonal barrier. In practice, the right deployment of AI can make a service more human, more direct, more accessible—precisely because it removes the friction that so often blocks genuine exchange. Instead of “dehumanizing” customer service, AI gave Esthetic Planet the bandwidth and flexibility to be present, instantly, in any language the world demanded.
The Box Is Open
Of course, this is only the beginning. The real test, as always, will be whether the human team can build on this momentum, using AI as a tool, not a crutch. But one thing is clear: Pandora’s box is open, and there is no closing it again.
AI is not here to replace what is uniquely human. It is here to amplify it, to accelerate the process from idea to expression. In healthcare, in customer service, in nearly every industry you can imagine, this will be the real revolution: AI will not erase the need for human expertise or judgment. It will make those human skills more valuable, more visible, and more effective than ever before.
So as we peer into the future—two years, five years, a decade from now—one thing is certain: AI won’t take the place of the human voice. It will help that voice reach further, move faster, and shape the conversation in ways we’re only beginning to imagine. The box is open. The possibilities are ours to invent.
Pierre Hollenbeck

Pierre Hollenbeck is the founder of Medcom, a company specializing in marketing and support for international healthcare providers entering the European market. Medcom was established in April 2004 and quickly became a pioneer in the French market by offering the first hair transplant services in Turkey. In the following years, the company diversified its offerings into dental and cosmetic surgery, and now operates in 17 countries.