How AI can earn our trust
I was chatting with a psychologist recently who is dead set against the use of artificial intelligence (AI), at least in his practice. Nervousness about what he described as “the Big Brother” aspect of AI keeps him at arm’s length from the technology. When it comes to AI trust, he simply doesn’t have it.

He’s not alone.
Professionals around the country are looking at the relationship between AI and mental health with a cautious eye. And for good reason.
When it comes to trust, AI’s “kryptonite” is its ability to collect, analyze, and monitor terabytes of often deeply personal data (right down to how you are feeling today) in near real time. Meanwhile, technology has also multiplied the data collection points, both inside the practice and remotely. It’s not hard to imagine how some patients might feel they are being constantly surveilled.
What about security and privacy?
All that data, of course, can save lives. But it’s no secret that healthcare data holds high value for bad actors, and that value only rises as more personal patient data is collected than ever before. As AI continues to develop, security, privacy, and regulatory measures need to keep pace with it. But are they? AI, and AI trust, are moving targets, and there is valid concern that security, privacy, and regulatory measures remain steps behind.
Recognizing AI’s limitations
These are indeed the Wild West days of AI. But even amid the chaos of the Wild West, there were “AI-like moments” of that era. The invention of the telegraph, for example, revolutionized communications, business, politics, and society in general.
But just as the telegraph had its limitations and required human oversight, so does AI. That is to say, the best use of AI at present is within tools that keep a human at the controls. To name a few: tools that speed up and streamline routine administrative tasks (we’re doing that, with therapists and administrators at the controls); diagnostic machines that accelerate image processing (radiologists are doing that, and they remain in charge); algorithms that vastly improve data management, speed up clinical trials, and get new treatments to market faster (data scientists are doing that, and they are at the helm); AI mental health bots that can…oops, not so fast. That’s a tool that may well be taking AI too far down the mental health road.
Mental health therapy requires (among other things) empathy, an understanding of the human experience, and some capacity for moral responsibility. AI bots have no capability for any of those uniquely human attributes.
Saddle up
I’m only scratching the surface here regarding the Jekyll-and-Hyde applications of AI at present. Let’s have this conversation again in a year and see how things have changed.
For now, the point is this: While I understand the hesitation by some (or by many) to embrace AI, I also think resistance is somewhat futile. Our best chance is to roll up our sleeves, work the problem now, and get the technology right.
Keep an open mind about AI and continue to explore its upside. Apply AI trust where it includes a human touch (let that be the litmus test) and clearly makes the world a better place. Proceed with a sense of responsibility and solid ethics. Be accountable and transparent. This way, maybe AI can earn greater trust.
Final thought for now: There was another invention from the Wild West era that comes to mind, an innovation that revolutionized livestock management by corralling cattle and keeping people safe from stampeding herds. Like the livestock of the Old West, AI could use a little barbed wire around it.
Dr. Samant Virk, MD, is the CEO and founder of MediSprout.