Has ChatGPT Become Today's WebMD?
A few times a week, Jonathan Freidin, a medical malpractice lawyer based in Miami, notices the same curious pattern: clients submitting detailed contact forms to his firm, often adorned with emojis and headings, a clear sign they have copied the text from AI chatbots like ChatGPT. Increasingly, prospective clients tell him they have thoroughly researched their legal situations with AI. 'We have seen an increase in calls where people believe they have a legitimate claim because an AI like ChatGPT indicated potential faults in medical standards of care,' Freidin says. But, he points out, that doesn't always translate into a viable legal case.
Generative AI chatbots are becoming a staple for researching everything from complex medical issues to legal dilemmas. According to a 2025 Clio study, 57% of respondents have used or plan to use AI for legal questions. A Zocdoc survey from the same year found that one in three Americans consult AI for health advice weekly, and ten percent do so daily. Oliver Kharraz, Zocdoc's CEO, expects AI to take over early-stage healthcare tasks like symptom analysis and minor questions. Still, he cautions, it is no replacement for genuine healthcare, where human insight, compassion, and complex decision-making are essential.
With generative AI now widespread, medical and legal professionals increasingly find themselves guiding clients who misread what the tools tell them. The ready availability of information has democratized access, but it has also changed what people expect from professionals: a few simple prompts can make an ordinary user feel like an expert, leaving the actual experts to spend time correcting AI-derived misconceptions.
Navigating the Armchair Expert Phenomenon
Jamie Berger, a family attorney in New Jersey, describes a shift in client interactions. Clients who once knew little about divorce proceedings and relied on their attorneys for guidance now arrive with step-by-step plans that are often generic and ill-suited to their individual cases. When a client's emails suddenly change in tone and come packed with strategic legal plans, Berger can tell the text was written by AI. Rebuilding trust, Berger says, means explaining where the AI's advice diverges from the unique facts of the client's case.
AI chatbots are convincingly eloquent, reading more like seasoned advisors than the blog posts and online medical summaries that preceded them. In a survey by SurveyMonkey and Express Legal Funding, a third of respondents said they might trust ChatGPT over a human expert, particularly for educational and financial guidance, though that confidence dropped for medical and legal topics.
Time Management in Healthcare
In a world where doctors' time is tightly constrained, chatbots excel: they respond instantly and affirmingly, unburdened by the limits of human schedules. With wearable tech generating a steady stream of health data, instant online answers are increasingly the norm. Dr. Hannah Allen of Heidi, an AI medical assistant platform, notes the appeal of that perpetual availability: the AI is always ready to respond, never constrained by office hours.
Heidi Schrumpf of Marvin Behavioral Health has watched patients check her advice against AI, and says the confirmation often strengthens their confidence in her counseling. It gives patients a new way to get an immediate second opinion and to sharpen their questions.
Balancing Trust and Privacy
Despite the growing reliance on AI for guidance, trust issues persist. A 2024 KFF poll found that while 17% of people use AI chatbots monthly, over half doubt the accuracy of the information they provide. Without a human in the loop, the advice stays general and lacks nuance, and oversharing personal details can compromise confidentiality and legal protections, notes Beth McCormack of Vermont Law School.
OpenAI representatives stress that ChatGPT is not intended to serve as a legal or medical advisor but as a supplementary resource for understanding complex topics; guidance that requires formal qualifications should come from licensed professionals.
AI: A Bridge or a Gap?
AI tools can't replace professionals, but they offer critical support where resources are scarce. Their ability to break down dense medical and legal jargon makes them invaluable to people who can't afford immediate legal help, says Golnoush Goharzad, a California-based lawyer. From self-representation at the courthouse to advice on manageable medical issues, AI lowers barriers to access.
Yet the rush toward AI-led advice can be misguided. Goharzad recounts confused conversations with people set on legal actions an AI suggested, actions that often make little practical sense. As AI becomes entrenched in how people do their research, the way forward is recognition and guidance rather than opposition. Clinicians like Schrumpf see AI as an ancillary tool: useful, but not infallible.