The use of artificial intelligence (AI) in business has drawn considerable media attention lately. More and more businesses are using AI to analyze large datasets, simplify complex tasks, enhance customer interactions, and streamline operations.
So, how can healthcare providers, such as pharmacies, benefit from using artificial intelligence and specific tools such as ChatGPT?
What is AI and why should healthcare providers care about it?
AI, and more specifically generative AI, holds promise for use in healthcare. Generative AI is a type of artificial intelligence that can write code, power customer service chatbots, and create new content such as text or images. It can ‘learn’ and adapt without further instruction, drawing on patterns in the large amounts of data it accesses.
The most talked-about generative AI tool is OpenAI’s ChatGPT. Launched in November 2022, ChatGPT has dominated the AI tool market with more than 60% market share and recorded more than 14 billion website visits in its first year.
AI as an Experiential Learning Aid in a Practical Setting
The University of Waterloo’s School of Pharmacy in Canada began using generative AI in early 2018 to bolster the experiential learning that trains its pharmacy students and nurse practitioners. Clinical associate professor Dr. Jeff Nagge designed an AI-assisted course on warfarin management, a topic that typically requires weeks of experiential learning, creating AI versions of patients that present the most typical scenarios a pharmacist faces when managing warfarin.
“I was initially a skeptic,” Dr. Nagge says. But using AI to teach students proved to be “just as good or better than traditional learning,” he says.
The school’s experience demonstrates that generative AI, when implemented with appropriate guidelines, can be an invaluable tool in preparing pharmacy students for real-world situations.
How can ChatGPT be potentially used in healthcare and pharmacy?
ChatGPT and similar tools can be used in healthcare in the following ways:
- As virtual assistants in telemedicine – ChatGPT can power chatbots and virtual assistants that help schedule patient appointments.
- As support for clinical decisions – AI can help providers with timely information retrieval, case study analysis, drafting medical documentation, and even translating medical information into different languages.
- As a support tool for better provider-patient communication – ChatGPT can effectively modify the tone of messages to suit different patients based on culture, language, or socioeconomic factors (a short sketch follows this list).
- As recordkeeping assistants – ChatGPT can transcribe and create automated summaries of patient medical histories, and its voice-to-text ability lets it function as a dictation device.
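As a minimal sketch of the communication use case above, the snippet below uses the OpenAI Python SDK to rewrite a pharmacy notice at a plainer reading level. The model name and prompt wording here are assumptions, not a prescribed setup, and no patient identifiers should ever be included in a request to an external service.

```python
# Hypothetical sketch: rewriting a pharmacy notice in plain, patient-friendly
# language via the OpenAI Python SDK. Model choice and prompt wording are
# assumptions; never include protected health information in the request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_message(message: str, reading_level: str = "6th grade") -> str:
    """Ask the model to rephrase a message at a given reading level."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[
            {"role": "system",
             "content": f"Rewrite pharmacy messages at a {reading_level} "
                        "reading level. Keep all medication instructions intact."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

print(simplify_message(
    "Take one tablet by mouth twice daily with food; discontinue and "
    "contact the pharmacy if gastrointestinal discomfort persists."
))
```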
In a community pharmacy setting, generative language models like ChatGPT can significantly improve medication adherence and streamline a pharmacist's workflow. Here are some effective ways to integrate these technologies:
- Inventory management - AI can assist in managing inventory by predicting medication demand.
- Medication synchronization - AI can help in synchronizing medication refills, so patients can pick up all their medications at once. This reduces the number of pharmacy visits and makes it easier for patients to keep track of their medication schedule.
- Prevention of medication errors - AI can cross-check patient information, prescription details, and pharmacy stock, helping to spot potential problems such as wrong dosages, drug-drug interactions, drug-disease contraindications, and duplicate therapies (a simple illustration follows this list).
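To make the error-prevention item concrete, here is a simple, self-contained illustration of the cross-checking logic. The interaction table is a toy example; a production system would rely on a curated clinical interaction database, not a hard-coded dictionary.

```python
# Toy sketch of prescription cross-checking. The interaction table below is
# illustrative only; real systems use curated clinical interaction databases.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): "increased bleeding risk",
    frozenset({"warfarin", "fluconazole"}): "elevated INR",
}

def check_prescriptions(current_meds: list[str], new_drug: str) -> list[str]:
    """Return warnings for duplicate therapy and known drug-drug interactions."""
    warnings = []
    if new_drug in current_meds:
        warnings.append(f"Duplicate therapy: patient already takes {new_drug}.")
    for med in current_meds:
        issue = KNOWN_INTERACTIONS.get(frozenset({med, new_drug}))
        if issue:
            warnings.append(f"Interaction: {new_drug} + {med} ({issue}).")
    return warnings

print(check_prescriptions(["warfarin", "metformin"], "ibuprofen"))
# ['Interaction: ibuprofen + warfarin (increased bleeding risk).']
```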
Misinformation and Risk
Given all this potential, what is holding healthcare companies back from fully embracing AI?
There are a number of reasons.
In a May–June 2023 study of Jordanian pharmacists who used ChatGPT-powered chatbots in pharmacy practice, 70% of participants recognized ChatGPT’s advantages as an educational tool for patients, for example in generating marketing, social media, or educational content.
However, respondents were concerned about ChatGPT’s accuracy when providing medicine-related information. More than 55% of participants worried about bias and ‘discriminatory patterns’ the tool could pick up from the data it was trained on.
Experts caution against using ChatGPT or any AI tool to generate a diagnosis. The tool itself notes that “ChatGPT can make mistakes. Consider checking important information,” and when asked about medical conditions, it stresses the importance of “consulting with a healthcare professional.”
At the end of the day, AI tools are just that: tools. They are only as effective as the input you give them. For now, the consensus is that AI is safe for non-critical tasks such as answering general queries and summarizing large amounts of data or text.
Data Breaches, Privacy, and HIPAA
The American Institute of Healthcare Compliance (AIHC) notes the tool’s potential but cautions against using it without safeguards to prevent confidentiality breaches: AI tools such as chatbots can expose users’ personal information, including protected health information (PHI) and personally identifiable information (PII). The AIHC also recognizes that while the Health Insurance Portability and Accountability Act (HIPAA) has been updated since its implementation in 1996, technology has far outpaced it.
Worries about security and breaches do not seem unfounded.
In early 2023, OpenAI, the creator of ChatGPT, identified a bug in the code that leaked user chat histories, billing information, and partial credit card information, highlighting the need for careful planning when dealing with sensitive data.
At present, the two biggest hurdles to fully utilizing ChatGPT, or any other general-purpose AI tool, in a healthcare setting are their lack of HIPAA compliance and concerns about their accuracy in generating medical information.
Taking it to the Next Level: Google’s HIPAA-compliant Med-PaLM 2
In contrast to ChatGPT, Google Bard, YouChat, and other commercially available generative AI tools that competently perform only general tasks, Google’s Med-PaLM 2 was developed specifically for medical tasks. And unlike all other AI tools presently offered, Med-PaLM 2 supports HIPAA compliance.
Med-PaLM 2 is also notable for achieving an unprecedented 86.5% accuracy score on United States Medical Licensing Examination (USMLE)-style questions, a standard benchmark for evaluating medical question-answering performance.
In a clinician-led review of answers to medical questions, Med-PaLM 2’s responses were preferred over real physicians’ answers when evaluated against criteria including scientific factuality, precision, reasoning, bias, and likelihood of possible harm.
While still in development, Med-PaLM 2 is currently being tested at some hospitals, including the Mayo Clinic. Other institutions and providers testing it include pharmaceutical maker Bayer, hospital operator HCA Healthcare, and health informatics company MEDITECH.
As AI tools continue to rapidly evolve, the complexities of deploying the technology to ensure HIPAA compliance are becoming better understood.
Takeaways: How to Deploy AI and Protect Privacy
Here are some important things to consider in protecting PHI and PII when using AI in a healthcare setting:
- Ensuring secure data handling and storage:
- Using encrypted communication channels for transmitting patient information is a must if the AI system is to manage and store patient data securely. Data storage should also follow HIPAA regulations, which means using secure, compliant servers.
- Getting patient consent and providing transparency:
- Explicit consent from patients for using AI in their consultations must be obtained. Patients must be informed about how their data will be used, stored, and protected. Transparency is key in maintaining trust and compliance.
- Minimizing data exposure:
- Ensure the AI system accesses only the minimum patient data needed for a consultation. Limiting data exposure reduces the risk of privacy breaches (see the redaction sketch after this list).
- Having a breach notification plan:
- Having a plan in place for responding to data breaches, including notifying patients and authorities, is required by HIPAA.
- Performing regular compliance audits:
- The AI system and practices for HIPAA compliance should be regularly audited. This includes checking for updates in regulations and ensuring the system is up to date with these changes.
- Training the pharmacy team:
- Training pharmacy staff on how to use AI tools in a way that complies with HIPAA ensures that they understand the importance of patient confidentiality and the proper handling of patient information.
- Implementing patient identification and verification protocols:
- Implementing robust procedures to verify patient identity before any virtual consultation prevents unauthorized access to health information.
- Securing user authentication:
- Use strong authentication methods, such as passwords, biometrics, or two-factor authentication, for both patients and healthcare providers accessing the AI system (a simple example follows this list).
- Collaborating with legal experts:
- Working with legal experts who specialize in healthcare law helps ensure all aspects of your AI implementation are compliant with HIPAA and other relevant laws.
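As one illustration of the data-minimization point above, the pattern below strips obvious identifiers from free text before it leaves the pharmacy’s systems. The regular expressions are simplistic placeholders; proper HIPAA de-identification requires validated tooling and should never rely on ad hoc regexes alone.

```python
import re

# Simplistic, illustrative redaction patterns only. Proper HIPAA
# de-identification requires validated tooling, not ad hoc regexes.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b\d{10}\b"), "[PHONE]"),
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),
]

def minimize(text: str) -> str:
    """Replace obvious identifiers before text is sent to any external AI service."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

note = "Pt DOB 4/12/1961, MRN 884231, reports dizziness since dose change."
print(minimize(note))
# Pt DOB [DATE], [MRN], reports dizziness since dose change.
```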
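And for the authentication point, a second factor can be as simple as verifying a time-based one-time password (TOTP). This sketch uses the pyotp library; how secrets are provisioned and stored is assumed, not shown.

```python
import pyotp  # pip install pyotp

# Assumed: each user's TOTP secret is generated once at enrollment and
# stored securely (e.g., in an encrypted credentials store).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The provisioning URI can be shown as a QR code for an authenticator app.
print(totp.provisioning_uri(name="pharmacist@example.com",
                            issuer_name="Example Pharmacy"))

def second_factor_ok(user_secret: str, submitted_code: str) -> bool:
    """Verify the 6-digit code a user submits alongside their password."""
    # valid_window=1 tolerates one 30-second step of clock drift.
    return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)

# At login, after the password check succeeds:
print(second_factor_ok(secret, totp.now()))  # True for a current code
```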
In summary, while using AI for virtual patient consultations in a pharmacy setting can enhance service quality and accessibility, it must be done with a strong emphasis on security and compliance with healthcare privacy laws. Careful planning and adherence to best practices for data protection are essential to avoid violating HIPAA and to maintain patient confidentiality.