
Chatbots: Helpful, But a Data Risk

Chatbots are everywhere. They use artificial intelligence and natural language processing to understand customer questions and automate responses to them. ChatGPT has become the most widely used chatbot. Chatbots can provide considerable benefits, but they also pose significant data privacy and security risks. They should be managed and controlled like any application that processes or stores protected health information (PHI).


AI chatbots have the potential to improve patient care and public health by automating routine tasks that consume time and resources. Chatbots can:

- Assist health care providers in giving patients information about a condition
- Schedule appointments and streamline patient intake processes
- Potentially act as virtual doctors or nurses to provide low-cost, 24-hour care
- Provide health education about disease prevention and management
- Assist with wellness matters, such as monitoring steps taken, heart rate, and sleep schedules


However, chatbots require the collection and storage of large volumes of data, raising significant data security and privacy concerns. In health care, successful AI models rely on continuous machine learning, which means constantly feeding large amounts of personal data into the neural networks behind AI chatbots. If the data used to train a chatbot includes sensitive PHI or business information, that data becomes part of the data set the chatbot draws on in the future. It can then be disclosed to other users, well-intentioned or not, and used without authorization.


When users ask a ChatGPT tool to answer questions or perform tasks, they can easily and inadvertently input sensitive personal and business information, putting it in the public domain. For example, a physician may input a patient's name and medical condition and ask ChatGPT to draft a letter to the patient's insurance carrier. The patient's personal information and medical condition, along with the generated output, are now part of ChatGPT's database. The chatbot can use this information to further train the tool and incorporate it into responses to other users' prompts.

OpenAI, ChatGPT's developer, is a private, for-profit company whose business interests and financial growth do not always align with the requirements of HIPAA and other data privacy regulations. OpenAI says it shares users' personal information with third parties in certain circumstances, without further notice unless required by law. Thus, sensitive information can leak into the public domain when using ChatGPT. Once the information is exposed, negative consequences can follow, including privacy breaches, identity theft, digital profiling, discrimination, and embarrassment.


Key security and privacy requirements must be considered before ChatGPT can be integrated into AI-driven health care applications. HIPAA requirements must be met so that electronic PHI and other sensitive data remain private and secure. Users must also anonymize any data fed into the chatbot and implement appropriate data security measures.
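To make the anonymization step concrete, here is a minimal sketch of scrubbing obvious identifiers from a prompt before it ever reaches a chatbot. The patterns below are illustrative assumptions only; a real de-identification pipeline must cover all 18 HIPAA Safe Harbor identifier categories (names, addresses, record numbers, and so on), which simple regular expressions cannot fully do.

```python
import re

# Illustrative patterns only -- NOT a complete HIPAA Safe Harbor
# implementation. Names, addresses, medical record numbers, etc.
# require far more than regular expressions.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace common identifiers with placeholder tokens like [SSN]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

# Hypothetical prompt a user might otherwise paste in verbatim
prompt = ("Draft an appeal letter: patient DOB 04/12/1961, "
          "SSN 123-45-6789, contact jane@example.com.")
print(redact(prompt))
```

The point of the sketch is the workflow, not the patterns: identifiers are stripped or tokenized locally, before the text leaves the organization, so nothing sensitive becomes part of the chatbot's data set.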



In conclusion, it is important to train users on, and regularly reinforce, the importance of protecting sensitive patient and business information. Implementing clear AI usage policies is critical to keeping data safe. For some people, it might be intuitive not to provide PHI or proprietary information to ChatGPT; many others, however, may not fully understand the risks involved. In health care, we should be careful about which AI tools we use and what data we share with them.

 
 
 
