If you’re using AI chatbots in a regulated industry (like healthcare or banking), have your end-users shared any discomfort with using them or distrust of their output?
Global Chief Cybersecurity Strategist & CISO in Healthcare and Biotech · 21 days ago
Yes, end-users have expressed discomfort and distrust of AI chatbots across industries, and with good reason! Concerns often stem from data breaches and inaccurate responses. It’s crucial to address these issues by implementing strong data security measures, clearly communicating them to users, ensuring response accuracy, seeking feedback, and being transparent about data handling practices.

Chief Data Officer in Media · 21 days ago
I have heard both concerns from multiple clients. Building small (100M–1B parameter) language models that run on low-cost hardware works very well. Developing a single platform where all ML and AI tools are available also helps keep shadow tool usage to a minimum.
We don't currently recommend any medicines or address health-related issues over chat, but we plan to venture into that area soon.