Google warns employees about the use of AI chatbots: why does it have implications for your business?

Imagine accidentally leaking confidential business secrets through a chatbot. This is precisely the scenario that Alphabet Inc., Google's parent company, is warning its employees about, even as it actively promotes its own chatbot, Bard, worldwide.
Google has advised its employees not to enter confidential information into AI chatbots, including its own Bard and the popular ChatGPT. These chatbots, which converse with users and generate responses to a wide range of prompts, can reproduce the data entered into them, creating a risk of leaks.
This warning reflects a growing trend of companies cautioning their staff against using public chat programs. Major corporations such as Samsung, Amazon, and Deutsche Bank have already alerted their employees to the risks of AI chatbots.
Google is also transparent about the limitations of its own technology: it acknowledges that Bard can make undesired code suggestions, while maintaining that the tool still helps programmers.
What can we learn from Google's precautions? While AI chatbots offer significant benefits, such as quickly drafting emails and documents, they can also reproduce misinformation, sensitive data, and even copyrighted passages.
Some companies have already developed solutions for such issues. Cloudflare, for instance, lets businesses tag certain data and block it from leaving in external traffic. Google and Microsoft also offer conversational tools for companies that keep data out of public AI models, though at a higher price.
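To make the idea concrete, here is a minimal sketch of how such a pre-submission filter might work: a hypothetical Python gatekeeper that redacts tagged patterns before a prompt is sent to a public chatbot. The pattern list, placeholder names, and the redact function are illustrative assumptions for this article, not Cloudflare's, Google's, or Microsoft's actual implementation, which would use far richer detection.

import re

# Hypothetical patterns an organization might tag as confidential.
# Illustrative only; real data-loss-prevention products rely on
# classifiers, dictionaries, and document fingerprints, not three regexes.
CONFIDENTIAL_PATTERNS = [
    (re.compile(r"\b[A-Z0-9]{20,}\b"), "[REDACTED_KEY]"),                          # API-key-like tokens
    (re.compile(r"\bproject\s+falcon\b", re.IGNORECASE), "[REDACTED_PROJECT]"),    # internal codename
    (re.compile(r"\b\d{9}\b"), "[REDACTED_ID]"),                                   # employee/customer IDs
]

def redact(prompt: str) -> tuple[str, bool]:
    """Replace tagged patterns before a prompt leaves the network.

    Returns the cleaned prompt and a flag indicating whether anything
    confidential was found (useful for logging or blocking the request).
    """
    flagged = False
    for pattern, placeholder in CONFIDENTIAL_PATTERNS:
        prompt, count = pattern.subn(placeholder, prompt)
        flagged = flagged or count > 0
    return prompt, flagged

if __name__ == "__main__":
    text = "Summarize the Q3 forecast for project falcon, account 123456789."
    cleaned, flagged = redact(text)
    print(cleaned)   # Summarize the Q3 forecast for [REDACTED_PROJECT], account [REDACTED_ID].
    print(flagged)   # True

The design choice here is to intercept prompts before they leave the company network, so the public model never sees the sensitive original; vendor tools apply the same principle at the network edge rather than in application code.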
All of this underscores the need for companies to be aware of the risks of using AI chatbots and to take appropriate measures.