Be careful how you use chatbots, Google tells employees

Despite being one of the biggest advocates of AI, Google has cautioned employees regarding the use of chatbots, including its own Bard programme.
June 19, 2023

Alphabet, the parent company of Google, is advising its employees to exercise caution when using chatbots, including its Bard programme. At a time when Google is expanding the global reach of Bard, the tech giant has reportedly informed its employees not to input confidential information into AI chatbots, citing its longstanding policy on safeguarding data.

Alphabet has also cautioned its engineers against directly using the computer code generated by chatbots. When asked for comment, Alphabet acknowledged that Bard could provide undesired code suggestions but emphasised its usefulness to programmers and expressed its commitment to transparency regarding the limitations of its technology, reported Reuters.

Google’s cautionary approach aligns with an emerging security standard among organisations: warning employees about the use of publicly available chat programmes. Companies worldwide, including Samsung, Amazon, and Deutsche Bank, have reportedly implemented similar guidelines for AI chatbots.

A survey by networking site Fishbowl revealed that approximately 43% of professionals were already using ChatGPT or other AI tools as of January 2023, often without informing their superiors.

Insider also reported that in February 2023, Google instructed employees testing Bard ahead of its launch not to disclose internal information to the chatbot. Google is currently expanding Bard’s availability to more than 180 countries and 40 languages, with the aim of fostering creativity.