Google Warns Its Staff About Chatbots

by Improve the News Foundation
Image credit: Wikimedia Commons

Facts

  • As it markets its own chatbot, Bard, around the world, Google parent company Alphabet Inc. is cautioning its employees about the safe use of artificial intelligence (AI) chatbots, including Bard, according to a Reuters report that cites four anonymous sources.[1]
  • The company reportedly urged its software engineers to avoid directly using computer code generated by chatbots, saying that AI can reproduce the data it receives during training and thereby risk a leak. Such a leak recently occurred when a Samsung engineer uploaded code to ChatGPT.[2]
  • Such concerns have become prevalent throughout the corporate world, with many global companies implementing guardrails on AI chatbots. A survey of 12,000 US professionals found that 43% were using ChatGPT or other AI tools, often without telling their boss.[3]
  • Apple last month barred its employees from using both ChatGPT and GitHub's Copilot, while Amazon has banned staffers from sharing any code or confidential information with ChatGPT. Banks, including JPMorgan Chase, Bank of America, and Citigroup, have issued similar restrictions.[2]
  • As Google looks to advance its global rollout of Bard, the chatbot is currently unavailable in the EU, where the Irish Data Protection Commission, Google's lead privacy regulator in the bloc, recently said the company hadn't adequately detailed measures to protect citizens' privacy.[4]
  • Some speculate that Google's employee policy, which it also justifies by citing Bard's ability to make undesired code suggestions, is likely an attempt to protect its reputation as it competes against the Microsoft-backed ChatGPT for potentially billions of dollars in investment and advertising revenue.[1]

Sources: [1] Reuters, [2] Forbes, [3] New York Post, [4] The Wrap.

Narratives

  • Pro-establishment narrative, as provided by CSO. Not only has Google been transparent about the risks of this emerging technology, but it has also been at the forefront of implementing safeguards against risks such as data theft, data poisoning, and malicious chatbot prompt injections by bad actors. The company certainly aims to profit from AI, but it's also spending loads of cash to ensure public safety and privacy.
  • Establishment-critical narrative, as provided by VentureBeat. Tech executives leading the AI discussion believe themselves to be prophets with the power to describe the end times while also offering the guide to salvation. While they issue rhetoric on the potential "existential threat" AI poses, they continue to push for the universalization of the technology, claiming it will bring us to a technocratic Garden of Eden. At this point in time, we shouldn't trust their chatbots or their self-proclaimed wisdom.
