Google employees call its Bard AI chatbot “a liar,” “useless,” and “cringe-worthy”

Facepalm: It’s no secret that Google rushed out the company’s Bard chatbot last month as it tried to keep up in the generative-AI race. But the tech giant should have taken more time with its project. According to a new report, employees told Google not to launch Bard, calling it a “pathological liar,” “cringe-worthy,” and “worse than useless.”

We heard back in February that Google was rushing to launch its own ChatGPT-like technology over fears it would be left behind in the generative-AI revolution following the arrival of OpenAI’s tech. A month later, Bard was shown off to an unimpressed public in a demo that saw the chatbot give a wrong answer. Nevertheless, Google decided to launch Bard in March.

According to a new report from Bloomberg, citing internal documentation and 18 current and former employees, it might have been more prudent for Google to keep polishing Bard before allowing early access to the “experimental” AI.

Some of the criticism included an employee writing, “Bard is worse than useless: please do not launch,” in an internal message group seen by 7,000 people, many of whom agreed with the assessment. Another employee asked Bard for suggestions on how to land a plane, to which it regularly gave answers that would cause a crash. Bard also gave answers on scuba diving that “would likely result in serious injury or death.”

Despite workers’ pleas, Google “overruled a risk evaluation” submitted by an internal safety team warning that Bard wasn’t ready for release.

Bloomberg’s sources say Google, in a bid to keep up with rivals, is offering low-quality information while giving less priority to ethical commitments. It’s claimed that staff responsible for the safety and ethical implications of new products have been told to stay away from generative-AI tools in development.


Google, which removed its "Don't be evil" motto from its code of conduct in 2018, fired AI researcher Timnit Gebru in 2020 after she authored a research paper on the ethical risks of AI language systems. Margaret Mitchell, co-lead of the company's Ethical AI team and a co-author of the paper, was fired a few months later over misconduct allegations.

Meredith Whittaker, a former manager at Google, told Bloomberg that “AI ethics has taken a back seat” at the company.

Google is falling behind in its generative-AI ambitions even as Microsoft pushes ahead with its own. Integrating AI features into Bing has not only pushed the search engine past 100 million daily active users for the first time in its history but has also led Samsung to consider switching from Google to Bing as the default search engine on its devices.