Google launched its AI chatbot, Bard, in February of this year. Initially positioned as an alternative to traditional search engines, Bard was expected to revolutionize the way people find information. As time passed, however, it became evident that Bard remains a work in progress and that its reliability is not guaranteed. The chatbot has faced criticism over its accuracy, prompting Google UK’s boss to advise users to cross-check Bard’s responses against the conventional Google search engine.
In an interview with the BBC, Google UK Managing Director Debbie Weinstein acknowledged that Bard might not always provide trustworthy information. She stressed that Google understands the importance of being a reliable source of accurate information and urged users to turn to Google’s primary search engine when seeking specific details. Her statement highlights Bard’s current limitations and suggests it is not yet the go-to platform for critical information.
Reports emerged in April revealing internal concerns among Google employees about Bard’s responses. According to Bloomberg, 18 current and former employees worried that the chatbot was providing low-quality information. The concern appeared to stem from Bard’s race to keep up with competitors, potentially at the expense of Google’s ethical commitments. These internal discussions raised questions about the quality of the information Bard provides and its adherence to Google’s stated values.
More recently, the trainers responsible for teaching Bard came forward with grievances of their own. Contract workers disclosed that they were overworked, underpaid, and stressed while reviewing Bard’s answers. The workload and complexity of their tasks increased significantly after Google entered its competitive race with OpenAI. Workers without relevant training were expected to assess answers across diverse subjects, from medicine to law. The lack of appropriate guidance, coupled with extremely tight review deadlines, compounded the trainers’ concerns.
One contractor said, “As it stands right now, people are scared, stressed, underpaid, and don’t know what’s going on.” This culture of fear and uncertainty has harmed both the quality of the work and the teamwork among trainers. A Google contract worker even warned Congress about the potential risks, stating that Bard could become a “faulty and dangerous product” if content reviews continue under such intense pressure. Remuneration was also an issue, with some contract workers earning as little as USD 14 per hour.
Bard set out with ambitious aspirations to change the way people access information, but its journey has been riddled with challenges and uncertainties. Despite continuous improvements, its reliability remains questionable, and Google’s UK boss urges users to rely on the traditional search engine for crucial information. The concerns raised by both Google employees and contract workers call for a reevaluation of Bard’s development process to ensure it delivers high-quality, dependable responses.