Initially created as a collaborative AI service, Google Bard promises to condense complex information so users can easily get the answers they need and learn more.
However, Bard made headlines during its first demo when it gave an inaccurate response to a prompt about discoveries from the James Webb Space Telescope. Several astronomers quickly pointed out the factual error on Twitter, turning the demo into a public blunder.
Continue reading to learn more about the controversy surrounding Google Bard and how this mistake has affected its bid to change the way we search for information online.
What Can Google Bard Do?
On February 6, 2023, Google announced Bard, opening access to users in the US and UK who registered on a first-come-first-served basis. It was marketed as an experimental conversational AI service powered by a research large language model (LLM), an optimized version of the Language Model for Dialogue Applications (LaMDA).
Simply put, Bard draws on information from the web to answer questions directly, instead of returning links to web pages the way a search engine does. It can also be prompted to write poems and essays. As a large language model, it is expected to generate human-sounding text in an easy-to-understand format.
It also has a “Google it” button, which gives users a direct way to fact-check a reply and look into the sources behind it.
Google Bard’s Factual Errors During First Demo
In its first demo, Bard was given the prompt: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”
Bard responded with several answers, including a claim that JWST took the very first pictures of a planet outside our solar system. In fact, according to NASA’s website, the first pictures of an exoplanet were taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004. Google had posted this exchange on Twitter to promote Bard, and several astronomers replied that the information was incorrect.
Many social media users also quickly pointed out that the company could have fact-checked Bard’s claim by searching for it through Google.
After this factual error, Google’s parent company, Alphabet Inc., reportedly lost $100 billion in market value, with shares falling as much as 9% during trading hours.
What Caused the Factual Errors During Google Bard’s First Demo?
The factual error Bard committed during its first demo highlights the biggest problem with AI chatbots: they can present inaccurate, misleading, or outright false information with complete confidence, which makes them risky to rely on as an alternative to search engines.
While Bard is powered by a research large language model that will be upgraded to newer, more capable models over time, it is still limited by its training data. As with other LLMs, its credibility and flexibility depend on the data it was trained on, and the more people use the model, the better it becomes at predicting which responses are helpful.
Because Bard draws on information from the web to generate responses, one of its limitations is the possibility of repeating misinformation or reflecting bias, since publicly available sources on the internet express a wide range of opinions and perspectives. It is also susceptible to being manipulated into saying alarming things.
What Are the Implications of These Errors?
Google aims to give people reliable and useful information. When one of its major products, like Bard, commits factual errors, the consequences are significant, particularly for the company’s credibility.
With this in mind, Google has emphasized the importance of a rigorous testing process that combines external feedback with internal evaluation, to make sure Bard’s responses meet a high bar for safety and quality.
Like any AI breakthrough, Google Bard has sparked varied opinions among technology experts. Supporters are excited about using the chatbot to automate repetitive tasks. Because it produces results in plain language, it makes browsing faster and more efficient, especially for those curious to learn more about a particular topic.
Critics, meanwhile, point to the dangers of its abilities, including the possibility that AI will replace human research and critical thinking, ultimately costing some people their jobs.
What Is Google Doing To Fix the Errors on Bard?
Google says it is committed to continuing Bard’s development, guided by the company’s AI principles and supported by ongoing research. It continues to rely on human feedback and evaluation, running a series of trials with selected participants.
To further improve Bard’s capabilities, Google plans to add coding support, more languages, and multimodal experiences. It is gathering user feedback along the way and encourages people to sign up on its website as the demo expands to more countries and languages.
Is Bard the Future of Search?
Question-and-answer chatbots are among the first wave of products built on Artificial Intelligence (AI), a technology that uses computers and machines to mimic the capabilities of the human mind.
While Google Bard has been trained on millions of online resources, it still has real limitations, particularly around accuracy, bias, and its vulnerability to producing problematic or sensitive content when prompted. Even so, its first set of demos showed its usefulness in supporting productivity, creativity, and curiosity.
Google Bard has drawn strong reactions from technology experts and ordinary internet users alike. The interesting question, as Google continues to refine the technology behind Bard, is whether it can become the future of search. That remains to be proven in the years to come.