Google responds to ChatGPT with its own conversational AI: Bard

Google has released Bard in response to OpenAI's ChatGPT and Microsoft's Bing Chat. Unlike Bing Chat, Bard does not consult search results; all the information it returns is generated by the model itself. It is designed to help users generate ideas and answer questions, and Google wants Bard to become an integral part of its search engine experience.
In a live demonstration Google gave on March 20 at its London offices, Bard brainstormed ideas for a bunny-themed children's birthday party and offered plenty of tips for caring for houseplants. “We see it as a creative collaborator,” says Jack Krawczyk, director of product at Google.
With this launch, Google has a lot at stake. Microsoft has partnered with OpenAI to try to wrest the top spot in search from Google. Meanwhile, Google stumbled right out of the gate: in a Bard presentation video posted in February, the chatbot was shown making a factual error, and Google's market value fell $100 billion (€92.7 billion) overnight.
Google will not give many details about how Bard works, since large language models, the technology underpinning this wave of chatbots, have become valuable intellectual property. What the company has confirmed is that Bard is based on a new version of LaMDA, Google's flagship large language model, and that Bard will be updated as the underlying technology improves. Like ChatGPT and GPT-4, Bard is fine-tuned using reinforcement learning from human feedback, a technique that trains a large language model to give more helpful and less harmful responses.
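To make the idea concrete: the core of reinforcement learning from human feedback is a reward model trained on pairwise human judgments of which response is better. The toy sketch below is purely illustrative and assumes nothing about Google's or OpenAI's actual systems; real implementations train a neural reward model over text and then fine-tune the chatbot against it, while here we just fit a scalar score per canned response using the standard logistic (Bradley-Terry) preference loss.

```python
import math

# Illustrative sketch of the preference-learning step in RLHF (not any
# vendor's implementation). We learn a scalar "reward" per response from
# hypothetical human preference pairs.

responses = ["helpful answer", "evasive answer", "harmful answer"]
rewards = {r: 0.0 for r in responses}  # learnable score per response

# Hypothetical human feedback: each pair is (preferred, rejected).
preferences = [
    ("helpful answer", "evasive answer"),
    ("helpful answer", "harmful answer"),
    ("evasive answer", "harmful answer"),
]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Gradient ascent on the log-likelihood of the human preferences:
# P(a preferred over b) = sigmoid(reward[a] - reward[b]).
lr = 0.5
for _ in range(200):
    for preferred, rejected in preferences:
        p = sigmoid(rewards[preferred] - rewards[rejected])
        grad = 1.0 - p  # derivative of log sigmoid(r_a - r_b)
        rewards[preferred] += lr * grad
        rewards[rejected] -= lr * grad

# The learned scores now rank responses the way the human raters did;
# a chatbot would then be tuned to prefer high-reward outputs.
ranked = sorted(responses, key=rewards.get, reverse=True)
print(ranked)
```

In a full RLHF pipeline this reward model becomes the training signal for a reinforcement-learning step over the chatbot itself, which is what pushes responses toward "more helpful and less harmful."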
Google has been working on Bard behind closed doors for a few months, but still calls it an experiment. The company is now making the chatbot available free of charge to users in the US and UK who sign up for a waiting list. These early adopters will help test and improve the technology. Zoubin Ghahramani, Google's Vice President of Research, explains: “We will take user feedback and build on it over time. We are mindful of everything that can go wrong with large language models.”
However, Margaret Mitchell, head of ethics at the AI startup Hugging Face and former co-lead of Google's AI ethics team, is skeptical of this framing. Given that Google has been working on LaMDA for years, she says, presenting Bard as an experiment “is a public relations trick that big companies use to reach millions of customers while exonerating themselves if anything goes wrong.”
Google wants users to think of Bard as a companion to Google search, not a replacement. A button below Bard's chat widget reads “Google it.” The idea is to nudge users over to Google to check Bard's answers or find more information. “It's one of the things that helps us offset the limitations of the technology,” Krawczyk says.
“We want to encourage people to explore other places and confirm facts if they're not sure,” says Ghahramani.
This acknowledgment of Bard's flaws has shaped the chatbot's design in other ways as well. Users can interact with Bard only a limited number of times per session, because the longer a language model's conversation runs, the more likely it is to hallucinate. Most of the weird Bing Chat responses people have shared online, for example, came at the end of long conversations.
Google will not confirm what the conversation limit will be at launch, but it will be set low initially and adjusted based on user feedback.
Google is also playing it safe on content. Users may not request sexually explicit, illegal, or harmful material (as judged by Google) or personal information. In my demo, Bard would not give me advice on how to make a Molotov cocktail, a standard restriction in this generation of chatbots. But it also wouldn't provide medical information, such as how to spot the symptoms of cancer. “Bard is not a doctor; it's not going to give medical advice,” says Krawczyk.
Perhaps the biggest difference between Bard and ChatGPT is that Bard produces three versions of each response, which Google calls “drafts.” Users can click between them and pick the answer they prefer, or combine them. The goal is to remind people that Bard cannot generate perfect answers. “When you only see one example, it creates a sense of authority. And we know there are limitations around veracity,” says Krawczyk.
In my demo, Krawczyk asked Bard to write an invitation to his son's birthday party. Bard did, complete with the address of Gym World in San Rafael, California. “It's a place I pass by regularly, but I couldn't have told you the address. That's where Google search comes into play.” Krawczyk googled it to make sure the address was correct. It was.
Krawczyk says that, for now, Google does not want to replace its search engine: “We have spent decades perfecting that experience.” But this may say more about Bard's current limitations than about long-term strategy. In its announcement, Google stated that it will also “integrate LLMs [large language models] into search in a deeper way in the future.”
That may happen sooner rather than later, as Google finds itself in an arms race with OpenAI, Microsoft, and other competitors. “They're going to keep rushing, regardless of whether the technology is ready. As we see ChatGPT integrated into Bing and other Microsoft products, Google will be forced to follow suit,” says Chirag Shah, who studies search technologies at the University of Washington in the US.
A year ago, Shah co-wrote a paper with Emily Bender, who also studies large language models at the University of Washington, laying out the problems with using large language models as search engines. At the time, the idea still seemed hypothetical, so much so that Shah worried they had overreached.
However, this experimental technology has been built into consumer products with unprecedented speed. “We didn't anticipate it would happen this fast. But they have no choice; they have to defend their territory,” Shah says.