Google Opens Its AI Chatbot, LaMDA, To The Public

Google has opened its experimental AI chatbot to the public. Users can now sign up to interact with the chatbot, which is built on the firm’s conversational language model.

Google Opens Public Access To LaMDA Chatbot 

Google has warned users that its LaMDA (Language Model for Dialogue Applications) chatbot may produce inappropriate or inaccurate content. Users can access the chatbot through ‘AI Test Kitchen,’ an application the firm developed for this purpose.

They can also provide feedback on their experience within the application. According to Google, gradually opening access to some users will help the company learn how to improve the chatbot.

Sundar Pichai, Google’s CEO, noted that ‘AI Test Kitchen’ would give users a feel for what it is like to use LaMDA. Pichai pointed out that such language models hold significant potential because of their open-ended conversational abilities.

Furthermore, the CEO noted that the company had made several improvements to ensure LaMDA was safe and secure. However, he argued that this is only the start of the journey, as the firm still has a long way to go.

“We have made several upgrades to the LaMDA chat by adding more layers of security. However, this does not mean it is entirely secure,” Pichai added.

Public Testing Of AI Chatbots

Meta recently launched its own chatbot, BlenderBot 3, and likewise asked for public feedback. The bot caused an uproar on the internet when it gave several wrong answers.

The bot still referred to Donald Trump as the sitting US president and described Mark Zuckerberg as “manipulative and creepy.” Meta responded that AI chatbots can generate offensive and biased remarks.

Hence, the company is using this feedback to improve the chatbot.

In July, Google fired one of its engineers for breaching a confidentiality agreement after he publicly claimed that the firm’s conversational AI had feelings and emotions.

Meanwhile, Mary Williamson, a research manager at Facebook AI Research, noted that firms are reluctant to test their chatbots in public.

This reluctance stems from the damage a misbehaving bot could do to a company’s image. Still, Williamson argued that the best way to improve chatbots is to release them for public use.

While various companies are working to ensure their chatbots do not make offensive remarks, she noted, opening public access remains the most effective way to uncover their flaws.

The two launches also differed in approach: Meta released BlenderBot with few restrictions, while Google has limited conversations with LaMDA to a few guided functions.



