Meta is putting its latest AI chatbot on the web so the public can talk to it

Meta’s AI research labs have created a new state-of-the-art chatbot and are letting members of the public talk to the system to gather feedback on its capabilities.

The bot is called BlenderBot 3 and is accessible on the web. (Though, right now, it appears that only US residents can access it.) BlenderBot 3 is capable of engaging in general chitchat, Meta says, but can also answer the kinds of questions you might ask a digital assistant, “from talking about health food recipes to finding child-friendly services in the city.”

The bot is a prototype and builds on Meta’s previous work with what are known as large language models, or LLMs — powerful but flawed text-generation software of which OpenAI’s GPT-3 is the best-known example. Like all LLMs, BlenderBot is initially trained on large text datasets, which it mines for statistical patterns in order to generate language. Such systems have proved extremely flexible and have been put to a wide range of uses, from generating code for programmers to helping authors write their next bestseller. However, these models also have serious flaws: they regurgitate biases in their training data and often invent answers to users’ questions (a big problem if they’re going to be useful as digital assistants).

The latter issue is something Meta specifically wants to test with BlenderBot. A notable feature of the chatbot is that it’s capable of searching the internet in order to talk about specific topics. Even more importantly, users can then click on its responses to see where it got its information from. BlenderBot 3, in other words, can cite its sources.

By releasing the chatbot to the general public, Meta wants to collect feedback on the various problems facing large language models. Users who chat with BlenderBot will be able to flag any suspect responses from the system, and Meta says it’s worked hard to “minimize the bot’s use of vulgar language, slurs, and culturally insensitive comments.” Users will have to consent to having their data collected, and if so, their conversations and feedback will be stored and later published by Meta for use by the general AI research community.

“We are committed to publicly releasing all the data we collect in the demo in the hopes that we can improve conversational AI,” Kurt Shuster, a research engineer at Meta who helped create BlenderBot 3, told The Verge.

An example of a conversation with BlenderBot 3 on the web. Users can provide feedback and reactions to specific responses.
Image: Meta

Releasing prototype AI chatbots to the public has, historically, been a risky move for tech companies. In 2016, Microsoft released a chatbot named Tay on Twitter that learned from its interactions with the public. Somewhat predictably, Twitter’s users soon coached Tay into regurgitating a range of racist, anti-Semitic, and misogynistic statements. In response, Microsoft pulled the bot offline less than 24 hours later.

Meta says the world of AI has changed a lot since Tay’s malfunction and that BlenderBot has all sorts of safety guardrails that should stop Meta from repeating Microsoft’s mistakes.

Crucially, says Mary Williamson, a research engineering manager at Facebook AI Research (FAIR), while Tay was designed to learn in real time from user interactions, BlenderBot is a static model. That means it’s capable of remembering what users say within a conversation (and will even retain this information via browser cookies if a user exits the program and returns later), but this data will only be used to improve the system further down the line.

“It’s just my personal opinion, but that [Tay] episode is relatively unfortunate, because it created this chatbot winter where every institution was afraid to put out public chatbots for research,” Williamson told The Verge.

Williamson says most chatbots in use today are narrow and task-oriented. Think of customer-service bots, for example, which often just present users with a preprogrammed dialogue tree, narrowing down their query before handing them off to a human agent who can actually get the job done. The real prize is building a system that can conduct a conversation as free-ranging and natural as a human’s, and Meta says the only way to achieve this is to let bots have free-ranging and natural conversations.

“This lack of tolerance for bots saying unhelpful things, in the broad sense of it, is unfortunate,” says Williamson. “And what we’re trying to do is release this very responsibly and push the research forward.”

In addition to putting BlenderBot 3 on the web, Meta is also publishing the underlying code, training dataset, and smaller model variants. Researchers can request access to the largest model, which has 175 billion parameters, via a request form.
