During the first of two Google I/O keynotes this week, Google announced LaMDA 2, the follow-up to LaMDA, the AI system the company introduced at Google I/O 2021. Short for Language Model for Dialogue Applications, LaMDA 2 can break down complex topics into straightforward, digestible explanations and steps, as well as generate suggestions in response to questions, Google claims.
LaMDA 2, an AI system built for “dialogue applications,” can understand millions of topics and generate “natural conversations” that never take the same path twice, Google says. Like most AI systems, LaMDA 2 learns how likely words are to occur in a body of text — usually a sentence — based on many, many examples of text. Examples come in the form of documents within training datasets, which contain terabytes to petabytes of data scraped from social media, Wikipedia, books, software hosting platforms like GitHub and other sources on the public web.
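That core training objective — learning how likely words are to occur given the words around them — can be illustrated with a deliberately simplified sketch. The toy bigram model below counts which words follow which in a tiny hand-written corpus; real systems like LaMDA use large neural networks trained on web-scale data, not count tables, but the underlying idea of estimating next-word probabilities from examples is the same:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale training data described above.
corpus = [
    "the dog chased the ball",
    "the dog fetched the ball",
    "the cat chased the mouse",
]

# Count how often each word follows each preceding word (a bigram model).
follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, curr in zip(words, words[1:]):
        follow_counts[prev][curr] += 1

def next_word_probability(prev: str, curr: str) -> float:
    """Estimated probability that `curr` follows `prev` in the corpus."""
    counts = follow_counts[prev]
    total = sum(counts.values())
    return counts[curr] / total if total else 0.0

# "the" is followed by: dog (x2), ball (x2), cat (x1), mouse (x1).
print(next_word_probability("the", "dog"))  # 2/6 ≈ 0.333
```

Sampling from such a distribution word by word is, in miniature, how a language model generates text; scale the examples up to terabytes and swap the count table for a neural network, and you have the rough shape of systems like LaMDA 2.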
During an onstage segment of the keynote, Google CEO Sundar Pichai walked through a demo in which a user asked LaMDA 2 a series of questions about the Mariana Trench. The model responded to queries about what creatures might live in the trench, as well as questions on topics it hadn’t explicitly been trained to answer, such as submarines and bioluminescence. In another demo, LaMDA 2 provided tips about planting a vegetable garden, offering a list of tasks and subtasks germane to the garden’s location and what might be planted there, like tomatoes, lettuce or garlic.
The jury’s out on LaMDA 2’s accuracy across tasks, however, considering that the system’s predecessor sometimes gave untruthful “facts” in internal tests — in one case repeatedly offering false information about Mount Everest. Pichai acknowledged that models like LaMDA 2 aren’t perfect but emphasized the sophistication of the technology’s high-level capabilities while pledging that work is ongoing to address the shortcomings.
“These experiences show the potential of language models to one day help us with things like planning, learning about the world and more,” Pichai said.
Alongside LaMDA 2, Google unveiled AI Test Kitchen, an interactive hub for AI demos powered by models like LaMDA 2. Available as an app, AI Test Kitchen lets users interact with the models in constrained ways, like exploring a particular topic with LaMDA 2 (e.g., dogs) and drilling down into subtopics within that topic (how dogs smell). Google says it will continue to add “other emerging areas of AI” to AI Test Kitchen, both within the natural language processing domain and beyond it.
“Each [demo in AI Test Kitchen is] meant to give you a sense of what it might be like to have LaMDA in your hands and use it for things you care about,” Pichai said. “These are not products — they are quick sketches that allow us to explore what [models like LaMDA 2] can do with you.”
AI Test Kitchen will roll out in the US in the coming months but won’t be widely available. Google hasn’t fully decided how it will offer access, but according to The Verge, the company is weighing reaching out to select academics, researchers and policymakers to begin with.