LaMDA stands for “Language Model for Dialogue Applications” and, according to Google, it “can engage in a free-flowing way about a seemingly endless number of topics”. It’s essentially a chatbot, but a very advanced one: it can hold a conversation in a much more human fashion, covering a broad range of topics and moving between many different things as the conversation develops.
One of the key abilities here is engaging with the conversation in a way that is influenced by its previous content. It can understand the context of a question and give answers within that context. Interestingly, Google has only given demonstrations of LaMDA in action rather than releasing it, so at the time of writing there is no publicly available version to try out.
The problem with these systems is making them work reliably. After all, they tend to be only as good as the data they are built upon. That’s something that was highlighted by some Google employees in the paper ‘On the Dangers of Stochastic Parrots’. That paper asks the question: is bigger better? Or as they put it:
“How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web”
There are a number of papers which cover similar ground, and the paper itself doesn’t pose any particularly damning problems specifically for Google. However, Google allegedly fired the employees who contributed to it.
What’s interesting about that paper, and the way Google reacted to it, is that it specifically highlights the problems with ingesting large volumes of text: quality becomes an issue. Large does not necessarily mean varied or accurate. So, for instance, if you were trying to build a system that absorbed the huge volume of information within Google’s index of the web, this paper suggests the output could be poor or inaccurate.
Alongside LaMDA, Google also announced MUM, which is much more obviously focused on Search. Google explains:
‘MUM has the potential to transform how Google helps you with complex tasks. Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful.’
So we know it works on a similar framework to BERT, and it’s reasonable to assume this is the next generation of that technology. What ‘1,000 times more powerful’ means isn’t made clear; I’m not sure how you would objectively measure power in order to give that kind of figure.
They do, however, give some examples of what the search results could look like with MUM in use:
One of the most remarkable things here is not just how Google behaves, or at least how they predict it will behave; it’s how we interact with it.
The example they give is:
“I’ve hiked Mt. Adams and now want to hike Mt Fuji next fall, what should I do differently to prepare”.
There’s no way we would characterise this as a ‘search term’, yet that is functionally how it’s being used. Google is taking this and using it to provide options that can be refined into search results. How much of the decision-making process is going to be the user’s, and how much is going to be Google’s, is another aspect. Are we going to be aware of what’s happening ‘under the hood’? Which previously legitimate results are we not going to see? How much of our consumer power are we going to be giving to Google... a company that makes its money from inserting ads into those results?
The potential for the line between organic and paid results to become even more blurred is one problem. But if we move further towards a conversation rather than a search, then we’re going to be moving towards ‘an answer’ rather than search results.
This is potentially a fundamental shift in the way that search works, to the point that it may end up in a position where it’s not really search anymore. I’ve written previously about how Google is moving from a ‘Search Engine’ to a ‘Connection Engine’, but this is even deeper than that. It might mean that we don’t have a search results page at all, but a ‘conversation results page’.
What would the purpose of a conversation with Google be?
We go to Google to find things - information, products, services. It’s all us, looking for something. If Google were a person, why would we be talking to them? It would be to find something out. The more information we give, the more refined the replies become.
Google describes MUM like this:
Take this scenario: You’ve hiked Mt. Adams. Now you want to hike Mt. Fuji next fall, and you want to know what to do differently to prepare. Today, Google could help you with this, but it would take many thoughtfully considered searches — you’d have to search for the elevation of each mountain, the average temperature in the fall, difficulty of the hiking trails, the right gear to use, and more. After a number of searches, you’d eventually be able to get the answer(s) you need.
But if you were talking to a hiking expert; you could ask one question — “what should I do differently to prepare?” You’d get a thoughtful answer that takes into account the nuances of your task at hand and guides you through the many things to consider.
So instead of multiple searches, looking through multiple pages and places and finding out information for ourselves, Google is seeking to make that process of research redundant: for us to simply ask them what product we should buy, and be presented with a solution.
E-A-T the Parrots
One of the problems with LaMDA is making sure that the information it’s fed is reliable. You can’t just feed it the entire web and cross your fingers. Well, you can, it’s just not likely to end well.
Google, however, has spent the last 20+ years developing and refining an algorithm to sort, identify and rank content. Google’s more recent focus on E-A-T, which stands for Expertise, Authoritativeness and Trustworthiness, specifically aims at identifying and rewarding content which is the most reliable.
So Google could, in theory, combine its ability to rank content and display only the most reliable (E-A-T) and pertinent content (MUM) with conversation-based queries (LaMDA), and there you have a completely new way of accessing content on the web.
We know that voice search is important for Google as one of the ways to make it easier to interact with mobile devices. This technology could wind up being the way in which search works in future. Google will refine our queries through a conversational process to arrive at a far smaller curated list of results, most likely in a uniquely formatted page generated and populated through the information conveyed in that conversation. These conversation results may mean that we have less choice, with more of the decision-making process being handled by Google.
I can't help but think this could be problematic for smaller businesses and retailers that might appear in positions 3-5 for the majority of results. Or that we simply end up in algorithmic blind spots, where conversations are continually steered down paths that take consumers away from certain options. With the current results page we can see what's at the top and what's not. Although only the top few positions really matter, we can see where other results are placed. There is a kind of oversight that we all have on how Google is performing.
If we see conversation results, we get the equivalent of the top few results, but without seeing the results that Google has trimmed off in the process. Like much of how modern AI works, it's a black box. Google will need to allow us to see the results it's picking from and the information it discards as the conversation continues, in order to give some oversight into the process.
It remains to be seen whether Google can deliver all this. If it does, it may be one of the most significant changes to how Google works and the industry as a whole since its very beginning.