Google is gradually rolling out its Bard chatbot, which is already available to some users in the Czech Republic. Its engine, LaMDA, will soon be replaced by a different technology, however.
Google has been developing the LaMDA artificial intelligence for years and officially unveiled it in May 2021. But according to the company's head, Sundar Pichai, it did not meet expectations, partly because Google, given its size, cannot afford to put a feature into a key product that will occasionally write nonsense.
PaLM is 4 times larger than LaMDA
So it is the turn of PaLM, a newer generation of the technology with 540 billion parameters (LaMDA has 137 billion, GPT-3 has 175 billion, and for GPT-4 there is no verified figure, though it is speculated to be several times higher). The switch will not happen immediately, however, and most likely not for everyone, because with more parameters the hardware requirements of the artificial intelligence naturally increase.
Therefore, Google must constantly weigh whether something like this is worth it. On the one hand, its competitors Microsoft and OpenAI are pushing it to deploy more general AI; on the other hand, the effort also needs to be financially sustainable. Google is a company that lives on advertising, and putting out an alternative to GPT-4 just isn't going to get it a billion more users.
But it also can't afford to let a conservative approach cost it a billion visitors who simply run off to the new Bing. After all, this is already happening, though so far only among a few enthusiastic early adopters.
Thanks to Android, Google has a big advantage in the form of a stable user base. As long as most phones ship with the Google search bar and the Chrome desktop icon by default, it probably has nothing to worry about.
What is an AI parameter
The parameters of a neural network are its weights and their corrective biases. Weights are decision coefficients, trained via machine learning, assigned to the software synapses between neurons. The more parameters, the more comprehensive the AI's knowledge.
The learned weight (V) is the coefficient for the activation function (f), which processes the input (X). A so-called bias also comes into play, but we prefer not to complicate things further.
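The relationship above can be sketched as a toy single neuron in Python. This is a minimal illustration, not how production models are implemented; the sigmoid activation and the sample numbers are our own choices:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus the bias term...
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...passed through an activation function f (here: sigmoid)
    return 1 / (1 + math.exp(-z))

# This toy neuron has 4 trainable parameters: 3 weights + 1 bias.
# A model like PaLM has hundreds of billions of such numbers.
out = neuron(inputs=[0.5, -1.0, 2.0], weights=[0.8, 0.2, -0.5], bias=0.1)
```

Training is the process of nudging those weight and bias values until the network's outputs match the desired ones.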
But parameters alone are not enough. It also depends on the quality, and the variety, of the training data and, of course, on the efficiency of the neural network architecture itself. We discussed parameters in a separate article on the large LLaMA language model.