Google developers are teaching AI to interpret jokes, a far from trivial task that could drive deep technological progress in how these systems learn to analyze and respond to human language.
The goal is to push the boundaries of Natural Language Processing (NLP), the technology behind large language models (LLMs) such as GPT-3 that allow chatbots to reproduce human communication with increasing accuracy; in the most advanced cases, it becomes difficult to tell whether one's interlocutor is a human or a machine.
Now, in a recent paper, Google's research team claims to have trained a language model called PaLM that is capable not only of generating coherent text, but also of interpreting and explaining jokes told by humans.
In the examples accompanying the paper, the Google AI team demonstrates the model's performance on logical reasoning and other complex, highly context-dependent language tasks, for example using a technique called chain-of-thought prompting, which greatly improves the system's ability to analyze multi-step logical problems by simulating a human's reasoning process.
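To make the idea concrete, here is a minimal sketch of what a chain-of-thought prompt looks like compared to a direct prompt. The function names and the worked example are illustrative assumptions, not PaLM's actual interface; the technique simply prepends a step-by-step worked solution so the model imitates the reasoning pattern.

```python
# Illustrative sketch of chain-of-thought prompting; not PaLM's real API.

def direct_prompt(question: str) -> str:
    """A standard prompt: the model must answer in one step."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Prepend a worked example that reasons step by step, so the
    model continues in the same multi-step style."""
    worked_example = (
        "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
        "How many balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.\n\n"
    )
    return worked_example + f"Q: {question}\nA:"

prompt = chain_of_thought_prompt(
    "A cafeteria had 23 apples. It used 20 and bought 6 more. How many now?"
)
print(prompt)
```

The only difference between the two prompts is the worked example; the model's weights are untouched, which is what makes the technique cheap to apply.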
With "explaining jokes," the system demonstrates that it "gets" the joke, and can identify the twist, the wordplay, or the sarcastic turn in a punchline, as you can see in this example.
Joke: What's the difference between a zebra and an umbrella? One is a striped animal related to horses, and the other is a device you use to keep the rain from falling on you.
The AI's explanation: This joke is an anti-joke. The joke is that the answer is obvious, and the humor lies in the fact that you expected a funny answer.
Behind PaLM's ability to analyze these statements is one of the largest language models ever built, with 540 billion parameters. Parameters are the elements of the model that are adjusted during training each time the system receives a sample of data. By comparison, OpenAI's GPT-3 contains 175 billion parameters.
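To give a sense of where those counts come from, the sketch below counts the trainable parameters of a toy fully connected network. The layer sizes are illustrative assumptions and bear no relation to PaLM's real architecture; the point is only that every weight and bias adjusted during training counts toward the total.

```python
# Toy parameter count; layer sizes are illustrative, not PaLM's.

def dense_layer_params(n_in: int, n_out: int) -> int:
    """A fully connected layer has one weight per input-output pair,
    plus one bias per output unit."""
    return n_in * n_out + n_out

# A tiny two-layer network: 512 inputs -> 1024 hidden -> 512 outputs.
total = dense_layer_params(512, 1024) + dense_layer_params(1024, 512)
print(total)  # 1050112 trainable parameters
```

Even this toy network has over a million parameters; large language models stack hundreds of far wider layers, which is how the count reaches the hundreds of billions.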
The growing number of parameters allowed the researchers to achieve a wide range of high-quality results without spending time training the model on individual scenarios. In other words, the performance of a language model often scales with the number of parameters it supports, with the largest models capable of what is known as "few-shot learning": the ability of a system to learn a variety of complex tasks from relatively few training examples.
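A minimal sketch of what few-shot learning looks like in practice: the handful of "training examples" are placed directly in the prompt, with no weight updates at all. The translation task and examples below are illustrative assumptions, not taken from the PaLM paper.

```python
# Illustrative few-shot prompt; the task and examples are made up.

few_shot_examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

def few_shot_prompt(word: str) -> str:
    """Build an English-to-French prompt from a handful of examples;
    the model is expected to complete the final line."""
    lines = ["Translate English to French:"]
    for en, fr in few_shot_examples:
        lines.append(f"{en} => {fr}")
    lines.append(f"{word} =>")
    return "\n".join(lines)

print(few_shot_prompt("plush giraffe"))
```

With only two examples, a sufficiently large model can usually infer both the task (translation) and the output format, which is exactly what "learning from relatively few training examples" means here.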