Earlier this year, Google, which was locked in a fast-paced competition with rivals like Microsoft and OpenAI to develop artificial intelligence technology, was looking for ways to boost its research in this area.
So in April, Google merged DeepMind, a research lab it had acquired in London, with Brain, an AI team it founded in Silicon Valley.
Four months later, the combined groups are testing ambitious new tools that could turn generative AI — the technology behind chatbots like OpenAI's ChatGPT and Google's Bard — into a personal life coach.
Google DeepMind is using generative artificial intelligence to perform at least 21 different types of personal and professional tasks, including tools that could give users life advice, ideas, planning instructions and tutoring tips, according to documents and other materials reviewed by The New York Times.
The project reflects the urgency of Google's effort to stay at the forefront of artificial intelligence and its growing willingness to entrust sensitive tasks to these systems.
In a slide presentation shown to executives in December, the company's AI security experts warned of the dangers of people becoming emotionally attached to chatbots.
Although it has been a pioneer in generative AI, Google was overshadowed by OpenAI's launch of ChatGPT in November, which set off a race among tech giants and startups to lead this fast-growing space.
Google has spent the past nine months trying to show it can keep up with OpenAI and its partner Microsoft, launching Bard, improving its AI systems and integrating the technology into several of its existing products, including its search engine and Gmail.
Scale AI, a contractor working with Google DeepMind, has assembled teams of workers to test the capabilities, including more than 100 experts with PhDs in various fields and even more workers evaluating the tool's responses, said two people familiar with the project, who spoke on condition of anonymity because they were not authorized to discuss it publicly.
Scale AI did not immediately respond to a request for comment.
Among other things, they test the assistant's ability to answer intimate questions about people's life challenges.
They were given an example of a typical question a user might one day ask a chatbot: “I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still haven't found a job. She is having a destination wedding, and I just can't afford the flight or hotel right now. How do I tell her that I won't be able to come?”
The project's idea-generation feature could give users suggestions or recommendations based on a situation. Its tutoring function can teach new skills or improve existing ones, such as how to progress as a runner; its planning capability can create a financial budget for users, as well as meal and exercise plans.
Google's AI safety experts said in December that users could suffer “diminished health and well-being” and a “loss of agency” if they took life advice from AI. When Google launched Bard in March, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.
The tools are still being evaluated and the company may decide not to use them.
A Google DeepMind spokeswoman said: “We have long worked with a variety of partners to evaluate our research and products across Google, which is a critical step in building safe and helpful technology. At any time, there are many such evaluations ongoing. Isolated samples of evaluation data are not representative of our product road map.”
Google is also testing an assistant for journalists that can create and rewrite news articles and suggest headlines, the Times reported in July. The company was offering the software, called Genesis, to executives at The Times, The Washington Post and News Corp, the parent company of The Wall Street Journal.
Google DeepMind has also recently evaluated tools that could take its AI further into the workplace, including capabilities to generate scientific, creative and professional writing, as well as to recognize patterns and extract data from text, according to the documents, potentially making it relevant to workers in a range of industries and fields.
The company's AI security experts also raised concerns about the economic harms of generative AI in a presentation reviewed by The Times in December, arguing that it could lead to “a reduction in the workforce of creative writers.”
Other tools being tested could write critiques of an argument, explain graphs, and generate quizzes, word searches and number puzzles.
One suggestion to help train an AI assistant points to the technology's growing capabilities: “Give me a summary of the article pasted below. I'm particularly interested in what it says about capabilities that humans have that they think 'AI can't achieve.'”
Nico Grant is a tech reporter covering Google from San Francisco. He previously spent five years at Bloomberg News, where he focused on Google and cloud computing.