How will Google solve its AI conundrum?

In the AI arms race that has just broken out in the tech industry, Google, where much of the latest technology was invented, should be well positioned to be one of the big winners.

There’s just one problem: With politicians and regulators breathing down its neck and a hugely profitable business model to defend, the Internet search giant may be hesitant to use many of the weapons at its disposal.

Microsoft threw down a direct challenge to the search giant this week when it sealed a multibillion-dollar investment in AI research firm OpenAI. The move comes less than two months after the release of OpenAI’s ChatGPT, a chatbot that answers queries with paragraphs of text or code, hinting at how generative AI could one day replace internet searching.

With the right to commercialize OpenAI’s technology, Microsoft executives have made no secret of their goal to use it to challenge Google, rekindling an old rivalry that has simmered since Google won the search wars a decade ago.

DeepMind, the London research firm that Google bought in 2014, and Google Brain, an advanced research division at its Silicon Valley headquarters, have long given the search firm one of its strongest footholds in AI.

Recently, Google has been breaking ground with different variations of the so-called generative AI that underpins ChatGPT, including AI models capable of telling jokes and solving math problems.

One of its most advanced language models, known as PaLM, is a general-purpose model that is three times larger than GPT, the AI model underlying ChatGPT, as measured by the number of parameters each model contains.

Google’s chatbot LaMDA, or Language Model for Dialogue Applications, can converse with users in natural language in much the same way as ChatGPT. The company’s engineering teams have been working for months to integrate it into a consumer product.

Despite the technical advances, most of the latest technology is still only the subject of research. Google’s critics say it is tied down by its hugely profitable search business, which discourages it from introducing generative AI into consumer products.

[Image: Microsoft plans to use OpenAI’s technology throughout its products and services © Lionel Bonaventure/AFP via Getty Images]

Providing direct answers to queries rather than simply directing users to suggested links would result in fewer searches, said Sridhar Ramaswamy, a former Google executive.

That has left Google facing “a classic innovator’s dilemma” — a reference to the book by Harvard Business School professor Clayton Christensen that sought to explain why industry leaders often fall prey to fast-moving upstarts. “If I was the one running a $150 billion business, I’d be scared of this thing,” Ramaswamy said.

“We have long been focused on developing and implementing artificial intelligence to improve people’s lives. We believe that artificial intelligence is a foundational and transformative technology that is incredibly useful for individuals, businesses and societies,” Google said. However, the search giant would “need to consider the wider societal implications these innovations may have”. Google added that it would announce “more external experiences soon”.

As well as leading to fewer searches and lower revenue, the spread of generative AI could cause a jump in Google’s costs.

Ramaswamy calculated that, based on OpenAI’s pricing, it would cost $120 million to use natural language processing to “read” all the web pages in a search index and then use this to generate more direct answers to the questions people type into a search engine. Analysts at Morgan Stanley, meanwhile, estimated that answering a search query using language processing costs about seven times as much as a regular Internet search.

The same considerations could keep Microsoft from a radical overhaul of its Bing search engine, which generated more than $11 billion in revenue last year. But the software company has said it plans to use OpenAI’s technology throughout its products and services, potentially leading to new ways for users to be presented with relevant information while inside other applications, reducing the need to go to a search engine.

A number of former and current employees close to Google’s AI research teams say the biggest constraints on the company’s release of AI products have been concerns about potential harms and how they would affect Google’s reputation, as well as an underestimation of its competitors.

“I think they were asleep at the wheel,” said a former Google AI researcher who now runs an AI company. “Frankly, everyone underestimated how language models would disrupt search.”

These challenges are exacerbated by the political and regulatory concerns caused by Google’s growing power, as well as the greater public scrutiny of the industry leader in the adoption of new technologies.

According to a former Google executive, company executives became concerned more than a year ago that sudden advances in AI capabilities could lead to a wave of public concern about the implications of such a powerful technology in the hands of a company. Last year, it appointed former McKinsey executive James Manyika as a new senior vice president to advise on the broader social implications of the new technology.

Generative AI, which is used in services like ChatGPT, is inherently prone to giving wrong answers and can be used to produce misinformation, Manyika said. Speaking to the Financial Times just days before ChatGPT was released, he added: “That’s why we’re not rushing to put these things out in the way that people might have expected us to.”

But the huge interest that ChatGPT has generated has increased the pressure on Google to match OpenAI more quickly. That has left it with the challenge of showing its AI prowess and integrating it into its services without damaging its brand or provoking a political backlash.

“For Google, it’s a real problem if they write a sentence with hate speech in it and it’s close to the Google name,” said Ramaswamy, a co-founder of search start-up Neeva. Google is held to a higher standard than a start-up that could argue its service was merely an objective overview of content available on the Internet, he added.

The search firm has come under fire in the past for its handling of AI ethics. In 2020, when two prominent AI researchers left in contentious circumstances after objecting to a research paper assessing the risks of language-related AI, a furor erupted over Google’s stance on the ethics and safety of its AI technologies.

Such events have left it under greater public scrutiny than organizations like OpenAI or open-source alternatives like Stable Diffusion. The latter, which generates images from text descriptions, has had several safety problems, including the generation of pornographic images. Its safety filter can be easily circumvented, according to AI researchers, who say the relevant lines of code can be deleted manually. Its parent company, Stability AI, did not respond to a request for comment.

OpenAI’s technology has also been abused by users. In 2021, an online game called AI Dungeon licensed GPT, a text generation tool, to create choose-your-own storylines based on individual user prompts. Within a few months, users were generating gameplay involving child sexual abuse, among other disturbing content. OpenAI eventually pressed the company to implement better moderation systems.

OpenAI did not respond to a request for comment.

If something similar had happened at Google, the backlash would have been far worse, said a former Google AI researcher. With the company now facing a serious threat from OpenAI, they added, it was unclear whether anyone at the company was ready to take on the responsibility and risks of releasing new AI products faster.

However, Microsoft faces a similar dilemma in terms of how to use the technology. It has tried to paint itself as more responsible in its use of artificial intelligence than Google. OpenAI, meanwhile, has warned that ChatGPT is prone to inaccuracy, making it difficult to integrate the technology in its current form into a commercial service.

But in the most dramatic demonstration yet of the AI wave sweeping the tech world, OpenAI has signaled that even entrenched powers like Google may be at risk.
