
Google boss admits AI dangers ‘keep me up at night’

Google’s chief executive has admitted the potential dangers of AI development “keep me up at night”.

Sundar Pichai said the technology “can be very harmful if deployed wrongly” and backed growing calls for regulation amid concern about its impact on jobs, privacy, and how information is shared online.

“We don’t have all the answers there yet – and the technology is moving fast,” he told CBS’s 60 Minutes programme.

“So does that keep me up at night? Absolutely.”

Google fast-tracked its plans for ChatGPT-style features in its products and services after being caught out by the sudden success of OpenAI’s model, which now has more than 100 million monthly users.

The technology has since been integrated into Microsoft’s Bing search engine, threatening Google’s long-held dominance of search like never before.

Google launched its direct competitor, Bard, earlier this year – a major step for a company that had been cautious about allowing the public to interact with its AI.

Bard is powered by LaMDA, which can generate prose so human-like that a company engineer last year called it sentient – a claim the company and scientists widely dismissed.

Video: Can machines have feelings? (15:37)

Google does not ‘fully understand’ AI’s answers

Like ChatGPT, Bard is a large language model trained on huge amounts of data to interpret text and respond to questions and prompts. However, both have also been shown capable of making factual errors.

Mr Pichai admitted Google still does not “fully understand” why Bard produces certain responses.

“There is an aspect of this which we call, all of us in the field call it… a ‘black box’,” he said.

“You don’t fully understand. And you can’t quite tell why it said this, or why it got this wrong.”


But Mr Pichai said despite his concerns, AI development would only continue to accelerate – and eventually impact “every product across every company”, from healthcare to creative industries.

Google itself has already added Bard features to apps like Docs, and The New York Times reports the company will launch an entirely new search engine powered by the technology.

Mr Pichai said it would be down to governments to figure out how best to regulate the technology.

Video: Will this chatbot replace humans? (2:16)

How governments are approaching AI

The UK government has said it will take a light-touch approach to regulating AI, arguing that any attempt to legislate now would quickly become out of date.

But in the US, the White House is inviting public feedback on how AI should be regulated to protect jobs and privacy, while China has already published draft rules outlining its own approach.

Last month, Italy became the first Western country to ban ChatGPT outright while its data protection authority investigated how the chatbot collects user information.

It came after Elon Musk joined hundreds of AI experts in calling for a pause in the development of the technology, warning that it posed “profound risks to society”.

But Musk has since revealed plans to build his own ChatGPT rival.