
Google DeepMind calls for 'responsible' approach to AI amid 'eureka moment'

The surge in people and companies experimenting with AI was triggered by last year's release of ChatGPT, a generative AI chatbot capable of humanlike conversations and tasks.
Colin Murdoch, Google DeepMind's chief business officer, attends the Collision conference in Toronto on Wednesday, June 28, 2023. THE CANADIAN PRESS/Chris Young

TORONTO — The chief business officer at Google's artificial intelligence research lab says the world is having a "eureka moment" around AI, but that we have to be responsible with the technology.

The explosion of interest in AI has come from recent advances that allow people to use the technology through conversational language, rather than leaving it to the programmers who predominantly dabbled with it before, said Colin Murdoch of Google's DeepMind.

"It's kind of all of a sudden been much more accessible because my mum and dad can do this," he said in an interview with The Canadian Press.

"Anyone can do it."

The surge in people and companies experimenting with AI was triggered by last year's release of ChatGPT, a generative AI chatbot capable of humanlike conversations and tasks that was developed by San Francisco-based OpenAI.

The release kick-started an AI race among other top tech names, including Google with its rival product Bard, and put an additional spotlight on DeepMind, which is headquartered in the U.K. but has offices in Montreal and Toronto.

Now, companies from health care to oil and gas to tech are touting their use of, or plans for, AI.

But Murdoch said that ubiquity must be met with a careful approach and thoughtful consideration about all of the risks that AI carries.

"The way we think about this is being bold and responsible because it is a balance," he said.

"What we want to make sure of is that we are doing this in a way that enables society to benefit from the incredible potential for this technology, but also the exceptional promise also does need exceptional care, which is why we have to act responsibly and why we have to pioneer responsibly."

But what does responsible AI look like?

At Google, for starters, it's meant being open to criticism at every step of the AI development process.

The company relies on internal and external review committees from the day an idea is generated to when it is unleashed for public use, Murdoch said.

"We're making sure that we have the right oversight of our work, so, for example, we have ethicists sitting alongside policy experts sitting alongside machine learning experts," he said.

"They're pressure testing the work from beginning to end to identify how we maximize the benefit of the work and also address any potential changes we need to make."

Sometimes the committees prod staff to consult even more external experts about ramifications. When DeepMind was building AlphaFold, for example, it consulted 30 people ranging from biology experts to biosecurity professionals and farmers.

AlphaFold can predict 3D models of protein structures. Murdoch reckons the technology has mapped all 200 million proteins known to science, saving one billion years of research time in the process because it can determine the structure of a protein in minutes and sometimes even seconds rather than years.

It has been used by researchers at the University of Toronto to identify a drug target for liver cancer.

Aside from ensuring products undergo external reviews, Murdoch said responsible AI also takes bias into account. Many say bias crops up in AI because of a lack of diverse backgrounds and opinions among those building and training it.

"Making sure that people building, deploying and AI practitioners somehow reflects broader society is very important," he said.

Education and community involvement can help address the bias issue, he said, along with greater transparency from the industry so that smaller, less-resourced startups can learn from heavyweights like Google.

Murdoch’s remarks came on a visit from the U.K. to Toronto, where he spoke at the four-day Collision tech conference Wednesday about how he feels AI is changing the world.

Later in the day, AI pioneer Geoffrey Hinton, who left Google in May so he could more freely discuss the dangers of AI, took the same stage to discuss the giant leaps the technology has made over the last year, which even he didn't predict would come so soon.

Hinton has been deeply concerned about the implications of AI for months, and on Wednesday he outlined six harms the technology poses: bias and discrimination, joblessness, echo chambers, fake news, robots in warfare and existential risk.

While he said the technology could greatly aid in how humanity approaches climate change and medicine, he also cautioned that it might spark changes to careers and even safety.

For example, he suggested that the child of Nick Thompson, The Atlantic's chief executive who was interviewing Hinton on stage, pursue plumbing rather than media because of how capable AI has become at completing tasks integral to non-trade jobs.

On an existential level, Hinton said he is worried about defence departments building robots for warfare, something he believes would take an international convention to stop.

"I think it's important that people understand that this is not just science fiction, it's not just fear mongering," he said.

"It is a real risk that we need to think about, and we need to figure out in advance how to deal with it."

As for Murdoch, he said the world shouldn't focus on any single risk posed by AI, but should instead take a "holistic" approach and remember that we are still in the early stages of the technology's use and integration.

"We're still kind of on the first rung and each rung we step up, we're going to be more powerful and capable."

This report by The Canadian Press was first published June 29, 2023.

Tara Deschamps, The Canadian Press
