Inside Google DeepMind’s approach to AI safety

This article features an interview with Lila Ibrahim, COO of Google DeepMind. Ibrahim will speak at the TNW Conference, which takes place in Amsterdam on June 15th and 16th. If you want to catch the event (and say hello to our editorial team!), we have something special for our loyal readers. Use the promo code READ-TNW-25 and get 25% off your TNW Conference Business Pass. See you in Amsterdam!

AI safety has become a mainstream concern. The rapid development of tools like ChatGPT and deepfakes has raised fears of job losses, disinformation, and even annihilation. Last month, a warning that artificial intelligence poses a “risk of extinction” made headlines in newspapers worldwide.

The warning came in a statement signed by more than 350 industry heavyweights. Among them was Lila Ibrahim, the chief operating officer of Google DeepMind. Her role at the groundbreaking AI lab gives her a front-row view of both the threats and the opportunities.

From mastering complex games to mapping the structure of the protein universe, DeepMind has made some of the most remarkable breakthroughs in the field.

The company’s ultimate mission is to create artificial general intelligence, a nebulous concept that broadly refers to machines with human-level cognitive abilities. It’s a visionary ambition that needs to remain grounded in reality – and that’s where Ibrahim comes in.

In 2018, Ibrahim was appointed as DeepMind’s first COO. She oversees the company’s business operations and growth, with a focus on building AI responsibly.

“New and emerging risks – such as bias, security and inequality – should be taken extremely seriously,” Ibrahim told TNW via email. “At the same time, we want to make sure we’re doing our utmost to maximize the beneficial outcomes.”

Before joining DeepMind, Ibrahim was COO of Coursera, where she helped open up access to education. Image credit: Google DeepMind

Ibrahim devotes much of her time to ensuring that the company’s work has a positive impact on society. She highlighted four pillars of this strategy.

1. The scientific method

To uncover the building blocks of advanced AI, DeepMind draws on the scientific method.

“That means constructing and testing hypotheses and stress-testing our approach and results through peer review,” says Ibrahim. “We believe that the scientific approach is the right one for AI as the roadmap for building advanced intelligence is still unclear.”

2. Multidisciplinary teams

DeepMind uses various systems and processes to steer its research toward real-world benefit. One example is its internal review committee.

This multidisciplinary team brings together machine learning researchers, ethicists, safety experts, engineers, security specialists, and policy experts. At regular meetings, they discuss ways to broaden the benefits of the technology, changes to research areas, and projects that need further external input.

“A multidisciplinary team with unique perspectives is a critical ingredient in building a safe, ethical, and inclusive AI-powered future that benefits us all,” says Ibrahim.

3. Shared principles

To guide the company’s AI development, DeepMind has created a clear set of shared principles. The company’s operating principles, for example, define the lab’s commitment to mitigating risk, while also specifying what it will not pursue, such as autonomous weapons.

“They also codify our goal of prioritizing broad benefits,” says Ibrahim.

4. Advice from external experts

One of Ibrahim’s main concerns is representation. AI has frequently amplified prejudice, particularly against marginalized groups, who tend to be under-represented both in the training data and in the teams building the systems.

To mitigate these risks, DeepMind works with outside experts on issues such as bias, persuasion, biosecurity, and responsible use of models. The company also works with a wide range of communities to understand the impact of technology on them.

“This feedback allows us to refine and retrain our models to make them more suitable for a broader audience,” says Ibrahim.

The engagement has already delivered impressive results.

The business case for AI safety

In 2021, DeepMind cracked one of biology’s greatest challenges: the problem of protein folding.

Using an AI program called AlphaFold, the company predicted the 3D structures of almost every known protein, some 200 million in all. Scientists believe the work could dramatically accelerate drug development.

“AlphaFold is a singular and momentous advance in life science that demonstrates the power of AI,” said Eric Topol, director of the Scripps Research Translational Institute. “Determining the 3D structure of a protein used to take many months or years; it now takes seconds.”

AlphaFold predicts the 3D structure of a protein based on its amino acid sequence. Image credit: DeepMind

AlphaFold’s success has been guided by a variety of outside experts. In the early stages of the work, DeepMind examined a number of important questions: How could AlphaFold accelerate biological research and applications? What could the unintended consequences be? And how could progress be shared responsibly?

In search of answers, DeepMind sought input from over 30 leaders in fields ranging from biosecurity to human rights. Their feedback guided DeepMind’s strategy for AlphaFold.

In one example, DeepMind initially considered omitting predictions for which AlphaFold had low confidence or high predictive uncertainty. However, the external experts recommended retaining these predictions in the release.

DeepMind followed their advice. As a result, AlphaFold users now know that the system’s low confidence in a predicted structure is a good indication of an intrinsically disordered protein.
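That disorder signal is something anyone can check for themselves. The sketch below is a minimal illustration, not DeepMind’s own tooling: it assumes the public AlphaFold Protein Structure Database REST endpoint at alphafold.ebi.ac.uk and its “pdbUrl” field, and uses the UniProt accession P04637 (human p53, which has known disordered regions) purely as an example. It flags residues whose pLDDT confidence score falls below 50, the range DeepMind associates with likely disorder.

```python
# Minimal sketch: flag likely-disordered regions in an AlphaFold prediction.
# Assumes the public AlphaFold DB REST endpoint and its "pdbUrl" field;
# P04637 (human p53) is just an example accession.
import json
import urllib.request

accession = "P04637"  # example UniProt accession (human p53)

# Look up the prediction metadata for this accession.
meta_url = f"https://alphafold.ebi.ac.uk/api/prediction/{accession}"
with urllib.request.urlopen(meta_url) as resp:
    entry = json.load(resp)[0]  # the API returns a list of entries

# Download the predicted structure in PDB format.
with urllib.request.urlopen(entry["pdbUrl"]) as resp:
    pdb_text = resp.read().decode()

# AlphaFold stores the per-residue confidence score (pLDDT, 0-100) in
# the B-factor column of its PDB files. pLDDT below ~50 is a strong
# signal of an intrinsically disordered region.
low_confidence = []
for line in pdb_text.splitlines():
    if line.startswith("ATOM") and line[12:16].strip() == "CA":
        residue_number = int(line[22:26])
        plddt = float(line[60:66])
        if plddt < 50:
            low_confidence.append(residue_number)

print(f"{accession}: {len(low_confidence)} residues with pLDDT < 50 "
      f"(likely disordered)")
```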

Scientists around the world are reaping the rewards. In February, DeepMind announced that the protein structure database was being used by over 1 million researchers. Their work addresses major global challenges, from developing malaria vaccines to fighting plastic pollution.

“Now you can look up a 3D structure of a protein almost as easily as a keyword search on Google – this is science at digital speed,” says Ibrahim.

Responsible AI also requires a diverse talent pool. DeepMind is collaborating with academic institutions to expand the talent pipeline, and with community groups and charities to support underrepresented communities.

The motivations are not exclusively altruistic. Closing the skills gap will bring more talent to DeepMind and the entire technology sector.

As AlphaFold has shown, responsible AI can also accelerate scientific progress. And with growing public concern and regulatory pressure, the business case just keeps getting stronger.

To hear more from Lila Ibrahim, use the promo code READ-TNW-25 and get 25% off your TNW Conference Business Pass.
