The question of conscious artificial intelligence dominating future humanity is not the most pressing issue we face today, says Allan Dafoe of the Center for the Governance of AI at Oxford's Future of Humanity Institute.
Watch this video on Big Think and answer the questions with your tutor.
What does Allan Dafoe say that AI is likely to be?
What is "the issue" with AI according to Allan?
Who is Allan Dafoe?
So why is it so important for us to _______ artificial intelligence?
What is incredible in many ways according to Allan?
Virtually everything that we have to be _________ is a product of human intelligence and human cooperation.
AI isn't the first technology that ________ has had to grapple with how to govern.
What does Allan say will happen if we govern AI well?
What does he say the problem is if we don't govern AI well?
The way we ______________ of AI is crucial, possibly to the survival of our species.
What has Allan been reading up on?
So this is something I'm thinking about, because I think we're at a new _________ moment to, as a collective, come to an __________ about what are the futures that we don't want and what are the futures that we do want.
ALLAN DAFOE: AI is likely to be a profoundly transformative general purpose technology that changes virtually every aspect of society, the economy, politics, and the military. And this is just the beginning. The issue doesn't come down to consciousness or "Will AI want to dominate the world or will it not?" That's not the issue. The issue is: "Will AI be powerful and will it be able to generate wealth?" It's very likely that it will be able to do both. And so just given that, the governance of AI is the most important issue facing the world today and especially in the coming decades.
My name is Allan Dafoe. I am the director of the Center for the Governance of AI at the Future of Humanity Institute at the University of Oxford. The core part of my research is to think about the governance problem with respect to AI. So this is the problem of how the world can develop AI in a way that maximizes the benefits and minimizes the risks.
NARRATOR: So why is it so important for us to govern artificial intelligence? Well, first, let's just consider how natural human intelligence has impacted the world on its own.
DAFOE: In many ways it's incredible how far we've gone with human intelligence. This human brain, which had all sorts of energy constraints and physical constraints, has been able to build up this technological civilization, which has produced cellphones and buildings, education, penicillin, and flight. Virtually everything that we have to be thankful for is a product of human intelligence and human cooperation. With artificial intelligence, we can amplify that and eventually extend it beyond our imagination. And it's hard for us to know now what that will mean for the economy, for society, for the social impacts and the possibilities that it will bring.
NARRATOR: AI isn't the first technology that our society has had to grapple with how to govern. In fact, many technologies like cars, guns, radio, the internet are all subject to governance. What sets AI apart is the kind of impact it can have on society and on every other technology it touches.
DAFOE: So if we govern AI well, there are likely to be substantial advances in medicine, transportation, helping to reduce global poverty and [it will] help us address climate change. The problem is if we don't govern it well, it will also produce these negative externalities in society. Social media may make us more lonely, self-driving cars may cause congestion, autonomous weapons could cause risks of flash escalations and war or other kinds of military instability. So the first layer is to address these unintended consequences of the advances in AI that are emerging. Then there's this bigger challenge facing the governance of AI, which is really the question of where do we want to go?
NARRATOR: The way we structure our governance of AI is crucial, possibly to the survival of our species. When we consider how impactful this technology can be, any system that governs its use must be carefully constructed.
DAFOE: There are many examples where a society has stumbled into very harmful situations—World War I perhaps being one of the more illustrative ones—where no one leader really wanted to have this war but, nevertheless, they were bound by the structure of their system in a way that led them into this conflict. So this is what I think we need to worry about. It's an incredibly hard problem, you don't want to make overly hard rules at the beginning because that can overly bind the future, right? You want to allow the future to have their own freedom and also to improve the institutions when they have more information and are better educated. So recently I've been reading up on constitutional design. I'm fascinated by this phenomenon of humans coming together and articulating what's the framework in which they want to live into the future. So this is something I'm thinking about, because I think we're at a new constitutional moment to, as a collective, come to an understanding about what are the futures that we don't want and what are the futures that we do want. Humanity has this wonderful opportunity that we haven't had throughout history. That opportunity is the chance to decide what our future can be. If we overcome our sort of parochial differences and interests and recognize that we are at this rare moment in history, when humanity has enough commonality, we have enough common vision that if we want we can build something together, a shared institution for the future.