
Long Term Artificial Intelligence Risks

The second meeting was held in conjunction with the All-Party Parliamentary Group on Artificial Intelligence on the evening of 19th July 2017 and was chaired by Professor Lord Martin Rees.

Witnesses

Dr Shahar Avin, researcher at the Centre for the Study of Existential Risk at the University of Cambridge.

Professor Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford.

Dr Joanna Bryson, researcher at the Center for Information Technology Policy at Princeton University and Reader in Artificial Intelligence at the University of Bath.

Professor Edward Felten, former Deputy White House CTO and Director of the Center for Information Technology Policy at Princeton University.


This meeting considered the question:

'How do we make AI safe for humans?'
 

 

Artificial intelligence presents us with many opportunities, but also risks.

 

This panel brought together four global experts on AI, who considered how to maximise the long-term benefits of AI for everyone, and ensure that it is developed safely and responsibly.

 

They offered practical guidelines for improving policy making in this topical, and vitally important, area.

 

In his opening remarks, Lord Rees challenged those present to look beyond the fact that AI was currently a high-profile issue and consider how it will be viewed from the perspective of the 22nd century, a position that at least some in the room stood a chance of one day having.

Edward Felten opened the discussion by arguing that while the future of AI is uncertain, the most likely trajectory involves the continued development of 'narrow' AI systems, which are intelligent only within a specific, well-defined domain, such as driving vehicles or playing chess.

 

This implies that, although potentially dramatic, the development of AI will remain broadly linear, without the kind of 'intelligence explosion' or 'technological take-off' that most worries researchers of global catastrophic risk. Instead, the challenges that AI will pose to future generations will come as a series of smaller shocks as it replaces human intelligence across an ever wider range of domains, with humans still playing a vital role in coordinating diverse domain-specific AI systems and developing new ones.

 

He went on to propose three main implications of his view for policy-makers:

1. They should not fear that machines will seize control, but rather that people will lose control of machines, by building systems that are so complex and interconnected that we will no longer understand how they work or how to make them secure.

 

2. They should not fear that machines will attack humans, but that humans will use machines to attack each other, for instance via lethal autonomous weapons and cyber warfare.

 

3. They should not fear that machines will oppress humans, but that humans will build machines that become engines of oppression, such as systems for enhanced surveillance and social control.

 

Fundamentally, therefore, AI is going to raise the stakes on issues that are already of significant public concern. For instance, the evidence is not that AI systems will leave humans with nothing to do, but that they will demand ever-increasing levels of flexibility and development in the labour market, which many workers will not be able to keep up with. For now, we can still experiment with ways of addressing these challenges while they are growing relatively slowly. Such experimentation gives us opportunities to learn for the future, and it is the only way we will ever have the knowledge and experience we need to face the larger problems AI will bring.

 

Joanna Bryson gave another perspective on why an intelligence explosion is unlikely: intelligence is not an abstraction, but rather a physical process subject to natural laws, and computation requires time, space, and energy. Thus, even though she did not share Professor Felten's view that AI would inevitably remain domain-specific, she believed that it was still all too common for people to get hung up on its mysteries, and that there would be important limitations on what any AI system could do.

 

For humans, intelligence is all about translating physical perceptions into physical actions. Losing sight of this fact has led many people to attribute overly idealised properties to intelligence that real-world systems cannot instantiate. Hence, the development of AI must be seen in relation to its need for resources, for both inputs and outputs. This raises important questions of sustainability and inequality (or, alternatively, of 'security' and 'power').

 

AI will empower humanity to achieve more than it has ever done, for better and worse.

 

However, it will also make individual humans more 'exchangeable' by reducing the importance of individually acquired skills and expertise. This is likely to drive up inequality, which is already much higher than the ideal level for promoting human wellbeing.

 

AI will also fundamentally change the nature of work. Automation may not reduce the number of jobs because, by increasing the productivity of individual workers, it can increase the demand for some labour even whilst replacing other labour. However, it will tend to increase the demand for higher skilled, technically sophisticated jobs that not everyone can perform.

 

Finally, AI will change the nature of our communities. While local issues will always matter to individual people, AI will gradually erode the difference between groups, at least from the perspective of the global economy. This suggests that local issues will be pushed down the policy agenda, which can have significant knock-on effects for politics, as we are presently seeing. This would only be exacerbated by certain policy proposals, like a Universal Basic Income, that attempt to solve the distribution problem whilst doing even more to erode the importance of individual people and local communities.

 

Dr Bryson concluded that achieving good outcomes for people from the development of AI means focusing on the people, and not just the technology.

 

We should maintain human and corporate responsibility for all AI products, because our justice system rewards and dissuades humans, not machines, and we should not reward companies for poor systems-engineering practices by reducing their liability for systems they can neither predict nor maintain.

 

Next, Nick Bostrom argued that, despite what previous speakers had said, there was a very real chance that a superior 'General Artificial Intelligence' could emerge in the next few decades, and that, even if we believe this is unlikely, we should not ignore the risks it would pose.

 

As evidence of this, he pointed out that much of the excitement in recent years has centred on developments, such as the deep learning revolution, that allow AI systems to engage with problems far wider in scope than was previously possible. These rely on designing algorithms that can learn from raw data, without the need for humans to decompose problems into component parts and then define rules for each of them.

However, if we take seriously the possibility that in the future we may do more than just repeat past successes in AI development, then there are three great challenges that policy makers must face up to:

 

1. The first is AI safety, or how to keep intelligent systems under human control. If we make algorithms arbitrarily powerful, how can we be sure that they will continue to do what we intend them to do? If we cannot achieve this level of control, then it is virtually certain that, sooner or later, these systems will harm humanity. This 'control problem' is now at the centre of a well-established field of academic study. However, the resources being devoted to it remain considerably lower than they should be, given the potentially existential risk from failing to solve it.

 

2. The second is global coordination. We have a long track record of inventing new technologies that we then use to fight wars, stifle economic competition and oppress people. The more powerful our technologies grow, the more worrying this tendency becomes. Given that we already have nuclear weapons that could destroy all of humanity, the development of an AI arms-race mentality, in which corporations and countries attempt to gain a geopolitical edge via the development of AI, is something that should seriously scare us.

 

3. The third challenge is the potential for humans to harm AI systems themselves. We are building systems of greater and greater complexity and capability; might these deserve some degree of moral protection? We already think that many non-human animals have moral status and can suffer, including quite simple animals like lab mice. Why should we not have the same concern for digital minds? At present we don't even have criteria for when we might start thinking about the interests of intelligent machines, even if these would merely reassure us that we have nothing to worry about. This could be dangerous because computers may deserve moral standing sooner than we think and, if we do not even ask questions about when this could happen, we could unwittingly commit atrocities against them.

 
Finally, Shahar Avin argued that there is no one set of risks from AI; rather, there are many risks that deserve our attention. Some of these are short-term risks while others are longer term, usually because they are as yet so unclear that it will take many years of study before people can even understand how to go about managing them. Similarly, some risks relate to the nature of the technology itself and what it can do, while others are associated with the powers that this technology will give to human beings and the potential for their misuse. Finally, some of these risks are associated with the development of new technology itself, while others relate to its broader systemic effects, such as its impact on the economy and global security.

 

Given this multiplicity of risks, policy makers need to consider the governance of AI across at least four levels:

 

1. First there are the AI algorithms themselves, which take the form of computer software and form the bedrock of technological development. It is very unlikely that regulation at this level will work well, because ideas want to be known and information wants to be free.

 

2. Second, there is the hardware that these algorithms need to run on. This is far easier to regulate because it is unequally distributed around the world, so countries that possess more computing power are able to exert more control over how it is used. As Professor Bryson argued, hardware can be just as important as software to AI. However, at present very little is known about the uses to which any given piece of computer hardware is being put, and we will not be able to regulate well without this crucial information.

 

3. Third, there is the regulation of the data that AI systems need to function. The developments in more generalised AI systems described by Professor Bostrom are all built on vast increases in available data, without which modern AI systems could not operate. The regulation of data is already a hot topic of debate and has gained some public as well as political traction of late. However, it is generally an area where people feel all governments could be doing better.

 

4. Finally, policy makers might try to regulate, or at least influence, the talent that is needed to create AI systems: how we educate, employ and monitor these people. This seems much more like the bread and butter of government policy, at least in some places. However, at present, in so far as artificial intelligence education and employment is viewed by government at all, it is viewed simply from an industrial strategy and growth maximisation perspective, with little attention given to risk awareness or risk management.

 

Here, the UK is undoubtedly well positioned to engage with international norm building around the development of AI. It also has a very important role in educating the next generation of AI engineers, and a lot of soft power and cultural capital to draw on. However, this talent is highly mobile internationally, and if it is not well developed and managed, the UK may not keep this position for long.

“Intelligence is not an abstraction, but rather a physical process subject to natural laws”
