
The ‘nightmare scenario’ – Big Tech, AI and the ‘end of the human race’

As with every tech show at the moment, there was a good deal of evangelical fervour at Digital Enterprise Show in Málaga last week about the potential of AI to super-charge enterprises and economies, and maybe to save the planet along the way. But, to its credit, the event also warned about the risks of unfettered AI. And, for all the passengers on this runaway train – which is everyone, except a few man-child monopolists in Big Tech (“Hot damn; I love you guys”) – the message was stark and urgent: get this right now, or get it wrong forever, and watch society fail.

In particular, an early panel about the “opportunities and risks” of AI convened three members of the new United Nations (UN) advisory board on AI, formed last October, to drive the message home. Wendy Hall, a computer scientist at the University of Southampton, and an old colleague of Tim Berners-Lee during the invention of the worldwide web in the late 1980s, said: “If we have machines that are cleverer than us, which have access to all this data and which can self-replicate and make their own decisions, then that is the end of the human race.”

She was referring, actually, to the futuristic concept of artificial general intelligence (AGI), where self-governing machines out-think and out-pace humans – long imagined in Hollywood as science fiction, and now researched in Silicon Valley as science fact. Hall said: “If you take it to the extreme, the machines become the masters and we become the slaves – just like in the Matrix. This is a nightmare scenario. I hate to paint the picture… but if our tech companies are determined to build AGI, then why are they doing it without… seriously considering the social impact?”

Hall was joined on the panel by Carme Artigas, a former state secretary for AI in Spain, now co-chair of the UN’s (‘high-level’) advisory board on AI, and Linghan Zhang, a professor of data law at the China University of Political Science and Law, and another member of the UN board. Zhang said towards the end of the session that, at a recent meeting of the UN’s new AI advisors in New York, the group split into two teams to work out how to deal with both the opportunities and the risks of AI – and all the men elected to address the opportunities, while all the women volunteered to tackle the risks.

It was anecdotal, and did not go any further; but apropos of the alpha zealotry in Big Tech and politics, it seemed like a telling aside. And even if gender roles were never explored, the top-down AI power-play was made clear. “These companies… have this sort of religious belief that this is where we should be going,” said Hall, later. The point was to make the case for urgent and collaborative regulation at a global level – including, notably, with China, presented here as progressive in some ways on fair and proper AI, even if its broader system of governance is anathema to the West.

Again, Hall commented: “There is a lot to learn from the way China manages the internet and AI. That doesn’t mean we have to accept its cultural values. We have different cultural values, and we can regulate in our own ways. We cannot pretend that we all have the same legislation, but we can all be on a baseline at a global level – to respect international law and human rights. We have to involve China, just as we [must do with] climate change. It’s such a big power. It’s doing so much in this world. And there are a lot of good things happening in China.”

Zhang offered a couple of examples of the upside of China’s policy on algorithmically-enhanced internet usage. It has imposed rules around certain types of content (“like terrorism and pornography, like in other countries”), she explained; but it has also had regulation in place for three years already to limit young people’s access to “addictive” internet content to a few hours at weekends, say, and to ban the use of speculative analytics data to punish workers (such as delivery drivers) for missed targets. The country is actively engaged with the global community on how to police AI and AGI, she said.

At the same time, she quoted a magazine survey which found that young people in China are most interested in using AI to make friends and money. She said: “Eighty percent of young people in China are not concerned about AI at all. What they care about is how to make money with AI, and how to feel less lonely with AI. It is different from my generation. I was born in the 1980s; my attitude is one of cautious optimism… I would like to see how AI might solve economic and societal problems. But… the young generation in China [is comfortable] to have intimacy with AI.”

She added: “They like the company of AI, and to make friends with AI… They don’t know [life] without [it]… [But] they need guidance from [older] generations… There is a saying in China that a car needs brakes before it goes on the road… [which should be] the attitude to AI.” Which is the point of regulation, and the point of the new UN board – and was also the point of the session in Málaga. Artigas, co-chair of the UN board and also chair of the Málaga panel, reminded the event, as if it needed saying, that regulation is not the enemy of innovation.

“All the discussion… [has been that] we’re going to kill innovation… That we cannot regulate AI. Yes we can. [And] we are not regulating technology [anyway]; we are only regulating the high-risk cases. More importantly, the problem is a lack of trust – about what companies and governments do with our data. Will they use it to control us? And the way to create trust is, firstly, with legislation and, secondly, with transparency. Legislation allows for a market to define its rules – which is good for consumers and citizens,” she explained, before raising the spectre of AGI again.

“This dystopian future… is a possibility. [But] it depends on us. The future is not written; we write it every day with our decisions and actions. It is the right time to act collectively… There is urgency; if we don’t do it now, there’s no second chance – to reverse the harm that will be done… [Because AI] is pervasive in every industry [and all of] society; it is the only technology that can evolve without us – which is not the case with electricity [or] atomic energy… Humans [must remain] responsible. A human agent is the key – whether we have a dystopian future or a utopian one.”

Which summed up the message from Málaga very well, even as the trio sought briefly, at the end, to explain the opposite utopia (AI as the last great hope to meet the UN sustainability goals, if only to arrest environmental decline, rather than to reverse it). But most of the rest of the show did that, as every tech show does these days. Really, the session was about the jeopardy, and the need for action; and even the planet-saving promise of properly-regulated AI was undercut, here, by warnings about its planet-sapping energy requirements.

Putting the earlier quote about the blind faith of Big Tech in context, Hall commented: “These companies developing LLMs for whatever reason… have this sort of religious belief that this is where we should be going. And they have the potential to destroy the planet before they destroy us – because of the huge amount of energy they absorb. So it is paradoxical that, on the one hand, we talk about how AI can help with sustainability and, on the other, the development of AI will [kill the planet first]… Which is why we need to put limits on [it].”

In Málaga, Hall described herself as an “optimist about AI”; but she also fired the clearest warning shots about its potential misuse and tyranny. It was an important panel session, and Hall’s comments, in particular, are worth hearing – and transcribed below for readers. 

“We just assumed people would use [the internet] for good. The internet – the protocols for which were invented 50 years ago this year, in 1974 – has held up remarkably… And [it] has changed our whole world. [Its] openness… was really important… but we just assumed people would use it for the good. We didn’t talk about regulation. In fact, in the early days, we were just ignored. Tim put the first website up in 1990, Google emerged around 2000, social media started around 2005/06 – and so for 10 years at least, we were ignored by most companies and most governments. Because nobody could see the potential… Our mission was almost evangelical.

“The internet and the worldwide web work on the ‘network effect’ – [the idea that] the more people that use it, the more people will use it. Which is its blessing and its curse. The blessing is that 60 percent of the world, maybe higher now, can access the internet… The curse is that, because… [of this] network effect, it was inevitable we would have these monopolies. Because… the apps become the centres of gravity… They become the giant attractors… So as they got bigger and bigger, they got bigger and bigger… That is what happens in networks. We didn’t regulate because we protected the openness [of it], and freedom of speech.

“[But] nobody looks at the packets on the internet in the western world. It’s different in China, which has a different view – and which is not all bad; the way they do things in China is sometimes a lot better. But we protect our democracies, freedoms, human rights… And we don’t know who to ask to do the censorship. Who should we turn to? Our governments? I don’t think so. The big tech companies? I don’t think so. Should it be up to us? I don’t think so. It’s somehow got to be a mix of all that.

“So I’m coming to AI. The term AI was coined in 1956. I’ve been working in AI for 40 years. It’s been around a long time. It feels like, all of a sudden, we’re in the era of large language models (LLMs) and the ChatGPTs of this world – [even though] it’s actually [been] a research and technological journey. [But it is] AI that is driven by data generated [on] the internet. And which companies are driving AI? The big tech companies that we put there because of the network effect. This is the scary thing – the control they will have over us if we don’t regulate and govern it properly.”

“What scares me is that the big tech companies that all grew on the back of the open internet are the companies that are driving AI. And the companies in the west – DeepMind in the UK, which is owned by Google, and the OpenAIs and the Elon Musks of this world – all say their main aim is to achieve AGI. Nobody really defines what that means, but if you take it literally, it means machines that can out-think us – which can self-replicate and self-regulate in whatever way we may or may not train them to, but probably [in ways we will] not, in the worst case. That’s AGI. That’s the vision. It always has been.

“Way back when the term AI was coined, people were trying to build the human brain. We’ve moved away from that to a certain extent. There is a big difference between machine intelligence and human intelligence. But my thesis is that if we have machines that are cleverer than us, which have access to all this data and which can self-replicate and make their own decisions, then that is the end of the human race. As Stephen Hawking said in his last interview before he died, if we can build machines like this, then they will out-evolve us.

“They don’t need to have emotion, or a conscience, or a soul in order to destroy the human race. It’s a different type of intelligence… We are biological; we evolve much more slowly. Machines will evolve faster… It’s like the Daleks in Doctor Who, which couldn’t climb the stairs – well, these robots will climb the stairs. So if you take it to the extreme, the machines become the masters and we become the slaves – just like in the Matrix. This is a nightmare scenario. I hate to paint the picture. I like to talk about the opportunities as well. But if our tech companies are determined to build AGI, then why are they doing it without… seriously considering the social impact?”

“[There are lessons from history]. Think about pocket calculators in the early 1980s. [I was teaching maths at the time], and… people said, ‘over my dead body’ – about their usage in classrooms and exams. They said it was going to destroy the human brain because people wouldn’t be able to do mental arithmetic. [And] calculators are an early form of AI. They do arithmetic faster and easier than we can. Of course, it is ‘garbage in / garbage out’; but with the right numbers in, you get the right answers out. Which is not true of ChatGPT. I’ve seen very good research papers with mathematical proofs that LLMs, by the way they’re designed, have to make things up. They have to hallucinate; if they don’t know the answer, they’re trained to make one up.

“That’s not true of a calculator. You can trust a calculator. And look what we’ve done with calculators – they’ve changed the whole finance industry… My father was an accountant for a manufacturing company in the 1970s, and everything was done by hand. All those jobs have gone now, and many, many more jobs have been created in the finance world using this early form of AI. And retrospectively, we had to bring in regulation because of it. We didn’t regulate calculators, but we had to regulate the industry that was driven by calculators and computers as a result. [And really, these] grown-up calculators led to the financial crash of 2008, because no one knew who owned the debt. We see this being replicated with AI. We need to learn from history.”

ABOUT AUTHOR

James Blackman
James Blackman has been writing about the technology and telecoms sectors for over a decade. He has edited and contributed to a number of European news outlets and trade titles. He has also worked at telecoms company Huawei, leading media activity for its devices business in Western Europe. He is based in London.