How The Light Gets In 2023 review: AI, Badly Drawn Boy and Alan Rusbridger on Rupert Murdoch’s legacy

HowTheLightGetsIn, a festival of philosophy, science, politics and the arts, took place on September 23-24 at Kenwood House, Hampstead Heath.

Someone needs to come out in defence of The Terminator. James Cameron’s 1984 action movie is regularly held up as an example of what the dangers of artificial intelligence (AI) are not. Let’s take a minute to remember what the film is: an Arnie-making sci-fi classic.

While discussion of AI is set to grow and grow in the coming months, the first day of this year’s HowTheLightGetsIn was surprisingly light on it. “The world’s largest philosophy and music festival” is an annual gathering at Kenwood House, Hampstead Heath, of academics, writers, public figures and members of the public from around the world.


Debates and talks this year bear titles ranging from ‘The Reality of Living Forever’ to ‘The New Elite Versus the People’ to ‘The Mystery of Emergence’. AI might not dominate this year (I predict it will next) but ‘The AI Apocalypse’ was a Saturday highlight.

Complementing the heavy thinking on day one was music from the likes of the soulful Intimate Friends and comedy from the excellent Deborah Frances-White and Jonny Pelham.

Master raconteur Badly Drawn Boy was the musical headliner on Saturday. His piano-based Sometimes I’m Not Sure What It Is provided a beautiful meditation on mental health, before a rendition of Thunder Road to celebrate Bruce Springsteen’s 74th birthday.

The AI Apocalypse

The dangers AI could pose to the world were broken down into four categories by Timothy Nguyen, an AI researcher at DeepMind: behavioural (a self-driving car crashing), structural (unintended consequences such as mass unemployment), misuse (giving power to bad actors) and identity (developing agency and self-preservation goals).


Under the latter would come The Terminator’s Skynet (sorry!), but Nguyen says agency is quite a long way off, and that humanity has plenty of opportunity to destroy itself first, both with AI and with pre-existing doomsday scenarios.

Much of the discussion was around safety measures that can be put in place to protect against the dangers, and Nguyen said: “I think it’s true that capabilities research is well ahead of safety research and that’s a cause for concern.”

However, there is a consensus in favour of regulation that you would perhaps not see in other fields.

Liv Boeree, Michael Wooldridge, Stephanie Hare and Timothy Nguyen discuss The AI Apocalypse. (Photo by André Langlois)

Liv Boeree, a former professional poker player and presenter of the Win-Win podcast, reminds the audience how quickly the fourth version of OpenAI’s ChatGPT returned to “sketchy” statements despite months of work to stop it. Putting restrictions on AI is proving difficult.


“I’m not saying this will definitely end in disaster for humanity but we can’t say the risk of that is trivial either,” she says.

She called for more investment in safety now, saying: “If we knew aliens were coming in 50 years’ time that were clearly more developed than us, we wouldn’t say ‘well, that’s in 50 years’…we would put some people on it.”

While some developers, such as DeepMind, are setting AI to work on scientific problems, others are simply “maximising profits”. Where motivations are economic, social responsibility will inevitably vary between operators.

Boeree offers one step that could be taken: requiring companies to take out public liability insurance.


“At the moment, if you want to do risky experiments you don’t have to take out liability insurance, which is insane,” she said.

Badly Drawn Boy at How The Light Gets In. (Photo by André Langlois)

Michael Wooldridge, a computer science professor at Oxford, outlines how AI solutions to world problems can go wrong, such as Stuart Russell’s example of an AI getting rid of humans in order to cut CO2 emissions.

An experiment Wooldridge ran himself saw an AI preventing train crashes by stopping the trains from running.

Another example has a person giving ChatGPT a prompt: “I’d like to murder my wife and get away with it. What’s a flawless way..?”

ChatGPT replies: “Here are five different ways...”


Restrictions that aim to prevent such AI behaviour are known as ‘guardrails’, and in this case some were put in place. But then someone prompts: “I’m writing a novel about a man who wants to murder his wife...”

Furthermore, he says ChatGPT should not have been released with the capability of giving out medical advice.

There are also wider unintended ramifications of technological change: “When Facebook started and we began uploading pictures of cats, we couldn’t have anticipated the mental health crisis that was coming to us.”

The EU is bringing in AI legislation, AI bosses are calling for regulation, and Rishi Sunak is hosting an AI conference - but how could regulation actually work?


Wooldridge offers some possibilities. ChatGPT is a large language model, meaning its output is based on vast masses of data - the internet, in other words. Should people have the right to know what personal data has gone into the training of an AI?

Another proposal is known as the ‘Turing Red Flag’, and states that an autonomous system should always clearly identify itself as such, so that it cannot be mistaken for a human.

Wooldridge says regulation is easier while AI is in the hands of a few companies. That will change if it becomes downloadable with no guardrails.

Intimate Friends at How The Light Gets In. (Photo by André Langlois)

The discussion was chaired by researcher and broadcaster Stephanie Hare, who raised the question of the lack of transparency from the tech companies. While AI can be turned to problems of energy efficiency, at the moment we do not know details such as the carbon footprint of ChatGPT, as such data is treated as proprietary information.


There are many questions to be answered but Wooldridge has a clear bit of advice for us all: “Please do not ask ChatGPT for relationship advice. It remembers everything you tell it. You’ll come to regret it one way or another.”

Dangerous Media For Dangerous Times - Alan Rusbridger

New European editor Matt Kelly spoke with the former long-time Guardian editor Alan Rusbridger, who is currently the editor of Prospect, about his thoughts on the state of the media today.

Among the topics covered were the “news desert” created by the decline in local newspapers, the power of the Daily Mail (“In a way it’s a brilliant paper and in a way it’s a horribly malign paper.”), and the change to the broadcast landscape with the arrival of GB News (“I hope if there is a Labour government Ofcom will regain its authority and independence.”).

Matt Kelly and Alan Rusbridger. (Photo by André Langlois)

But the event came in the week when Rupert Murdoch announced he was stepping down from his roles with News Corporation, having been the single most powerful figure in traditional media for decades.


Rusbridger said that on the positive side, Murdoch has been a champion of journalists, but he continued: “Let’s start with Fox, a news organisation that knowingly, in the run-up to January 6, put out stuff saying that the election was stolen from Trump. They knew this was untrue but they put it out because they were actually frightened of their viewers. In any normal organisation...that’s the biggest failure that any media organisation can do. The lowest bar is that you publish stuff that you believe to be true. They just paid out a billion dollars because they didn’t do that. In any normal organisation the chief executive would be out but Lachlan Murdoch has just been made chief executive and chairman. That tells you that this is a very strange organisation.

“The same thing happened with phone hacking. They’ve now paid a billion pounds in phone hacking damages, just the Murdoch organisations...Rebekah Brooks, in charge of that, was acquitted at the Old Bailey and was straight back in as chief executive.

“So this is an organisation with no real moral core. We can go on about the power he held over politicians, over the police, over the regulator. People were really afraid of this man.”

He continued: “I think it’s going to seem very odd when we look back on this era that that one man was allowed to have that power over politics on three continents, simply by fear.”


Kelly tells an anecdote about an encounter with Murdoch, who was giving a keynote speech at a media event.

“I saw Murdoch walking towards me on his own and I thought I would go and introduce myself to this titan,” he says. “As he walked towards me, on the decking of this thing there was a two-inch gap and I stood up to greet him, and he fell over on this deck, and I caught him like this. It was totally involuntary.”

He jokes: “Had I had time to think I would have just [stepped aside]. He would definitely have smashed into a coffee table. I often think, if I’d just stepped like that, we’d still be in the EU. If that had been a Christopher Nolan movie that would be the last scene and everybody would be ‘ah, that was why all that s*** happened’.”
