The multi-talented Stephen Fry has published the text of a talk about artificial intelligence he gave as the inaugural “Living Well With Technology” lecture for King’s College London’s Digital Futures Institute, saying:
Machines are capable of bias, hallucination, drift and overfitting on their own, but a greater and more urgent problem in my view is their use, abuse and misuse by the three Cs. They are Countries with their specific ambitions, paranoias, enmities and pride; Corporations with their unaccountable rapacity and of course Criminals. All of them united by one deadly sin: greed. Greed for power, for status, for money, for control. … We are the danger. Our greed. Our enmities, our greed, pride, greed, hatreds, greed and moral indolence. And greed.
That leads him to philosopher Daniel Dennett’s suggestion that AI should be compared to, regulated, and controlled not like the radio, the automobile, the Internet, or even nuclear weapons but like money, which is among humankind’s most foundational and transformative inventions. I’ll be pondering this idea, particularly with our difficulties surrounding cryptocurrency in mind, and I encourage thoughtful people to read Fry’s compellingly written and darkly humorous piece.
I’m aware that the topic of regulating and controlling AI is likely to lead some people to want to ride into battle on their favorite political hobbyhorses. Let’s try to avoid complaining about the specifics of particular parties or governments. Instead, I’d like the discussion to focus on the larger issues and Fry’s closing recitation of the Russell-Einstein Manifesto on nuclear weapons:
We appeal as human beings to human beings: Remember your humanity and forget the rest.
And yes, I will be moderating with a firm hand to keep things constructive.
Jill Lepore has an article in the current New Yorker magazine about the rise of speaking chatbots.
But I notice the small summary at the top, which I think agrees with Stephen Fry’s call for regulation:
Modern scientists have often constrained themselves in accordance with an emerging code of ethics. But nothing holds computer scientists back from developing talking machines that pretend to be human.
I’ve been thinking lately that while there’s been a lot of discussion about the dangers of superintelligent AI, it doesn’t have to be intelligent to be dangerous.
That is, there have been so many science fiction plots where an AI becomes intelligent and decides to wipe out humanity. But putting large models “in charge” without the right constraints can cause problems, even if the AI isn’t intelligent at all.
For example, let’s say you put an AI in charge of optimizing traffic flow in New York City, but forget to add “without killing pedestrians”.
Or, you put an AI in charge of managing and optimizing the Texas power grid, and make providing power to the AI’s servers a top priority, which seems logical. Then there’s a power crisis like Valentine’s Day 2021, and the AI conserves power for itself by shutting off power to hospitals.
Note: This was the premise of James P. Hogan’s novel The Two Faces of Tomorrow.
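To make the traffic example concrete, here’s a minimal, entirely hypothetical sketch (the plan names and numbers are made up, nothing measured) of how an optimizer that scores only throughput will happily pick a plan a human would reject, and how a single omitted safety constraint changes the outcome:

```python
# Each candidate signal-timing plan: (vehicles per hour, pedestrian incidents per year)
# All values are invented for illustration only.
candidate_plans = {
    "long greens, short walk signals": (5200, 14),
    "balanced timing":                 (4300, 3),
    "pedestrian-priority timing":      (3600, 1),
}

def naive_score(plan):
    throughput, _ = candidate_plans[plan]
    return throughput  # optimizes traffic flow only; safety never enters the objective

def constrained_score(plan, max_incidents=3):
    throughput, incidents = candidate_plans[plan]
    if incidents > max_incidents:
        return float("-inf")  # hard constraint: unsafe plans are never acceptable
    return throughput

print("Naive optimizer picks:      ", max(candidate_plans, key=naive_score))
print("Constrained optimizer picks:", max(candidate_plans, key=constrained_score))
```

The naive objective cheerfully selects the plan with the most pedestrian incidents because nobody told it that mattered; the “without killing pedestrians” part has to be written into the objective explicitly.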
I was thinking about that, too. What I don’t know is how well the analogy would serve us, given a few big differences.
Most notable, of course, is that everyone understands how nuclear weapons would destroy the world. BOOM! The situation is a rather immediate case of life and death, without anything like the nuance of AI.
AI has many broad uses that individuals could turn to their benefit, whereas nuclear weapons are, well, weapons of mass destruction. Arguably, nuclear science as a field compares more favorably, but even so, we don’t have personal reactors powering our houses.
Although they’re undoubtedly involved in the manufacture of nuclear weapons, corporations don’t have their own nukes, whereas corporations are the prime drivers of AI. Hence Fry’s contention that greed is behind it all.
Agreed—in these examples and many others, any bad algorithm could cause problems. The main difference I can see is that simpler algorithms could theoretically be debugged more easily than the black box of a neural network.
The real mistake would be to fail to build safeguards into systems.
Absolutely…but remember that in the era immediately after nuclear weapons were invented (i.e., where we are now with AI), there were all sorts of things that nuclear technology was going to be used for peacefully: mining, plane and spacecraft propulsion, and so on. Nuclear power was a much broader category and equally optimistic. AI in fifty years might be a godsend or it might be Skynet.
Classic Stephen Fry. Beautifully written, witty, humane and . . . he’s been a serious geek for decades so he doesn’t mischaracterize the tech. Chatting with Marvin Minsky forty years ago, no less.
The warning flags he is waving are absolutely appropriate. I put AI among the big three threats to us all, and it may soon rise to the top of the three.
How to control it is the huge question. It has no simple answer and what it may well take is many, many, many answers—as many answers as there are topics it touches.
Stephen Fry, along with Douglas Adams (The Hitchhiker’s Guide to the Galaxy and “Technology is stuff that doesn’t work yet”), is brilliant at predicting the future course (and threats) of technology.
I’m not a futurist, because the ideas I’ve read in science fiction, and the way history repeats itself, scare me. Regarding AI and Fry’s conclusions, the truth is that it’s all about money. I watched an office user’s face light up when I showed her Copilot (Microsoft’s preliminary AI, which I would have preferred as Cortana 2.0). She was making images (limited in access and content per my institution’s policy) and documents. And using up tokens. Tokens. And then it hit me: AI use or access is addictive! So, like gambling, these AI companies really just want to hook “users” so they can charge for prompts. If it makes your work better, they’ll profit.

I mean, think about how AI will change something like greeting cards. Just provide an image prompt and a message, then send it to a printer or simply post/send/message it to someone. Greeting card companies either need to get on board now, with their own AI app and access to their own art, and let users make their own cards…or they’re doomed.
As for nukes, it’s like that movie (Dark Star) with the AI planet-killer warhead: trying to rationalize its existence, it decides its only purpose is its target. But what if it could not detonate?
I used AI for images, and for my performance reviews, and for nice letters of thanks or farewell to colleagues leaving. Still not 100% satisfied but then, who is?
Apart from Skynet considerations, one concern is people using their brains less and leaning on AI as a crutch to get through life. Why spend time learning anything, or trying, or thinking, when AI can just do it for you? A trivial example is looking at eBay listings. Before AI, someone usually thought about what they were listing, doing some research or imparting some first-hand knowledge. Now, lots of the listings sound like the same regurgitated, nonsensical statements that really don’t say anything at all. AI, whatever it is, could be a useful tool, used in moderation.
My favorite example of this is the Air Force testing out having an AI run a (simulated, thankfully) attack mission.[1] The AI’s goal was to destroy the target, and it was overseen by a human operator to prevent it from making errors, hitting civilians, etc. The problem was that the goal hadn’t been written carefully enough and the AI decided that the operator was the main impediment to it completing its mission, so it killed (in a simulated way) the operator. When they told it not to do that, it blew up (in a simulated way) the communication tower that connected the operator to it.
[1] I’ll note that after this made news, the Air Force denied that it had happened.
I watched the 60 Minutes investigation of AI some months ago, in which its original developers stated that we no longer completely understand how it works. That, along with the designation of its errors as “hallucinations,” is enough to create concern for me.
Now let’s add some other things to this recipe that centers around Elon Musk:
He is in the process of building an immense supercomputer.
He is looking into restarting some nuclear plants, now shut down, to provide power for them.
He is starting to implant computer chips (Neuralink) into people’s brains, with the ultimate purpose of having computers controlled by the mind, or perhaps vice versa.
Historically, a science fiction movie made in the mid-1950s was essentially about this, and it is still considered one of the best sci-fi movies of all time and a game changer for the genre. It is about an alien civilization on a distant planet that did almost exactly what Elon Musk is working on, combining AI with unlimited power and a supercomputer, and about the disturbing consequences of doing so.
The movie, “Forbidden Planet”, starred Walter Pidgeon, Anne Francis, and Leslie Nielsen and was loosely based on Shakespeare’s “The Tempest”. It also introduced Robby the Robot, which was used for many years in other movies and TV shows and now resides in a Las Vegas atomic energy science museum. It is an incredible movie, with special effects (not CGI) that were years ahead of their time, and it is well worth watching as well as being remarkably relevant right now. It is available on many streaming services. If you choose to watch it, you will understand why combining the above seriously concerns me, and it may well concern you too.
25 years ago I interviewed someone who said his goal was to be neurally connected to the Internet. I did not hire him.
I agree that the machines could easily wipe us out carelessly, as opposed to maliciously.
One thing I haven’t heard mentioned is Kevin Kelly’s book “What Technology Wants,” which argues that we are not and have never been in control of our technology, that it is instead a new form of life which we need to parent rather than control. I found his arguments persuasive, as well as frightening and perhaps hopeful; maybe the machines will be more compassionate than the hairless ground apes.
I just enjoyed reading Adrian Tchaikovsky’s new book, Service Model, about a valet robot trying to figure out its purpose after humans are mostly extinct.
Hopefully it’s not a spoiler, but one of the characters in the book is disappointed that this extinction didn’t happen because robots rebelled and took over. Instead, it was just a gradual decline as humans became less capable, depending on robots for everything. Eventually humans were basically helpless and died off. That felt quite realistic to me. (Just like someone else in this thread mentioned AI threatening the human ability to think.)
Excellent book with actual realistic robots, not fantasy.
I’ve been working in the area of neural nets, autonomy, and adaptable controls in the aerospace field for a couple of decades. Recently, I’ve been intrigued by AI. From the technical side, I’ve been concerned, of course, with how Large Language Models (LLMs) are trained. In aerospace, LLM training needs to be done with competent source data. (Imagine if I’m looking to use AI to assess some part of aerospace and I’m using the “google-verse” of data to train the LLM. Yikes! The misinformation and hallucinations are rampant!) So I’m taking a look at how to use competent source material to train LLMs. (And thus my interest in “Apple Intelligence” as a model for that approach.) So the jury is still out for me on the technical side. And I’m not all that worried about the Skynet scenario, which is a step beyond AI into the world of true autonomy (not just the automation of automated systems, which is what we’re implementing now). Autonomy will require neuromorphic machines, which are only just getting started. Very tall technical poles!
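As a trivial illustration of that curation idea (the source labels and documents below are invented, not any real corpus or tool), the first step is simply refusing to let anything into the training set that doesn’t come from a vetted source:

```python
# Minimal sketch: build a training corpus only from vetted, "competent" sources,
# rather than the unfiltered "google-verse". Source names are hypothetical.
VETTED_SOURCES = {
    "regulatory_advisory_circulars",
    "internal_flight_test_reports",
    "peer_reviewed_journals",
}

raw_documents = [
    {"source": "regulatory_advisory_circulars", "text": "System design and analysis guidance..."},
    {"source": "random_web_forum",              "text": "I heard composite wings never fatigue..."},
    {"source": "internal_flight_test_reports",  "text": "Test card 12: flutter clearance results..."},
]

def build_training_corpus(documents):
    """Keep only documents whose source is on the vetted list."""
    return [doc["text"] for doc in documents if doc["source"] in VETTED_SOURCES]

corpus = build_training_corpus(raw_documents)
print(f"{len(corpus)} of {len(raw_documents)} documents kept for training")
```

Obviously real curation is far harder than a whitelist check, but the principle is the same: the model can only be as competent as the material it’s allowed to learn from.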
But dear Stephen Fry is not talking about technical tall poles or validation of LLMs. He’s talking about the human side. That’s a much tougher challenge and one that people will need to address. I am scared by some of the talk from tech CEOs who dismiss the real concerns. I’m concerned by some folks who are abusing it. But I’m also hopeful that we have the capacity to do well. It will require, though, that people actually read the whole argument! Anyone who types “tl;dr” should be dismissed from the conversation.
First, I know nothing. But having read Stephen Fry’s address, I feel I know much of what I need to know about AI at this point in my life (I’m in my 80th year). I have been a huge fan of all things Fry for decades, going back to the Hugh Laurie days. Someone above said it for me: the brilliant, witty, and humane Stephen Fry. Thanks Adam for sharing this link. And my organic brain is pleased with this brilliant start to our day.