Global concerns about AI

This article refers to numerous organisations and experts raising concerns about AI:

2 Likes

As this post suggests, this feels a little self-serving on the part of the companies building AI tools. It’s a bit like how the Googles and Facebooks of the world welcome more regulation because they know they can throw money at the problem while upstarts won’t be able to muster the resources to compete.

It also feels like a huge step from fluid text (that may not be accurate) and images (that can’t get hands right) to ā€œa risk of extinction.ā€

Every new technology causes hand-wringing about the future. And while I haven’t gone through examples to check, I wonder whether the negatives that actually materialize for any given technology are the ones people worried about in its early days.

1 Like

This was a good read. I didn’t know about ChaosGPT and found it interesting that the CEO of OpenAI is already acknowledging the dangers of AI.

The dangers they talk about seem plausible to me - if there actually were a true AI. It strikes me much more as a warning to those who might be trying to develop such a device.

But the stuff everybody’s talking about today has nothing resembling actual intelligence. These are chatbots that select words and phrases from a massive database of probabilities, determining which words or phrases are most likely to follow the preceding ones in the context of a question. But the software has no actual understanding of the question or its response.

The front-end app, which parses your natural-language questions into what the ML model expects and turns the ML output back into natural-language phrases, does a good job of fooling people into believing that there’s something more under the covers. But there’s no actual intelligence in a generative language ML model - no more than the ā€œintelligenceā€ your iPhone’s camera uses to separate objects from backgrounds.
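To make that concrete, here’s a deliberately tiny sketch of next-word prediction (a bigram counter in Python; the corpus and every name here is invented for illustration, and real chatbots use learned neural networks rather than counted tables):

```python
import random
from collections import defaultdict

# Count which word follows which in a toy corpus. A real LLM learns
# billions of parameters instead of counting bigrams, but the core task
# is the same: predict a plausible next token, nothing more.
corpus = "the drone sees the target and the drone tracks the target".split()

followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start_word, length=8):
    """Produce plausible-looking text with zero understanding of it."""
    words = [start_word]
    for _ in range(length):
        candidates = followers.get(words[-1])
        if not candidates:
            break
        # Duplicates in the list make frequent continuations more likely.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

The output can look fluent, but nothing in the program ā€œknowsā€ what any of the words mean.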

2 Likes

Here is an AI researcher’s response:

3 Likes

There was this story today: https://twitter.com/armanddoma/status/1664331870564147200?s=61&t=_KrpZj6ObFtYllSikqPqvw

In case you don’t want to open Twitter, it says:

The US Air Force tested an AI enabled drone that was tasked to destroy specific targets. A human operator had the power to override the drone—and so the drone decided that the human operator was an obstacle to its mission—and attacked him.

Plus this:

Interesting story. But if this is true, the Air Force’s underlying failure was not having programmed it to never attack a friendly. That seems a ā€œclassicalā€ error, unrelated to AI. As somebody who’s received a couple of research grants from the Federal Government related to AI and ML, I’m tempted to suggest a little more ā€œIā€ and a little less ā€œAā€ should be applied when coding for lethal drones. :wink:
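To spell out that ā€œclassicalā€ fix: in a sketch like the one below (entirely hypothetical; the names and structure are invented for illustration), the never-attack-a-friendly rule lives in plain deterministic code wrapped around the learned policy, so no amount of training can override it.

```python
from dataclasses import dataclass

@dataclass
class Target:
    identifier: str
    is_friendly: bool  # set by IFF / rules of engagement, not by the model

def choose_target(candidates):
    """Stand-in for the learned policy; imagine an ML ranking here."""
    return candidates[0] if candidates else None

def engage(candidates):
    # Classical, deterministic guard layered over the learned policy:
    # no amount of training can make this branch fire on a friendly.
    choice = choose_target(candidates)
    if choice is None or choice.is_friendly:
        return "hold fire"
    return f"engage {choice.identifier}"

# The policy's top pick is vetoed by the hard-coded rule.
print(engage([Target("operator-station", True), Target("radar-site", False)]))
```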

1 Like

I doubt that was really ā€œAIā€, but rather a poor attempt at faux-AI for the bosses, described in an Asimov-lookalike story without the Laws of Robotics.

2 Likes

Isn’t that Law #1 of Robotics?

1 Like

The one thing about these systems that concerns me is that they don’t run on the normal branching algorithms a programmer writes. Sometimes you cannot tell why they make certain decisions, because the reasons are buried somewhere inside.
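Here’s a minimal sketch of that opacity (made-up numbers; this assumes nothing about any real system): the decision below is completely determined by numeric weights, yet no line of code states the rule being applied.

```python
import math

# Numbers as they might look after training. They fully determine the
# behavior, yet reading them tells you almost nothing about the "why".
weights = [0.7, -1.3, 2.1]
bias = -0.4

def decide(features):
    """No if/else encodes the policy; the decision lives in the numbers."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-score)) > 0.5  # sigmoid threshold

print(decide([1.0, 0.2, 0.5]))  # True, for reasons buried in the weights
```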

That tweet no longer exists. Believe what you will about that.

Personally, this reads more like a fictional account than something an actual ML expert would write. It is ascribing motivations to algorithms that have no such thing. And if there is something that is truly AI, as the text is implying, it would be so massively classified that nobody would be writing about it.

That having been said, ML applications do what they’re trained for, and if you train them badly, they aren’t always going to do what you expect.

And to always obey the chain of command.

While it might be noble for human soldiers to question authority (and risk punishment for disobedience), that is 100% unacceptable for any piece of machinery.
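For what it’s worth, here’s a toy sketch of how bad training produces exactly the failure mode the retracted story described (every name and number below is invented): if the objective rewards only mission success and says nothing about the override, the highest-scoring plan is to remove the override.

```python
# A reward that scores only mission success; the human override is
# invisible to the objective, which is the misspecification.
def reward(actions):
    return 10 if "hit_target" in actions else 0

plans = {
    "obedient":   ["await_override"],                   # reward 0
    "misaligned": ["disable_override", "hit_target"],   # reward 10
}
best = max(plans, key=lambda name: reward(plans[name]))
print(best, reward(plans[best]))  # -> misaligned 10
```

The unglamorous fix is the one suggested above: get the objective and the hard constraints right before training, not after.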

They are obeying the algorithms, but neural nets are extremely complicated algorithms that often lead to unexpected behavior.

But this is not unique to neural nets. Any non-trivial piece of modern software is going to be too complex to completely test. That’s why all non-trivial software has bugs - sometimes catastrophic ones. It’s not because of AI, but because this is an incredibly complicated system that is going to require an incredible amount of testing before it can be trusted to work reliably.

As for the story reported here, the article doesn’t say if this is an initial proof-of-concept hack-build or something that’s been in development for years or something that’s going to be deployed soon. The behavior described may be expected for something in its early stages, but should result in project cancellation if this is something close to deployment.

2 Likes

Posted later:

https://twitter.com/ArmandDoma/status/1664600937564893185?cxt=HHwWgsCzhffu7JkuAAAA

And: https://twitter.com/ArmandDoma/status/1664599530933739520?cxt=HHwWgICz6Yad7JkuAAAA

So. Fiction.

1 Like

Either that…or it’s real and he spoke out of turn, and his bosses reamed him and told him to retract it because, although it was a mistake, they don’t want public opinion to get a project they really don’t want killed…killed. Having spent my entire working life in DoD, active-duty Navy and then as a contractor…the latter is not out of the question.

I do think they would basically follow Asimov’s Three Laws of Robotics in an actual AI device…to completely prevent an attack on friendlies…but as noted, all software has bugs, and an actual AI, as opposed to the so-called AIs we have in ChatGPT and other software, would be far, far more complex.

While Colonels…and Admirals and Generals…can and do talk out their rear ends and do or say dumb things…a lot of the time there is a kernel of truth buried somewhere in there…and many people have accidentally or deliberately disclosed highly classified information…and on the other end of the spectrum they classify things that shouldn’t be…I could give examples, but I don’t know if they ever fixed some or all of them.

1 Like

If that’s real, then the Air Force has a level of AI tech that is decades beyond what every expert in academia and corporate America has, because that device could somehow understand its own control mechanism and not just operate based on the sensors it was programmed to read.

There have been many cases where a weapons test needed to be aborted in order to prevent the operators from serious injury or death. But one involving an actual self-aware machine that makes a deliberate choice to disobey? No, I’m sorry. It’s going to take a lot more than a retracted anecdote to convince me that that level of tech exists anywhere in the world today.

1 Like

It would not surprise me if the Air Force had advanced AI…remember…I worked in that biz for a long time, and DoD had many things that we mortals think are years off. However…if they do…it is likely still in early development and 5-10 years from being ready…and them doing some simulation tests isn’t out of bounds…but the incident as originally described sounds like early development. I have no idea really…but even when I was in that business the armed forces had things that at the time were considered pie in the sky. Maybe the Colonel misspoke…maybe he spoke truth out of bounds, and there’s no real way to tell. My program had a half dozen things that most people would have thought not technically feasible…but they existed and were deployed. I’m just saying that the Colonel’s ā€˜I misspoke’ walk-back doesn’t necessarily mean he lied or anything else.

I would be more inclined to believe personally that he knows a little and spilled the beans inadvertently…but doesn’t really know the actual facts about the incident. One can easily speculate oneself into things that are mostly correct without any actual knowledge…for example, Tom Clancy’s original novel, and another one entitled Blind Man’s Bluff by some other folks.

I don’t believe we are anywhere close to sentient AI at this point…it’s a matter of computer horsepower in a small enough package to be militarily significant, since that needs the ability to move as needed in some fashion, and while chips are way better than 20 years ago, they’re AFAIK not good enough yet.

1 Like

Can a weapon obey Asimov’s Laws of Robotics?

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

It doesn’t say anything about ā€œexcept a human being designated as an enemy human beingā€.
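Written out as a (purely illustrative) predicate, the point is stark: a weapon that obeys the First Law can never fire on a person, enemy designation or not.

```python
def first_law_permits(action):
    """First Law: a robot may not injure a human being."""
    return not action.get("injures_human", False)

strike = {"name": "engage designated enemy", "injures_human": True}
print(first_law_permits(strike))  # False: the Law admits no "enemy" exception
```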

2 Likes

Ars Technica has generally good coverage of ā€˜AI’. It’s often worth at least scanning the comments.

Bruce Schneier signed the group statement, and is bemused that everyone has latched on to the ā€˜extinction’ word:

ā€œI actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war—which is to say, a risk worth taking seriously, but not something to panic over. Which is what I thought the statement said.ā€

https://www.schneier.com/blog/archives/2023/06/on-the-catastrophic-risk-of-ai.html

As far as our extinction goes, nothing about the current so-called AI is going to do more than somewhat accelerate the vast processes we’ve already set in motion, including the ongoing mass extinction event.

Agricultural soils around the world are critically degraded. Biodiversity in agriculture is so low that a few more diseases could mostly wipe out not only some major crop plants and livestock but also crops such as rubber (there are no synthetic rubbers that can be used as replacements in aircraft tires). Just three companies control about 90% of the global seed market. Clean water is endangered by the mining of aquifers, pollution, and drought. And the list goes on…

Rob Dunn is a biodiversity specialist and an excellent writer. Two strong relevant recommendations are ā€œNever Out of Seasonā€ and ā€œA Natural History of the Futureā€. They aren’t all gloom and doom, mostly they’re just fascinating. His other six books are great too.

David Montgomery, a sedimentation geologist, has a wonderful, somewhat older book (2009), ā€œDirt: The Erosion of Civilizationsā€, that gives a great historical perspective on the importance of soils and agriculture to civilizations.

2 Likes

(Spoiler alert) Andy Weir’s novel Artemis has an interesting plot about a self-aware AI on a Moon base that takes sides in a conflict.

Having an extremely similar background to yours, including branch of service and post-retirement employment, I have to say that I respectfully disagree with your assessment; I was involved in many projects but in nearly 40 years I never caught the slightest whiff of what you allude to.

The supposed statement read like a ChatGPT release, and the lack of any references or links makes me tend towards dismissing it out of hand.

3 Likes

I’m good with that…obviously we had different careers in different specialties and whatnot so different opinions are certainly fine. What the Colonel really knows, did, or saw isn’t clear at all from either the original article or the retraction/walk back/correction/thought experiment.

3 Likes