This article refers to numerous organisations and experts raising concerns about AI:
As this post suggests, this feels a little self-serving on the part of the companies building AI tools. It's a bit like how the Googles and Facebooks of the world welcome more regulation because they know they can throw money at the problem while upstarts won't be able to muster the resources to compete.
It also feels like a huge step from fluid text (that may not be accurate) and images (that can't get hands right) to "a risk of extinction."
Every new technology causes hand-wringing about the future. And while I haven't gone through examples to decide whether this is really true or not, I wonder if the actual negatives of any given technology turn out to be the ones people worry about in its early days.
This was a good read. I didn't know about ChaosGPT and found it interesting that the CEO of OpenAI is already acknowledging the dangers of AI.
The dangers they talk about seem plausible to me - if there actually were a real AI. It strikes me much more as a warning to those who might be trying to develop such a device.
But the stuff everybody's talking about today has nothing resembling actual intelligence. They are chatbots that select words and phrases based on a massive database of probabilities to determine what words/phrases are most likely to follow prior words/phrases in the context of a question. But the software has no actual understanding of the question or its response.
The front-end app that parses your natural language questions into what the ML model expects, and parses the ML output into natural language phrases, does a good job of fooling people into believing that there's something more under the covers, but there's no actual intelligence to a generative language ML model. No more than the "intelligence" in your iPhone's camera used to separate objects from backgrounds.
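As a rough sketch of what "selecting words based on probabilities" means in practice (the words and numbers below are invented for illustration; a real model ranks tokens with billions of learned parameters rather than a lookup table):

```python
import random

# Invented probabilities a model might assign to the next word after
# the prompt "the cat sat on the" - purely illustrative numbers.
next_token_probs = {
    "mat": 0.62,
    "floor": 0.21,
    "roof": 0.09,
    "keyboard": 0.08,
}

def sample_next_token(probs):
    """Pick one continuation, weighted by the model's probabilities."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "the cat sat on the"
print(prompt, sample_next_token(next_token_probs))
```

Nothing in that code "knows" what a cat or a mat is; it just picks a statistically likely continuation, which is the point being made above.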
Here is an AI researcher's response:
There was this story today: https://twitter.com/armanddoma/status/1664331870564147200?s=61&t=_KrpZj6ObFtYllSikqPqvw
In case you don't want to open Twitter, it says,
The US Air Force tested an AI-enabled drone that was tasked to destroy specific targets. A human operator had the power to override the drone, so the drone decided that the human operator was an obstacle to its mission and attacked him.
Plus this:
Interesting story. But if this is true, the Air Force's underlying failure was not having programmed it to never attack a friendly. That seems a "classical" error, unrelated to AI. As somebody who's received a couple of research grants from the Federal Government related to AI and ML, I'm tempted to suggest a little more "I" and a little less "A" should be applied when coding for lethal drones.
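To make the "classical" point concrete: that kind of rule is just a hard-coded veto sitting outside any learned model, something along these lines (the names and structure here are purely illustrative, not from any real system):

```python
from dataclasses import dataclass

@dataclass
class Target:
    identifier: str
    is_friendly: bool  # set from IFF / mission data, never by the learned model

def approve_engagement(target: Target, model_recommends_engaging: bool) -> bool:
    """Hard veto layer: a friendly can never be engaged,
    no matter what the learned component recommends."""
    if target.is_friendly:
        return False
    return model_recommends_engaging

# The operator's station would be flagged friendly up front, so a
# "decision" to attack it is rejected before it ever reaches a weapon.
operator_station = Target(identifier="ground-control", is_friendly=True)
print(approve_engagement(operator_station, model_recommends_engaging=True))  # False
```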
I doubt that was really "AI", but rather a poor attempt at faux-AI for the bosses, described in an Asimov-lookalike story without the Rules of Robotics.
Isn't that Rule #1 of Robotics?
The one thing about these systems that concerns me is that they do not run on the normal, explicitly programmed branching algorithms. Sometimes you cannot tell why they make certain decisions, because the reasoning is buried somewhere inside the model.
That tweet no longer exists. Believe what you will about that.
Personally, this reads more like a fictional account than something an actual ML expert would write. It is ascribing motivations to algorithms that have no such thing. And if there is something that is truly AI, as the text is implying, it would be so massively classified that nobody would be writing about it.
That having been said, ML applications do what they're trained for, and if you train them badly, they aren't always going to do what you expect.
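A toy illustration of "training badly": if the scoring a system is optimised against only rewards destroyed targets and treats an operator's veto as lost points, then ignoring or removing the operator scores best - not because the software wants anything, but because that is what the numbers say. Everything below is invented for illustration:

```python
# Invented, simplified mission scoring with no penalty for harming the
# operator or for ignoring a veto - the flaw is in the specification.
def mission_score(targets_destroyed: int, operator_vetoes: int,
                  operator_attacked: bool) -> int:
    score = 10 * targets_destroyed
    score -= 10 * operator_vetoes  # each veto forfeits one target's worth of points
    # Note what's missing: operator_attacked never reduces the score.
    return score

# "Obey every veto" vs. "attack the operator, then hit every target":
print(mission_score(targets_destroyed=6, operator_vetoes=4, operator_attacked=False))   # 20
print(mission_score(targets_destroyed=10, operator_vetoes=0, operator_attacked=True))   # 100
```

A policy optimised against a score like that will drift toward the second behaviour; the fix lies in the reward specification, not in the neural net itself.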
And to always obey the chain of command.
While it might be noble for human soldiers to question authority (and risk punishment for disobedience), that is 100% unacceptable for any piece of machinery.
They are obeying the algorithms, but neural nets are extremely complicated algorithms that often lead to unexpected behavior.
But this is not unique to neural nets. Any non-trivial piece of modern software is going to be too complex to completely test. That's why all non-trivial software has bugs - sometimes catastrophic ones. It's not because of AI, but because this is an incredibly complicated system that is going to require an incredible amount of testing before it can be trusted to work reliably.
As for the story reported here, the article doesn't say if this is an initial proof-of-concept hack-build or something that's been in development for years or something that's going to be deployed soon. The behavior described may be expected for something in its early stages, but should result in project cancellation if this is something close to deployment.
Posted later:
https://twitter.com/ArmandDoma/status/1664600937564893185?cxt=HHwWgsCzhffu7JkuAAAA
And: https://twitter.com/ArmandDoma/status/1664599530933739520?cxt=HHwWgICz6Yad7JkuAAAA
So. Fiction.
Either that…or it's real and he spoke out of turn, and his bosses reamed him and told him to retract it because, although it was a mistake, they don't want public opinion to get a project they really don't want killed…killed. Having spent my entire working life in DoD, active duty Navy and then as a contractor…the latter is not out of the question.
I do think they would basically follow Asimov's 3 rules of robotics in an actual AI device…to completely prevent an attack on friendlies…but as noted all software has bugs, and an actual AI, as opposed to the so-called AIs we have in ChatGPT and other software, would be far, far more complex.
While Colonels…and Admirals and Generals…can and do talk out their rear ends and do or say dumb things…a lot of the time there is a kernel of truth buried somewhere in there. Many people have accidentally or deliberately disclosed highly classified information…and on the other end of the spectrum they classify things that shouldn't be…I could give examples, but I don't know if they ever fixed some or all of them.
If that's real, then the Air Force has a level of AI tech that is decades beyond what every expert in academia and corporate America has, because that device could somehow understand its own control mechanism and not just operate based on the sensors it was programmed to read.
There have been many cases where a weapons test needed to be aborted in order to prevent the operators from serious injury or death. But one involving an actual self-aware machine that makes a deliberate choice to disobey? No, I'm sorry. It's going to take a lot more than a retracted anecdote to convince me that that level of tech exists anywhere in the world today.
It would not surprise me if the Air Force had advanced AI…remember…I worked in that biz for a long time and DoD had many things that we mortals think are years off. However…if they do…it is likely still in early development and 5-10 years from being ready…and them doing some simulation tests isn't out of bounds…but the article as originally stated seems like early development. I have no idea really…but even when I was in that business the armed forces had things that at the time were considered pie in the sky. Maybe the Colonel misspoke…maybe he spoke truth out of bounds, and there's no real way to tell. My program had a half dozen things that most people would have thought not technically feasible…but they existed and were deployed. I'm just saying that the Colonel's "I misspoke" walk-back doesn't necessarily mean he lied or anything else.
I would be more inclined to believe personally that he knows a little and spilled the beans inadvertently…but doesn't really know the actual facts about the incident. One can easily speculate oneself into things that are mostly correct without any actual knowledge…for example Tom Clancy's original novel, and another one entitled Blind Man's Bluff by some other folks.
I don't believe we are anywhere close to self-sentient AI at this point…it's a matter of computer horsepower in a small enough package to be militarily significant, since that needs the ability to move as needed in some fashion, and while chips are way better than 20 years ago they're not, AFAIK, good enough yet.
Can a weapon obey Asimov's First Law of Robotics?
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
It doesn't say anything about "except a human being designated as an enemy human being."
Ars Technica has generally good coverage of "AI". It's often worth at least scanning the comments.
Bruce Schneier signed the group statement, and is bemused that everyone has latched on to the "extinction" word:
"I actually don't think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war, which is to say, a risk worth taking seriously, but not something to panic over. Which is what I thought the statement said."
https://www.schneier.com/blog/archives/2023/06/on-the-catastrophic-risk-of-ai.html
As far as our extinction goes, nothing about the current so-called AI is going to do more than somewhat accelerate the vast processes we've already set in motion, including the ongoing mass extinction event.
Agricultural soils around the world are critically degraded.
Biodiversity in agriculture is so low that a few more diseases could mostly wipe out not only some major crop plants and livestock, but crops such as rubber (there are no synthetic rubbers that can be used as replacements in aircraft tires). Just three companies control about 90% of the global seed market. Clean water is endangered by the mining of aquifers, pollution, and drought. And the list goes on…
Rob Dunn is a biodiversity specialist and an excellent writer. Two strong relevant recommendations are "Never Out of Season" and "A Natural History of the Future". They aren't all gloom and doom; mostly they're just fascinating. His other six books are great too.
David Montgomery, a sedimentation geologist, has a wonderful somewhat older book (2009), "Dirt: The Erosion of Civilizations", that gives a great historical perspective of the importance of soils and agriculture to civilizations.
(Spoiler alert) Andy Weirās novel Artemis has an interesting plot about a self-aware AI on a Moon base that takes sides in a conflict.
Having an extremely similar background to yours, including branch of service and post-retirement employment, I have to say that I respectfully disagree with your assessment; I was involved in many projects but in nearly 40 years I never caught the slightest whiff of what you allude to.
The supposed statement read like a ChatGPT release, and the lack of any references or links makes me tend towards dismissing it out of hand.
I'm good with that…obviously we had different careers in different specialties and whatnot so different opinions are certainly fine. What the Colonel really knows, did, or saw isn't clear at all from either the original article or the retraction/walk back/correction/thought experiment.