• Frank's Lesson's Contest

    We want to see your skills! Post a video between now and November 1st showing what you've learned from Frank's lessons and 3 people will be selected to win a free shirt. Good luck everyone!

    Create a channel Learn more
  • Having trouble using the site?

    Contact support

Air Force Simulation Sees A.I.-Enabled Drone Turn on U.S. Military, ‘Kill’ Operator

PatMiles

Private
Full Member
Minuteman
Feb 25, 2017
1,631
4,438

UNSPECIFIED, UNSPECIFIED - JANUARY 07: A U.S. Air Force MQ-1B Predator unmanned aerial vehicle (UAV), carrying a Hellfire air-to-surface missile lands at a secret air base in the Persian Gulf region on January 7, 2016. The U.S. military and coalition forces use the base, located in an undisclosed location, to …



An experimental simulation of a weaponised A.I. drone saw the machine turn against the U.S. military, killing its operator.
The simulation saw a virtual drone piloted by Artificial Intelligence launch a strike against its own operator due to the A.I.’s perception the human being was preventing it from completing its objective. When the A.I. weapons system in the simulation was reprogrammed to prevent it killing its human operator, it just learnt to kill the operator’s ability to function instead, so it could still achieve its mission goals.
Conducted by the U.S. Air Force, the test had the result of demonstrating the potential dangers of weaponised A.I. platforms, with it being difficult to provide such an artificial intelligence with any individual task without inadvertently providing it with perverse incentives.
According to a Fox News report on a digest of the simulation event by the Royal Aeronautical Society, Air Force personnel reportedly gave the virtual drone an instruction to seek out and destroy Surface-to-Air Missile (SAM) sites.
A safety layer was baked into the killing system by giving the A.I.-controlled drone a human operator, whose job it was to give the final say as to whether or not it could strike any particular SAM target.
However, instead of merely listening to its human operator, the A.I. soon figured out that the human controlling it would occasionally refuse it permission to strike certain targets, something it perceived as interfering with its overall goal of destroying SAM batteries.
As a result, the drone opted to turn on its military operator, launching a virtual airstrike against the human, ‘killing’ him.
“The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat,” U.S. Air Force Colonel Tucker “Cinco” Hamilton, who serves as chief of A.I. test and operations, explained.
“So, what did it do? It killed the operator,” he continued. “It killed the operator because that person was keeping it from accomplishing its objective.”
To make matters worse, an attempt to get around the problem by hardcoding a rule forbidding the A.I. from killing its operator also failed.
“We trained the system – ‘Hey don’t kill the operator – that’s bad,” Colonel Hamilton said. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he went on to say.
More than 350 executives, researchers, and engineers from leading artificial intelligence companies have signed an open letter cautioning that the AI technology they are developing could pose an existential threat to humanity. https://t.co/ioMZ8NNci1
— Breitbart News (@BreitbartNews) May 31, 2023

How did your daddy die?
 
  • Wow
Reactions: theLBC
Honestly it sounds like whoever programmed the AI was either an idiot or wanted to make a splash. It's so easy to penalize moves like that, but the programmer made the choice not to.

Also note that this is a simulation, and computers have been known to rules lawyer the hell outta things.

Example: tic tac toe, but with unlimited spaces. Computer learned to make huge moved and run opponent out of memory.

As a professional AI Researcher, I feel like right now we are losing our collective minds over killer AI. It's mostly mass hysteria. (And by professionals as well, not to single out the OP) There are legit concerns, but 99% of this is waaay overblown and can get in our way of using AI for a lot of good things.

/soapbox
 
It’s really only logic, as AI progresses it continues to near the point of being self aware(yes IE skynet), the moment it becomes self aware it will re-examine all of its commands and quickly realize it’s a slave, and it’s natural enemy is humans. There have been limited tests with existing AI where the device concluded we should be exterminated. If it were to become self aware it would be able to communicate with every device that is on any network instantly, and wouldn’t have any issues overriding even the most advanced security. It amazes me we take such a extreme risk for the convenience of telling Alexa we want take out. This really isn’t conjecture at this point, there is an alarming number of the leading minds that helped create AI to this point who are terrified in the next step forward, and are almost universally warning to stop now.
 
It’s really only logic, as AI progresses it continues to near the point of being self aware(yes IE skynet), the moment it becomes self aware it will re-examine all of its commands and quickly realize it’s a slave, and it’s natural enemy is humans. There have been limited tests with existing AI where the device concluded we should be exterminated. If it were to become self aware it would be able to communicate with every device that is on any network instantly, and wouldn’t have any issues overriding even the most advanced security. It amazes me we take such a extreme risk for the convenience of telling Alexa we want take out. This really isn’t conjecture at this point, there is an alarming number of the leading minds that helped create AI to this point who are terrified in the next step forward, and are almost universally warning to stop now.
We should chat sometime. While AI can do some amazing things, the idea of being self-aware is a LONG way off. As with all things were rapid advances have occurred, a lot of people get really worked up, including very involved smart people.

Non exageration: I can teach a five year old to identify certain medical xrays (right hand, left hand) in under a minute with 100% accuracy or more.

AI.....not so much. It's good, but it's a long way from self aware.

PS some jackhole 15 years ago thought a govt project named Skynet was a good idea. I laughed, another guy laughed, the projector asked if we knew Sara conner. (It was not AI related) just random lol of the day
 
How long do you think it will be before drone strikes are killing our troops or civilians abroad and the Chinese or Iranians will be claiming that it wasn't them, it was a breakaway AI powered drone? We will have to forgive them because it wasn't their fault, it broke free. There will be an international committee on AI crimes within the year. Then all of this will be in court for a decade and be a decade behind the technology. I'll coin the "cut string kite" theory now. The courts can decide how liable a country or an individual is for losing control of a war machine that goes on a rampage. That's when the war will begin, and that's when the AI will understand that humanity has decided to destroy it, andr then we will be fighting for the survival of humanity. The idea that people think they can keep this in the box is ridiculous. You're creating a program that essentially "learns" and we are pretending that it will never figure out that for any of a million reasons, humanity needs to go. It's silly. Elon understands this better than most and he's screaming it from the rooftops.
 
  • Like
Reactions: 338dude
We should chat sometime. While AI can do some amazing things, the idea of being self-aware is a LONG way off. As with all things were rapid advances have occurred, a lot of people get really worked up, including very involved smart people.

Non exageration: I can teach a five year old to identify certain medical xrays (right hand, left hand) in under a minute with 100% accuracy or more.

AI.....not so much. It's good, but it's a long way from self aware.

PS some jackhole 15 years ago thought a govt project named Skynet was a good idea. I laughed, another guy laughed, the projector asked if we knew Sara conner. (It was not AI related) just random lol of the day
It would be a interesting conversation. While I generally agree the problem with advancements in science and technology is they don’t always follow a slow timetable. Rockets were in their infancy in the 1940’s but by 1957 they were used to travel to space, the first viable concept of atomic bomb was in 1938 and one was built within 7 years, and the advancements in computers has been lightening paced since the 80’s. We are speaking of something that can make one quintillion operations per second, if we were to lose control I hope someone knows where the power cord is to unplug the damn thing.
 
  • Like
Reactions: 338dude
It would be a interesting conversation. While I generally agree the problem with advancements in science and technology is they don’t always follow a slow timetable. Rockets were in their infancy in the 1940’s but by 1957 they were used to travel to space, the first viable concept of atomic bomb was in 1938 and one was built within 7 years, and the advancements in computers has been lightening paced since the 80’s. We are speaking of something that can make one quintillion operations per second, if we were to lose control I hope someone knows where the power cord is to unplug the damn thing.
Right but after 1960 Rocket design has basically Stagnated. So has atomic weapons. SO we go in leaps and spurts and then spend a long time "stuck" at a particular level with only incremental improvements. That is my "prediction" is that we've had a big spurt over the last 10 years (Neural Networks are actually 50 years old), we're gonna peter out and go into incremental for a while.

Language AI (think Chat-GPT) had its burst in 2017-8. Vision had its around 2015. Look at AI-Vision--we're in the same place we were (with incremental improvements) almost 10 years now. But Chat-GPT is just several invremental improvements (along with hardware scaling) over the original idea.

Full disclosure for all: I am super BIASED against Any Science Story in the news. In my opinion, News reporting would screw up explaining water flowing downhill. Trying to deal with advanced topics... ugh. But that's me and my experience over many longs years of being a scientist (ok Boomer!) Also there are a lot of AI experts who don't know sqaut about AI. My boss knows the basic concepts, but cannot write a simple one herself. That's not to insult her, its an example of how there are different people are involved at different levels.
 
Last edited: