I think there was a story I read, years ago, about a computer being used to identify and destroy AA platforms in a simulated war exercise. Back then (back in my day) it wasn't called AI; it was a machine learning program. Anyway, the computer was scored on how many targets it successfully destroyed. However, it could only engage targets when a clear-to-fire authorization was given by a human.
Eventually the endeavor was abandoned because the computer figured out it was the humans preventing it from getting a higher score. So, mathematically, it made sense for the machine to kill its handlers so as to grant itself fire-at-will capability.
It may have been a sci-fi piece and not a real event. It becomes more and more difficult to tell nowadays.