
AI in Movies Fuels Fears, But Military AI Is the Real Threat

In the deep sea, a Russian submarine detects what appears to be an attack by an unknown enemy vessel, and the captain orders a torpedo launched in response. Moments later, the enemy submarine suddenly vanishes from the sonar. The crew assume the equipment has malfunctioned and breathe a sigh of relief. Then the torpedo they have just fired turns around on its own and blows the submarine to pieces.

This is the opening scene of the new Hollywood movie “Mission: Impossible 7”. Unlike past installments, the villain that blows up the submarine is a super AI called “the Entity”.

Only a metal key can shut the Entity down. To secure the key, the protagonist Ethan Hunt, played by Tom Cruise, wages a desperate struggle against the Entity and its human henchmen.

Off-screen, Hollywood actors and screenwriters launched a massive strike this month, driven in part by the threat generative AI poses to the film and television industry. Actors worry that AI-generated digital doubles will one day replace them, and screenwriters reject a future that may already be arriving: AI writes the first draft of a script, and they are brought in merely to polish and revise it.

Tom Cruise has also publicly backed Hollywood’s resistance to AI, urging producers not to ignore actors’ concerns about the technology.

The advance of AI technology is an unavoidable topic of this era. Seen from another angle, the confrontation between humans and AI in “Mission: Impossible 7” reads like a metaphor for Hollywood’s own struggle against artificial intelligence.

Public Fear Exaggerates AI’s Capabilities
“Mission: Impossible 7” director Christopher McQuarrie believes the current zeitgeist is anxiety over technology and how it reshapes people’s lives. AI, he said, is much like the nuclear bomb during the Cold War: it needs no explanation, because everyone feels it.

The story deftly taps into the emotions of the moment: fear and unease about ever-advancing AI. In reality, however, AI is still far from that level of intelligence.

In the movie, the Entity may be the most terrifying enemy the hero has ever faced: it has evolved self-awareness and can predict humanity’s every move, nearly cornering Ethan and his team.

Another frightening element is that humans have no way of knowing what the Entity wants; its motivations remain a mystery. Perhaps the sequel will answer that question. But in reality, AI itself is a huge black box, and no one can confidently explain what is going on inside it.

Ever since ChatGPT took off around the world, some people have firmly believed that generative AI poses a threat to human survival. Fear feeds on the unknown, but the truth is that generative AI is still in its early days, and its capabilities are overhyped.

Could artificial intelligence gain self-awareness, or even desire and greed? Sandra Wachter, a professor at the University of Oxford, thinks not. In an interview with the British newspaper Evening Standard, she said the public confuses generative AI with AGI (artificial general intelligence): “There is no evidence that we are developing towards AGI, and it is hard to say whether that road even exists.”

In the film, the Entity can delete portions of surveillance footage at will, and even hack into AR glasses to impersonate a human voice. According to the Evening Standard, Laura Kankaala, an expert at the IT security company F-Secure, said that erasing surveillance footage is theoretically feasible, but deleting it in real time would demand enormous computing resources, beyond what is currently possible.

As for the submarine incident at the start of the film, Toby Lewis, an expert at the cybersecurity company Darktrace, said an AI might gain entry through a submarine’s communication channels while the vessel is docked in port or surfaced.

Another expert, James, disagreed, telling the Evening Standard that communication channels between military submarines and headquarters are severely restricted in order to conceal the vessel’s location. An AI slipping in through those channels, attacking military computer systems, and corrupting a submarine’s sonar is unlikely to happen. “But it’s a great movie storyline.”

Still, fears of AI running out of control have fueled growing calls for regulation and restriction. Back in March this year, Elon Musk and a host of Silicon Valley figures and scientists signed an open letter calling for a six-month pause on the development of the most powerful generative AI systems. The letter has gathered more than 33,000 signatures.

Their concerns are justified; after all, technology can advance faster than ordinary people imagine. Some once-futuristic movie gadgets have become reality, such as the fingerprint recognition protagonists routinely rely on, drones, and AR glasses.

The Weaponization of AI Is Inevitable
A runaway Entity is a needless disaster, yet in the entire movie only Cruise’s character wants to destroy it outright. The screenwriters put the truth in another character’s mouth: the goal is not to destroy it, but to control it.

To be fair, AI threatening humanity remains a distant prospect; humans using AI against one another is the more immediate and growing threat. In other words, artificial intelligence has inevitably entered the realm of national defense, and has already been used in modern warfare.

AI technology promises warfare with fewer casualties. Drones equipped with AI can navigate complex environments, conduct reconnaissance with tremendous precision, and launch pinpoint strikes on targets; such “decapitation strikes” are also a powerful deterrent. The U.S. Department of Defense is reportedly developing AI agents that can fly fighter jets. “In the future, there will be far more drones than soldiers in the armed forces,” said military consultant Douglas Shaw.

AI systems will also change how wars are decided. Intelligence analysis by AI shrinks the decision window from days or hours to minutes. Herbert Lin, a professor at Stanford University, told The New York Times that because AI computes far faster than humans, leaders may come to over-rely on its intelligence. And what if the AI provides false information, or even makes the wrong call?

This is not without precedent. During the Iraq War, a U.S. missile system shot down friendly fighter jets, incidents later attributed to the system operating autonomously.

On Middle East battlefields in 2020, drones known as “loitering munitions” patrolled autonomously without human remote control. These drones carry built-in explosives, automatically identify ground targets, and then dive in to destroy them. AI-automated killing is already a reality. If an AI were to mistake civilians for terrorists, or a hospital for a military base, the consequences would be disastrous.

To avoid such tragedies, the U.S. Department of Defense stipulated in a directive on AI weapon systems this January that their development and deployment must at minimum involve appropriate human judgment.
Plenty of people oppose AI weapons. Henry Kissinger, who recently turned 100, was born in an era when warfare still relied on horses; now the former U.S. Secretary of State is contemplating AI warfare. He has called on the international community to suspend the development of militarized AI: “It is time for new international laws to regulate these technologies.”

However, the chances of a pause are slim; amid today’s tense international situation, an artificial intelligence arms race cannot simply be called off.

Pentagon official John Sherman said publicly: “If we stop, potential adversaries overseas will not stop. We must keep moving.”

Director James Cameron shares this concern. In a media interview last week, he warned: “I think the weaponization of artificial intelligence is the biggest danger. I think we will get into an AI race similar to the nuclear arms race. If we don’t develop it, someone else certainly will, and then it will escalate.”

For now, what countries can do is call for restrictive measures: the “responsible deployment and use of artificial intelligence”; strict “human control and involvement” in all matters concerning nuclear weapons, never delegating launch authority to computers; rigorous human oversight, testing, and verification at every stage of military AI; and AI systems with clearly defined uses from the outset.

On July 18, the UN Security Council held its first-ever meeting on the threat AI poses to international peace and stability. Secretary-General António Guterres said the United Nations must reach a legally binding agreement by 2026 banning the use of artificial intelligence in autonomous weapons.

Truth be told, no one really knows what newer and more powerful AI technologies will be capable of in the military sphere, and the risks grow by the day. The only consolation is that the major powers may proceed cautiously in exploring AI’s military applications; after all, the stakes are no joke.
