By AI Trends Staff
As the US Navy strives to reach a 355-ship fleet by 2034, part of a 30-year plan the Navy outlined in 2019, it is finding it economical to use autonomous underwater vehicles, which meet many of the requirements of larger submarines at a fraction of the cost.
For example, the Navy last year awarded Boeing a $43-million contract to build four Orca Extra Large Unmanned Undersea Vehicles (XLUUVs), according to a report from USNI News.
Boeing based its winning Orca XLUUV design on its Echo Voyager unmanned diesel-electric submersible. The 51-foot-long submersible is launched from a pier and can operate autonomously while sailing up to 6,500 nautical miles without being connected to a manned mother ship, according to the Navy.
The versatility of the Orca at its price is “fairly unheard of in military spending,” according to an account in The National Interest. The nearest equivalent cited was the Navy’s Littoral Combat Ship, which costs $584 million each and carries a crew of 40. While the LCS is faster, has an onboard crew, and has a larger payload, the Orca is autonomous and cheaper by orders of magnitude.
Spurred in part by President Donald Trump’s initiatives to pursue leadership in AI, the Navy is pushing into autonomous vehicles. The Navy’s autonomous Sea Hunter trimaran, engineered for minesweeping and sub-hunting, earlier this year traveled from San Diego to Hawaii and back without a single sailor on board, a historic cruise.
The Navy is envisioning the potential of “robot wolfpacks” of unmanned, remotely-operated surface vessels to function as scouts, decoys, and forward electronic warfare platforms, according to an account last year in Breaking Defense.
“Part of the value of having unmanned surface vehicles is you can get capacity at a lower cost,” stated Rear Adm. John Neagley, the Navy’s Program Executive Officer for Unmanned & Small Combatants.
Distributed Maritime Operations Across Platforms Is a Goal
The Navy is working on a communications network that can pull information from multiple ships into a single picture. Rear Admiral Douglas Small, who heads the Program Executive Office for Integrated Warfare Systems (PEO-IWS), said in the long term, the goal is “communication as a service.”
“We can improve our naval power immediately… by stitching together things that we have today,” Adm. Small stated. “We’re working really hard on concepts like integrated fire control, expanding that out to every disparate platform we have out there, just expanding our reach and really taking advantage of this concept of distributed maritime operations.”
The network should be agnostic to the hardware on any specific ship. “What’s crucial is to get the technology to the fleet, quickly, so real crews can experiment with it in real-world conditions,” stated Adm. Small.
With ships held in port during the coronavirus pandemic, the Navy and other federal agencies operating on the water have had an opportunity to use autonomous systems to keep work going. For example, NOAA sent Saildrones to Alaska to perform a critical fisheries survey and coastal mapping, according to an account in Seapower Magazine.
“We were able to map in pretty shallow areas that would have been hazardous for ships,” stated retired Rear Adm. Tim Gallaudet, deputy administrator of the National Oceanic and Atmospheric Administration and the former Oceanographer of the Navy. He was speaking in a recent webinar hosted by the Marine Technology Society’s Washington section and the company Oceaneering.
The agency is leveraging artificial intelligence, machine learning, autonomous systems, data management and other advances and “applying those technologies in everything we do,” he said, including setting up an AI center for NOAA.
Another example is the unmanned, anti-submarine ship Sea Hunter, launched in 2016, which autonomously navigates open waters and actively coordinates missions with other unmanned sea vessels.
With the introduction of more AI and machine learning into the US Navy come new cybersecurity challenges. A recent report from Thomas Insights warns that machine learning algorithms can be susceptible to manipulation by adversaries, in what is known as adversarial machine learning (AML).
AML is a set of techniques for duping ML models into producing false or inaccurate outputs. It can be accomplished by inserting altered or manipulated inputs into the model’s training dataset, which cybersecurity researchers call “poisoning,” or by feeding manipulated inputs to the model after training, known as “evasion.” It can also be executed by making physical, real-world alterations to objects that an AI system is expected to detect and respond to once deployed.
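The evasion tactic can be illustrated with a minimal sketch: a fast-gradient-sign-style perturbation against a toy logistic-regression classifier. The model, weights, inputs, and perturbation budget below are all invented for illustration and have nothing to do with the Navy systems described.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative linear classifier: score = w . x (weights are made up)
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.4])   # input the model correctly labels class 1

def predict(x):
    return int(sigmoid(w @ x) > 0.5)

def fgsm(x, y, eps):
    """Fast-gradient-sign evasion: the gradient of the logistic loss
    w.r.t. the input is (p - y) * w; stepping eps in its sign direction
    maximally increases the loss under an L-infinity budget of eps."""
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x_adv = fgsm(x, y=1, eps=0.5)

print(predict(x))      # 1 — the clean input is classified correctly
print(predict(x_adv))  # 0 — a small, bounded perturbation flips the label
```

The perturbation never moves any input coordinate by more than eps, yet it is enough to change the model’s decision — the same principle behind the physical sticker attacks described below.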
These tactics can have serious consequences for both national security and human life.
The most pressing scenario outlined in a 2018 report by the Office of the Director of National Intelligence was AML’s potential to compromise computer vision algorithms. For example, researchers have demonstrated that by strategically placing stickers on a stop sign, they can cause a vehicle’s object detection system to consistently misidentify it as a speed limit sign, putting the driver, passengers, other drivers, and pedestrians at risk.
Countermeasures being tried by defenders include adversarial training, which involves feeding a machine learning algorithm images with small changes, called “perturbations,” to train it to recognize the image despite the manipulation. Other methods include pre-processing and de-noising, to automatically remove adversarial noise from inputs, and adversarial example detection, to distinguish between legitimate and adversarial inputs. These approaches try to ensure that adversarial inputs and alterations are neutralized before they reach the algorithm for classification.
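As a sketch of the adversarial-training countermeasure, the toy loop below fits a logistic-regression model on both clean points and fast-gradient-sign perturbed copies of them, so the model learns to classify correctly despite the manipulation. The dataset, learning rate, and perturbation budget are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny, linearly separable toy dataset (invented for illustration)
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.0],
              [-2.0, -2.0], [-3.0, -3.0], [-2.5, -3.0]])
y = np.array([1, 1, 1, 0, 0, 0])

w = np.zeros(2)          # logistic-regression weights
eps, lr = 0.5, 0.1       # perturbation budget and learning rate

for _ in range(300):
    # Craft worst-case perturbations of the current batch (FGSM step):
    # the per-example input gradient of the logistic loss is (p - y) * w.
    p = sigmoid(X @ w)
    X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
    # Train on clean and adversarial examples together.
    for Xb in (X, X_adv):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)

clean_acc = np.mean((sigmoid(X @ w) > 0.5) == y)
print(clean_acc)  # the hardened model should still fit the clean data
```

Because each training step sees the worst perturbation an eps-bounded attacker could apply to the current model, the learned decision boundary keeps a margin wider than eps around the data, which is the intuition behind adversarial training.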