Posted: 09/25/2020

Matthew’s input 24 Sep

 

I’ve spent some time this week thinking about your question below. I’ve kicked a few different ideas around including:

 

-          NATO is concerned about a “terrorist” cell attempting to massively disrupt any of a number of critical strategic commercial ports by depositing an unknown number of bottom-laying mines throughout a harbor. Shipping traffic would be halted until the channel could be declared clear and safe. An autonomous underwater vehicle is desired to scout, identify, mark, and later destroy any mines found.

-          NATO is concerned about an aggressive foreign navy attempting to buy time for an offensive elsewhere in Europe by dispersing mines outside a harbor during a NATO surface group port visit. The NATO ships would not be able to leave the harbor and assist in combating the aggressive activities of the foreign forces until a channel could be cleared and declared safe for transit. An autonomous underwater vehicle is desired that can be quickly programmed for the area and task, then rapidly deployed to scout, identify, mark, and destroy all mines necessary to create a navigable channel.

 

Or along a different theme:

 

-          NATO intends to expand the ASW capabilities of its surface forces by introducing various operational autonomous vehicles. These include environmental collection vehicles, passive acoustic detection and reporting vehicles, and vehicles capable of identifying, targeting, and engaging identified hostile submarines. This fleet of autonomous vehicles should be able to communicate and coordinate with each other, as well as reliably report and receive updated orders through a C2 system (a rough sketch of such a reporting and tasking exchange follows this scenario).
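As a rough illustration of the kind of reporting and tasking exchange this scenario implies, here is a minimal sketch of message structures a vehicle and a C2 system might pass back and forth. The field names, roles, and values are assumptions made up for this note, not an existing NATO message standard.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
import time

class VehicleRole(Enum):
    ENVIRONMENTAL = "environmental_collection"
    PASSIVE_ACOUSTIC = "passive_acoustic_detection"
    ENGAGEMENT = "identify_target_engage"

@dataclass
class StatusReport:
    """Periodic report a vehicle sends up to the C2 system (illustrative fields only)."""
    vehicle_id: str
    role: VehicleRole
    position: tuple                 # (lat, lon, depth_m), estimated
    battery_pct: float
    contacts: list = field(default_factory=list)    # detections since last report
    timestamp: float = field(default_factory=time.time)

@dataclass
class TaskingOrder:
    """Updated order pushed back down from C2 (illustrative fields only)."""
    vehicle_id: str
    waypoints: list                     # [(lat, lon, depth_m), ...]
    rules_of_engagement: str            # e.g. "report_only" vs. "engage_on_confirmation"
    expires_at: Optional[float] = None  # order lapses if not refreshed in time
```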

 

These are of course very standard types of scenarios, but I think that’s probably for the best, as it makes it easier to focus on the challenges of the problem, as discussed below. Each of these scenarios has a clear problem, identified goals, and clearly indicates where the autonomous vehicle fits in. I think any of these scenarios could then support having participants build towards the goals we discussed. I am not sure whether the formatting was off in the email I sent previously, so I am updating it here as well to make it clearer.

 

1)      The deployment of autonomous systems and platforms within our organizations is expected to exponentially increase in the near future. This comes with the challenge of rethinking many aspects of human-machine interfacing and interactions.

 

2)      We are looking for solutions that address and attempt to mitigate the gaps that operators and decision-makers are likely to encounter when deploying these technologies – to include human-machine interface and interaction concepts of operations, standards, systems, and training requirements.

 

3)      Potential solutions should attempt to:

 

a.       Reduce the risk of failures, while decreasing the likelihood of accidents and collateral damage

         i.      Reducing or mitigating risk during the transition of new, upgraded, or updated systems and platforms with integrated AI and machine learning (ML) capabilities – especially with regard to surfacing the gaps between the training/testing environments and the expected “real world” operating environment.

b.      Improve the likelihood of success by enabling a better informed decision-making process

         i.      Improving the decision-making process for “in-the-loop” operators and decision-makers by better capturing and conveying the expected performance of AI and ML systems and platforms, given the known training/testing results adjusted to account for unknowns, uncertainties, and out-of-scope aspects of the “real world” operating environment (a toy sketch of such an adjustment follows this outline).
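One way item 3.b.i could be made concrete is sketched below: discount the system’s measured test performance by how far the observed operating conditions fall outside the conditions it was trained and tested under. This is a minimal illustration only; the feature choices, the two-standard-deviation coverage check, and the fallback score are all assumptions, not a validated method.

```python
import numpy as np

def coverage_fraction(train_features: np.ndarray, observed_features: np.ndarray,
                      tolerance: float = 2.0) -> float:
    """Fraction of observed operating conditions that lie within `tolerance`
    standard deviations of the training/testing data (a crude in-scope check)."""
    mu = train_features.mean(axis=0)
    sigma = train_features.std(axis=0) + 1e-9
    z = np.abs((observed_features - mu) / sigma)
    return float(np.mean(np.all(z <= tolerance, axis=1)))

def adjusted_expected_performance(test_accuracy: float, coverage: float,
                                  out_of_scope_floor: float = 0.5) -> float:
    """Convey an expected performance figure to the decision-maker: keep the
    measured test accuracy where conditions are in scope, and fall back to a
    conservative floor for the out-of-scope portion."""
    return coverage * test_accuracy + (1.0 - coverage) * out_of_scope_floor

# Illustrative use only: columns might be water temperature, salinity, and ambient noise.
train = np.random.normal([12.0, 35.0, 60.0], [1.0, 0.5, 5.0], size=(500, 3))
observed = np.random.normal([18.0, 30.0, 90.0], [1.0, 0.5, 5.0], size=(50, 3))
cov = coverage_fraction(train, observed)
print(f"in-scope coverage: {cov:.2f}, adjusted expected accuracy: "
      f"{adjusted_expected_performance(0.92, cov):.2f}")
```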

 

 

Regarding “autonomous pain points” at MARCOM… Honestly, I don’t think I have encountered any discussions or mentions of autonomous vehicles while working here (granted only a few months). I would guess that it is not considered much, if at all -- and when it is, it is likely only considered as a far-future issue that does not factor into any near-term plans or considerations. If it does at all, it would only be as something included in an exercise as an “experiment” – far, far from operational.

 

Warm Regards,

Matthew MCKENZIE

Commander (OF-4), US Navy

 

 

Jerry’s comments

A few of us played with ideas last night at EntreBrew. Here are some observations and thoughts, followed by a set of ideas for a scenario.

 

Observation / Thoughts

  • The actual "human-machine interface" and "understanding of strengths and weaknesses" are not really challenge topics. In practical terms, each would be driven by the design and implementation of a particular technology. The function of the interface would come during design; the understanding of what the system does, its limitations and risks, and how to use it would be specific to the training developed for the technology.
  • We concentrated on undersea use cases - largely because Wes Biggers who was in the group is a former USN submariner.
  • We thought the issues associated with underwater unmanned systems operating in swarms would provide a good set of issues to target.
  • Three areas of concern for these swarms to deliver value and accomplish a mission would be:
    • Knowledge of position (Individual and group)
    • Coordination of action
    • Execution of mission
  • The coordination of action and execution of mission can be done autonomously or under human control.
  • There are issues, however, with the ability of humans to communicate with the devices in the subsea environment, and with the limitations of AI/ML when operating and executing in a fluid environment where the AI/ML has not had enough learning to be effective. For this reason, some hybrid autonomous/human approach is called for (a minimal sketch of such an approach follows this list).
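A minimal sketch of the hybrid approach mentioned above, assuming a simple rule: the vehicle follows the most recent human order while that order is still fresh, and falls back to a conservative, pre-approved autonomous behavior once the acoustic link has been silent longer than a configurable timeout. The timings, field names, and fallback behavior are illustrative assumptions only.

```python
class HybridController:
    """Follow human orders when the slow, intermittent link is fresh;
    otherwise fall back to a pre-approved autonomous behavior."""

    def __init__(self, order_timeout_s: float = 120.0):
        self.order_timeout_s = order_timeout_s   # acoustic comms can take tens of seconds
        self.last_order = None
        self.last_order_time = 0.0

    def receive_order(self, order: dict, sent_time_s: float) -> None:
        # Orders may arrive long after they were sent; keep the *send* time so a
        # stale command cannot override a newer one.
        if sent_time_s > self.last_order_time:
            self.last_order, self.last_order_time = order, sent_time_s

    def next_action(self, now_s: float) -> dict:
        if self.last_order and (now_s - self.last_order_time) < self.order_timeout_s:
            return self.last_order                  # human-in-the-loop mode
        return {"behavior": "loiter_and_listen"}    # conservative autonomous fallback
```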

Here are some thoughts for a challenge:

 

Possible Military Scenario:

  • The security of harbors during wartime, and even during peace, is of significant importance to alliance naval forces. A future approach to establishing security is the concept of a fleet of unmanned semiautonomous vehicles (air, surface, and subsurface) acting as a group to provide a near-real-time picture of events in the harbor, identify possible threats, and take action against those threats. This "harbor picket" may be established in existing military harbors - such as Norfolk or Toulon - or may be deployed to secure new locations where the alliance is establishing a logistics beachhead.

Possible Commercial Scenario

  • Commercial harbors have billions of dollars of shipping passing through them. The economic impact of a harbor shutting down for even an hour can run to multiple millions of dollars. Harbor shutdowns occur when adverse events such as hurricanes or other disasters strike, or when material enters the waterway that threatens the safety of ships. Reopening a harbor requires an assessment of such diverse things as buoy positions and shoaling. Preventing the harbor from closing in the first place may be aided by identifying problems quickly and mitigating them before they have an impact.

This challenge is looking to address some of the technical issues associated with the deployment of undersea drones. Topics of interest include:

  • Enabling undersea drones and swarms to determine their exact location (a minimal dead-reckoning sketch follows this list)
  • Communication between UUVs in the subsurface environment
  • Communication between UUV swarms and human operators on the surface
  • Coordination of action between UUVs to avoid collisions and accomplish missions
  • A mechanism to allow hybrid control of UUVs - accounting for the lag times between human operators receiving information, transmitting decisions to the swarm, and the swarm acting - to mitigate navigation risks and perform missions.
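Since GPS is unavailable below the surface, a common baseline for the positioning topic above is dead reckoning from speed and heading, periodically corrected whenever the vehicle surfaces or receives an external fix (for example from an acoustic baseline). The sketch below only illustrates that idea; the sensor inputs and drift rate are assumptions.

```python
import math

class DeadReckoner:
    """Propagate an (x, y) position estimate from speed and heading, and reset
    the accumulated drift whenever an external position fix arrives."""

    def __init__(self, x_m: float = 0.0, y_m: float = 0.0):
        self.x_m, self.y_m = x_m, y_m
        self.drift_m = 0.0                       # uncertainty grows between fixes

    def propagate(self, speed_mps: float, heading_rad: float, dt_s: float,
                  drift_rate_mps: float = 0.05) -> None:
        self.x_m += speed_mps * math.cos(heading_rad) * dt_s
        self.y_m += speed_mps * math.sin(heading_rad) * dt_s
        self.drift_m += drift_rate_mps * dt_s    # assumed drift accumulation rate

    def apply_fix(self, x_fix_m: float, y_fix_m: float) -> None:
        self.x_m, self.y_m, self.drift_m = x_fix_m, y_fix_m, 0.0
```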

Matthew’s proposal

The deployment of autonomous systems and platforms within our organizations is expected to exponentially increase in the near future. This comes with the challenge of rethinking many aspects of human-machine interfacing and interactions.

 

We are looking for solutions that address and attempt to mitigate the gaps that operators and decision-makers are likely to encounter when deploying these technologies – to include human-machine interface and interaction concepts of operations, standards, systems, and training requirements.

 

Potential solutions should attempt to:

-         Reduce the risk of failures, while decreasing the likelihood of accidents and collateral damage

o    Reducing or mitigating risk during the transition of new, upgraded, or updated systems and platforms with integrated AI and machine learning (ML) capabilities – especially with regard to surfacing the gaps between the training/testing environments and the expected “real world” operating environment.

-         Improve the likelihood of success by enabling a better informed decision-making process

o    Improving the decision-making process for “in-the-loop” operators and decision-makers by better capturing and conveying the expected performance of AI and ML systems and platforms, given the known training/testing results adjusted to account for unknowns, uncertainties, and out-of-scope aspects of the “real world” operating environment.

 

I understand the two above can seem very similar, but I hope after our conversation it is clearer how they are distinct and equally important. I believe they both play a key role in “trust,” as there are often two types of mindsets – the risk-averse and the success-minded. I think a healthy leader/decision-maker incorporates both.

 

Warm Regards,

Matthew MCKENZIE

Commander (OF-4), US Navy

 

 

 

Cecile’s idea

 

Autonomy coupled with Artificial Intelligence is rapidly being integrated into our everyday lives. Their integration into Naval Warfare is more and more a reality. To transform this revolution into a success story, operators need to trust the machine. It is a challenging topic, in particular in areas like ASW and MCM.

What do you think about an innovation competition focused on the best and most sustainable ideas for building the right ingredients for trust?

 

Building Trust in a High-Risk Environment

Unlike many other types of technology development - ones that can be neatly encapsulated and that follow a more traditional pathway of research, testing, fielding, and sustainment - AI-related systems and platforms, by their very nature, exist in a constant state of development. AI technology development can be thought of as spiral-like in nature, constantly updated and optimized as performance and observational data is collected and fed back into the system. This enduring feedback loop demands expert interaction for maximum success - to include a constant awareness of current operational capabilities, understanding of the latest system refinements, and knowledge of relevant and vital observation data.
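A minimal sketch of one turn of that spiral, assuming a simple safeguard: newly collected operational observations are folded back into training, but the refreshed model only replaces the fielded one if it still clears a minimum score on a held-back evaluation set. The function names and threshold are placeholders, not an existing pipeline.

```python
def spiral_update(fielded_model, new_observations, holdout_set,
                  train_fn, evaluate_fn, min_score: float = 0.90):
    """One turn of the spiral: retrain on newly collected operational data,
    then promote the candidate only if it passes the evaluation gate."""
    candidate = train_fn(fielded_model, new_observations)
    score = evaluate_fn(candidate, holdout_set)
    if score >= min_score:
        return candidate, score    # promote the refreshed model
    return fielded_model, score    # keep the current model and flag for review
```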

 

Additionally, the DoD’s AI strategy, which includes plans to rapidly iterate and deploy small-scale mission support and operational AI capabilities throughout the force, adds another critical dimension regarding in-stride VVT&E.[1] The level of complexity inherent to AI systems translates to systems that are extraordinarily difficult to test and evaluate in a lab or closed environment. Moreover, to deliver their full benefit, AI systems must operate in the spiral R&D format described above. In this manner, collected observations are fed back into the system for retraining, providing the necessary “learning” aspect inherent to AI and machine learning functionality.[2] This is a natural concept for AI professionals to understand, as it is a core AI tenet, but for DoD leaders and warfighters responsible for the outcomes of operational AI platforms and systems, accepting that VVT&E is going to occur during all aspects of operations will demand a strong change in mindset with regard to how risk is assessed and managed.

 

Considering the generally risk-averse nature of the DoD, this could prove to be a cultural barrier that is impossible to overcome without intervention. A data professional, or team of data professionals, could serve as the key link in these scenarios between the uncertainty in the outcome of operational AI systems and the minimum assurance commanders and leaders need when making decisions. The NDRI affirms this consideration, noting that multiple case studies found the only way to overcome this type of systematic distrust in AI technologies was to have an AI technologist embedded with the user and integrated into the team itself.[3] It is hard to envision commanders taking these types of risks without local, trusted elements to offset the unknowns inherent to AI systems.

 

I have a hard time envisioning much of the needed change to fully integrate AI, but I’m glad that there are parts of NATO that are recognizing the need to discuss and bring these issues to light.

 

Please let me know if I can assist or help with this – or any other! – innovation challenge or program. I’m in full support of the need for NATO to start tackling this problem.

 

Additionally, if you would like to discuss additional ideas, or refinements, of similar topics for challenges, programs, or discussions, I have a number of related topics and problem sets that might be of interest.

 

Warm Regards,

Matthew MCKENZIE

Commander (OF-4), US Navy