In a multi-domain operational context, how could AI and autonomy help:
- synchronise effects requests on the battlefield?
- prioritize/select/recommend the best courses of action to deliver on target on time?
- ensure the requested effect is actually delivered to the proper target, respecting all constraints (target identification, ROE, etc.)?
Since this is to be turned into a non-specific, non-military problem for the challenge, I see a parallel with mail or goods delivery (think of how Amazon plans to do this, for instance: advertising, on-line ordering, selection from stock, prioritizing of deliveries, and delivery with clear identification, using AI, autonomous systems, maybe blockchain, and cyber security).
There could be, for instance, a scenario in an urban environment and a scenario in the open countryside.
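As a minimal sketch of the civilian delivery parallel above, the following orders requests by urgency and deadline. The scoring rule (priority multiplied by hours remaining) and the greedy heap-based ordering are illustrative assumptions; real course-of-action selection would add vehicle routing, stock selection, and constraint checking.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Delivery:
    # Lower score = served first; score blends priority and deadline.
    score: float
    item: str = field(compare=False)
    deadline_h: float = field(compare=False)  # hours until required delivery
    priority: int = field(compare=False)      # 1 = urgent, 3 = routine

def schedule(requests):
    """Greedily order delivery requests so that urgent items with tight
    deadlines are popped first. A stand-in for richer course-of-action
    selection (routing, stock selection, constraint checks)."""
    heap = [Delivery(r["priority"] * r["deadline_h"], r["item"],
                     r["deadline_h"], r["priority"]) for r in requests]
    heapq.heapify(heap)
    return [heapq.heappop(heap).item for _ in range(len(heap))]

orders = [
    {"item": "medicine", "priority": 1, "deadline_h": 2.0},
    {"item": "books",    "priority": 3, "deadline_h": 24.0},
    {"item": "food",     "priority": 2, "deadline_h": 4.0},
]
print(schedule(orders))  # -> ['medicine', 'food', 'books']
```

The same skeleton could carry the urban vs. countryside scenarios by swapping in different scoring rules (e.g. travel-time penalties in dense traffic).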
Human-Machine Teaming/Cyber:
* Use AI to detect a cyber-attack (such as a DDoS), take initial steps to mitigate the attack, and alert security personnel for final review and disposition.
* Use AI to detect and identify opposing force messaging strategies such as making up events, embellishing actual events, portraying NATO forces as dangerous/evil, or sowing discontent in the population by attacking the government or emphasizing discord between ethnic factions.
* Automated detection of changes in the disinformation/influence strategy of opposing forces: for instance, a shift from making up events that never happened to embellishing actual events, or a change in the messaging strategy for a target audience, such as moving from saying that NATO is dangerous to emphasizing ethnic differences/inequalities.
* Explainable recommendations, or adapting to human cognitive functioning (e.g. adjusting the level of augmentation based on whether an operator is overwhelmed, tired, or needs to be more engaged), or assisting an operator by recognizing potential common cognitive errors.
The other option would be to demonstrate the ability of a machine (computer, etc.) to detect disengagement, or to develop a strategy that helps human operators overcome common cognitive biases, although I think it would still have to be in the context of a specific task.
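The DDoS-detection bullet above could be sketched as a simple rolling-statistics anomaly detector. The threshold rule (mean plus k standard deviations over a sliding window of request rates) is an illustrative assumption, not a production intrusion-detection method:

```python
import statistics

def detect_ddos(rates, window=10, k=3.0):
    """Flag time steps whose request rate exceeds the rolling baseline
    mean by more than k standard deviations. Returned indices are the
    alerts that would trigger automated mitigation and a hand-off to
    security personnel for review."""
    alerts = []
    for i in range(window, len(rates)):
        base = rates[i - window:i]
        mu = statistics.fmean(base)
        sigma = statistics.pstdev(base) or 1.0  # avoid a zero threshold
        if rates[i] > mu + k * sigma:
            alerts.append(i)
    return alerts

traffic = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98, 5000, 101]
print(detect_ddos(traffic))  # -> [10]
```

The same windowed-baseline idea is one plausible shape for the strategy-change detection bullets as well, with message-classification counts in place of request rates.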
AI in Education, Training, Evaluation and Exercise (ETEE):
* What do nations consider to be their motivators for introducing AI into operations and into ETEE (as a means of improving ETEE)?
* What do nations believe is required to integrate and to employ AI in ETEE in terms of human and other resource requirements?
* Have nations examined competencies required for the integration and employment of AI in operations? If yes, what are those competencies (in terms of skills, knowledge, and attitude; in terms of workforce capacities)?
* Have nations examined other resource requirements to integrate and employ AI in ETEE? If yes, what are those requirements?
* What do nations consider to be their most important challenge to the integration of AI in operations and in ETEE?
* What do nations expect to achieve by employing AI in ETEE (such as improved operational effectiveness, or shortened times to reach readiness levels)?
AI/Autonomy in support of Urban Operations:
* AI and human-machine interfaces supporting commander coordination functions
* Persistent Autonomous ISR
* Unmanned aerial systems for vertical manoeuvre (persistent autonomous sustainment)
* Means to counter adversary use of drones, swarm weapons, loitering munitions, etc.
* In an urban environment, in what ways could autonomous vehicles be used to protect the area and maintain stability in the region?
Counter Autonomous Systems:
* Drones/autonomous aerial vehicles are becoming more and more prevalent. What are the methods to detect and stop encroachment, overflight, and spying by single drones or by large, intelligent swarms?
o Drones have been used to fly supplies into prisons, they have disrupted flight operations around airports, and they have been used for industrial espionage.
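One way to make the swarm question above concrete: given 2-D sensor tracks, group nearby contacts and treat a large cluster as a candidate swarm rather than a lone drone. The distance threshold (`eps`, here 50 m) and the single-linkage grouping are illustrative assumptions, a sketch rather than a real counter-UAS pipeline:

```python
def cluster_tracks(points, eps=50.0):
    """Group 2-D sensor tracks (metres) into clusters of mutually nearby
    contacts by incremental single-linkage grouping. Many tracks in one
    cluster suggests a coordinated swarm rather than a single drone."""
    clusters = []
    for p in points:
        merged = None
        for c in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2
                   for q in c):
                if merged is None:
                    c.append(p)       # join the first matching cluster
                    merged = c
                else:
                    merged.extend(c)  # p links two clusters: merge them
                    c.clear()
        clusters = [c for c in clusters if c]
        if merged is None:
            clusters.append([p])      # p starts a new cluster
    return clusters

tracks = [(0, 0), (10, 5), (20, 10), (500, 500), (15, 8)]
print([len(g) for g in cluster_tracks(tracks)])  # -> [4, 1]
```

Four tightly spaced contacts form one cluster (a possible swarm); the distant fifth track stays on its own.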
Since this is to be turned into a non-specific, non-military problem for the challenge, I suggest that we set aside the ROEs (which cover currently unsolved issues such as the possible use of force against enemy autonomous systems, the use of force via our own potential autonomous systems, the integration of such autonomous systems into a multinational coalition, etc.).
But the issue of civil and criminal liability (who is accountable for wrongful actions perpetrated by an autonomous system; the scope of this question is indeed very wide) could be addressed during this challenge, giving us a better, hands-on understanding of how these topics are dealt with in the civilian sector.
Just to highlight one point: the answers on criminal liability (or accountability, as it is sometimes called in articles) are of direct interest to a military commander, for instance those informing the review of an asset before it is declared "good for service".