{{Orphan|date=June 2025}}
 
'''Course of Action Display and Evaluation Tool''' (CADET) was a research program, and the eponymous prototype software system, that applied knowledge-based techniques of [[artificial intelligence]] to the problem of battle planning. CADET was also known as Course of Action Display and Elaboration Tool.<ref name="Rasch-2003-Incorporating">Rasch, Robert, Alexander Kott, and Kenneth D. Forbus. "Incorporating AI into military decision making: an experiment." IEEE Intelligent Systems 18.4 (2003) pp. 18-26.</ref>
The development of Course of Action Display and Evaluation Tool (CADET) began in 1996 at the Carnegie Group, Inc.,<ref>Phillips, Eve Marie. If it works, it's not AI: a commercial look at artificial intelligence startups. Dissertation, Massachusetts Institute of Technology, 1999.</ref> Pittsburgh, PA, funded under the [[Small Business Innovation Research]] (SBIR) program. The goal of the Phase I SBIR project was to produce “...a live storyboard of [Course of Action] COA development, wargaming, animation, and assessment.”<ref name="Ground-2002-Knowledge">Ground, Larry, Alexander Kott, and Ray Budd. A knowledge-based tool for planning of military operations: The coalition perspective. Technical Report, BBN Technologies, Pittsburgh, PA, 2002. Online at https://apps.dtic.mil/sti/pdfs/ADA402533.pdf</ref>
 
In 1997, the United States Army awarded the Carnegie Group, Inc. $750K for SBIR Phase II. The intent was to develop “...a war-gaming modeling and analysis Decision Support System (DSS), … CADET will consist of a combination of Knowledge-Based and decision analytic tools and technologies to provide fast nimble COA war-gaming modeling, simulation, and animation under direct control of the commander and staff. ...Phase II will result in an operations prototype (OP) suitable for use and evaluation in field exercises. A fully functional COA analyzer/wargaming DSS for the commander and staff would be developed in Phase III.”<ref>{{cite web | url=https://www.sbir.gov/awards/28447 | title=Award &#124; SBIR }}</ref>
 
In 2000, CADET was integrated and experimentally evaluated within the framework of the Integrated Course of Action Critiquing and Elaboration System (ICCES) experiment, conducted by the Battle Command Battle Laboratory – Leavenworth (BCBL-L) within the Concept Experimentation Program (CEP) sponsored by TRADOC.<ref name="Rasch-2002-AI">Rasch, Robert, Alexander Kott, and Kenneth D. Forbus. "AI on the battlefield: An experimental exploration." In AAAI/IAAI, pp. 906-912. 2002. Online at https://www.qrg.northwestern.edu/papers/Files/AI_in_MDMP_IAAI02.pdf</ref>
 
In 2000–2002, DARPA applied CADET in its Command Post of the Future ([[Command Post of the Future|CPoF]]) program as a tool to provide a maneuver course of action. Under the umbrella of the CPoF program, CADET was integrated with the FOX GA system to provide a detailed planner, coupled with COA generation capability. In the same period, Battle Command Battle Lab-Huachuca (BCBL-H) integrated CADET with All Source Analysis System-Light (ASAS-L) to provide a planner for intelligence assets and to wargame enemy COAs against friendly COAs.<ref name="Ground-2002-Knowledge" /><ref name="Ruda-2001-Distributed">Ruda, Harald, Janet Burge, Peter Aykroyd, Jeffrey Sander, Dennis Okon, and Greg L. Zacharias. "Distributed course-of-action planning using genetic algorithms, XML, and JMS." In Battlespace Digitization and Network-Centric Warfare, vol. 4396, pp. 260-269. SPIE, 2001.</ref>
 
From 1996 through 2002, work on CADET was performed by the Carnegie Group, Inc., and supported by funding from the US Army [[CECOM]] (CADET SBIR Phase I, CADET SBIR Phase II, and CADET Enhancements); DARPA (Command Post of the Future); and [[United States Army Training and Doctrine Command|TRADOC]] BCBL-H.<ref name="Kott-2002-Toward">Kott, Alexander, Larry Ground, Ray Budd, Lakshmi Rebbapragada, and John Langston. "Toward practical knowledge-based tools for battle planning and scheduling." In Proceedings of AAAI/IAAI, pp. 894-899. 2002.</ref>
Taking this input, CADET automatically performed the following tasks (not sequentially; a simplified sketch follows the list):<ref name="Ground-2000-CADET"/><ref name="Kott-2005-Building">Kott, Alexander, Ray Budd, Larry Ground, Lakshmi Rebbapragada, and John Langston. "Building a tool for battle planning: challenges, tradeoffs, and experimental findings." Applied Intelligence 23, no. 3 (2005): 165-189.</ref>
 
* Planning and scheduling the detailed tasks required to accomplish the specified COA
* Allocating tasks to the various units and assets constituting the brigade
* Assigning suitable locations and routes
* Estimating the battle losses (attrition) of friendly and enemy forces, and the consumption of resources (e.g., fuel and ammunition)
* Predicting enemy actions or reactions
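
The list above can be made concrete with a minimal, hypothetical sketch of hierarchical task expansion and greedy scheduling. Everything in it (the task names, durations, unit types, and the assumption that subtasks run strictly in sequence) is invented for illustration; this is not CADET's actual knowledge base or algorithm, which also handled temporal dependencies among tasks, attrition, resource consumption, and enemy reactions.

<syntaxhighlight lang="python">
# Illustrative sketch only; not CADET's actual code. The task names,
# durations, and unit types below are hypothetical.
from dataclasses import dataclass

# Hypothetical decomposition of one COA-level task into detailed subtasks:
# (subtask, duration in hours, kind of unit required)
DECOMPOSITION = {
    "seize objective": [
        ("move to assembly area", 2.0, "infantry"),
        ("breach obstacle", 1.0, "engineer"),
        ("assault objective", 3.0, "infantry"),
    ],
}

@dataclass
class Unit:
    name: str
    kind: str
    free_at: float = 0.0  # hour at which the unit next becomes available

def plan(coa_task, units):
    """Expand the COA-level task into subtasks and greedily assign each
    subtask to the earliest-available unit of the required kind, assuming
    (for simplicity) that the subtasks must execute in sequence."""
    schedule, ready = [], 0.0
    for subtask, duration, kind in DECOMPOSITION[coa_task]:
        unit = min((u for u in units if u.kind == kind),
                   key=lambda u: u.free_at)
        start = max(ready, unit.free_at)       # wait for predecessor and unit
        unit.free_at = ready = start + duration
        schedule.append((subtask, unit.name, start, start + duration))
    return schedule

units = [Unit("1st Platoon", "infantry"),
         Unit("2nd Platoon", "infantry"),
         Unit("Sapper Team", "engineer")]
for row in plan("seize objective", units):
    print("%-22s -> %-12s %4.1fh to %4.1fh" % row)
</syntaxhighlight>

Even this toy version reflects the division of labor described above: expansion of a COA-level task into detailed subtasks, followed by allocation of each subtask to an available unit on a timeline.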
 
== Evaluation ==
 
Two evaluation experiments are described in the literature. The first experiment, called ICCES, took three days. The subjects were eight Army officers (majors and lieutenant colonels) from combat arms branches, with 11 to 23 years of active service. Each officer was given four hours of training in operating CADET and related computer tools. The officers were divided into two groups and given a tactical scenario. One group (the control group) used the traditional, manual process; the other used the system called ICCES, the automated core of which was CADET. Each group produced three COA sketches and statements and one COA synchronization matrix. The experiment was then repeated with another scenario, but the control group became the automated group and vice versa. The users were generally satisfied with the quality of the planning products that ICCES generated. The group using ICCES made only a few changes to the automatically generated product, indicating that they agreed with the majority of the plan that ICCES produced.<ref name="Rasch-2003-Incorporating" />
 
The second experiment was reminiscent of a [[Turing test]]. It involved one user, nine judges (active-duty officers, mainly colonels and lieutenant colonels), and five scenarios obtained from several US Army exercises. For each scenario, the experimenters obtained synchronization matrices that had been produced in earlier exercises, typically by a team of four to five officers in three to four hours, for a total of approximately 16 person-hours per planning product. Using these scenarios and COAs, the user had CADET automatically generate detailed plans and express them as synchronization matrices. The user, a retired US Army officer, reviewed and slightly edited the matrices. The entire process took less than two minutes of computation by CADET and approximately 20 minutes of review and post-editing, a total of approximately 0.4 person-hours per product. The experimenters gave the resulting matrices the same visual style as those produced by humans. The judges, who did not know whether a planning product was produced by the traditional manual process or with computerized aids, were asked to grade the products. The average grades for manual products and CADET-generated products were statistically indistinguishable, even though the CADET-generated products required far less time to produce.<ref name="Kott-2005-Building" />
 
== Legacy ==