{{Orphan|date=June 2025}}
'''Course of Action Display and Evaluation Tool''' (CADET) was a research program, and the eponymous prototype software system, that applied knowledge-based techniques of Artificial Intelligence to the problem of battle planning. CADET was also known as Course of Action Display and Elaboration Tool.<ref name="Rasch-2003-Incorporating">Rasch, Robert, Alexander Kott, and Kenneth D. Forbus. "Incorporating AI into military decision making: an experiment." IEEE Intelligent Systems 18.4 (2003) pp. 18-26.</ref>
The development of Course of Action Display and Evaluation Tool (CADET) began in 1996, at the Carnegie Group, Inc.,<ref>Phillips, Eve Marie. If it works, it's not AI: a commercial look at artificial intelligence startups. Dissertation, Massachusetts Institute of Technology, 1999.</ref> Pittsburgh PA, funded under the [[Small Business Innovation Research]] (SBIR) program. The goal of the first phase SBIR project was to produce “...a live storyboard of [Course of Action] COA development, wargaming, animation, and assessment.”<ref name="Ground-2002-Knowledge">Ground, Larry, Alexander Kott, and Ray Budd. A knowledge-based tool for planning of military operations: The coalition perspective. Technical Report, BBN Technologies, Pittsburgh PA, 2002. Online at https://apps.dtic.mil/sti/pdfs/ADA402533.pdf</ref>
In 1997, the United States Army awarded the Carnegie Group, Inc. $750K for SBIR Phase II. The intent was to develop “...a war-gaming modeling and analysis Decision Support System (DSS). ... CADET will consist of a combination of Knowledge-Based and decision analytic tools and technologies to provide fast nimble COA war-gaming modeling, simulation, and animation under direct control of the commander and staff. ... Phase II will result in an operations prototype (OP) suitable for use and evaluation in field exercises.”
In 2000, CADET was integrated and experimentally evaluated within the framework of the Integrated Course of Action Critiquing and Elaboration System (ICCES) experiment, conducted by the Battle Command Battle Laboratory – Leavenworth (BCBL-L).
In 2000–2002, DARPA applied CADET in the program titled Command Post of the Future ([[Command Post of the Future|CPoF]]) as a tool to provide a maneuver course of action. Under the umbrella of the CPoF program, CADET was integrated with the FOX GA system to provide a detailed planner, coupled with COA generation capability. In the same period, Battle Command Battle Lab-Huachuca (BCBL-H) integrated CADET with the All Source Analysis System-Light (ASAS-L) to provide a planner for intelligence assets and to wargame enemy COAs against friendly COAs.<ref name="Ground-2002-Knowledge" /><ref name="Ruda-2001-Distributed">Ruda, Harald, Janet Burge, Peter Aykroyd, Jeffrey Sander, Dennis Okon, and Greg L. Zacharias. "Distributed course-of-action planning using genetic algorithms, XML, and JMS." In Battlespace Digitization and Network-Centric Warfare, vol. 4396, pp. 260-269. SPIE, 2001.</ref>
From 1996 through 2002, work on CADET was performed by the Carnegie Group, Inc., and supported by funding from the US Army [[CECOM]] (CADET SBIR Phase I, CADET SBIR Phase II and CADET Enhancements); DARPA (Command Post of the Future); and [[United States Army Training and Doctrine Command|TRADOC]] BCBL-H.<ref name="Kott-2002-Toward">Kott, Alexander, Larry Ground, Ray Budd, Lakshmi Rebbapragada, and John Langston. "Toward practical knowledge-based tools for battle planning and scheduling." In Proceedings of AAAI/IAAI, pp. 894-899. 2002.</ref>
Taking this input, CADET automatically performed the following tasks (not sequentially):<ref name="Ground-2000-CADET"/><ref name="Kott-2005-Building">Kott, Alexander, Ray Budd, Larry Ground, Lakshmi Rebbapragada, and John Langston. "Building a tool for battle planning: challenges, tradeoffs, and experimental findings." Applied Intelligence 23, no. 3 (2005): 165-189.</ref>
* Planning and scheduling the tasks required to execute the course of action
* Allocating tasks to available friendly units
* Assigning suitable locations and routes
* Estimating timing, attrition, and consumption of resources
* Predicting enemy actions or reactions.
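The interplay of the tasks above can be illustrated with a simplified sketch. The following Python fragment is not CADET's actual implementation (which is not publicly available as code); it only shows, under invented task and unit names, the general shape of an expand-allocate-schedule loop of the kind the list describes: a high-level task is expanded into primitive subtasks, and each subtask is greedily assigned to the earliest-available unit.

```python
from dataclasses import dataclass, field

# Illustrative sketch only; all task and unit names are hypothetical.

@dataclass
class Task:
    name: str
    duration: int                       # notional duration in hours
    subtasks: list = field(default_factory=list)

def expand(task):
    """Recursively expand a task into its primitive (leaf) subtasks."""
    if not task.subtasks:
        return [task]
    primitives = []
    for sub in task.subtasks:
        primitives.extend(expand(sub))
    return primitives

def schedule(primitives, units):
    """Greedily allocate each primitive task to the earliest-free unit."""
    free_at = {u: 0 for u in units}     # hour at which each unit becomes free
    plan = []
    for t in primitives:
        unit = min(free_at, key=free_at.get)
        start = free_at[unit]
        free_at[unit] = start + t.duration
        plan.append((t.name, unit, start, start + t.duration))
    return plan

# Hypothetical mission decomposed into three phases.
mission = Task("seize objective", 0, [
    Task("recon route", 2),
    Task("secure crossing", 3),
    Task("assault objective", 4),
])
plan = schedule(expand(mission), ["1st Bn", "2nd Bn"])
for step in plan:
    print(step)
```

A real planner of this kind would additionally respect ordering constraints between tasks, terrain and route data, and attrition estimates; the greedy allocation here stands in for those far richer knowledge-based mechanisms.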
== Evaluation ==
Two evaluation experiments are described in the literature. The first experiment, called ICCES, took three days.
The second experiment was reminiscent of a [[Turing test]]. The experiment involved one user, nine judges (active-duty officers, mainly colonels and lieutenant colonels), and five scenarios obtained from several US Army exercises. For each scenario, experimenters obtained synchronization matrices that were produced in earlier exercises, typically by a team of four to five officers in three to four hours, for comparison with the corresponding matrices produced by CADET.
== Legacy ==