Colloquium Brief: Robotics and Military Operations – Kingston Conference on International Security
- 2015/08/14
- Duration: under 1 minute
- Podcast
Summary
Dr. Robert J. Bunker – Key Insights: A number of definitions for autonomous systems exist. One working definition used at the colloquium holds that their key feature is the ability of a given system to perform certain functions independently. Autonomous systems can support military operations by contributing to force protection, increasing mobility, and improving lethality. The level of autonomy varies across platforms, but human oversight is still viewed as central to the deployment of autonomous systems in military operations. The definition of the term “robot,” on the other hand, is more generally agreed upon: it signifies a technology that requires sensors, artificial intelligence, and tools to carry out its tasks.

However, much debate remains over human decision-making requirements for armed robotic systems, that is, whether the human is in the loop, on the loop, or outside the loop. This debate is framed by observe, orient, decide, and act (OODA) loop requirements in warfare that increasingly surpass human cognitive capacity (military necessity), weighed against the traditional view that only human beings should be responsible for the decision to take the lives of other human beings (ethics and morality).

Limitations on autonomous robots derive from mechanical (effectors), environmental (hazards), and mission (objectives) variables. Simple robot-use scenarios are far more favorable to autonomous systems because the complexity threshold is lower. A sense-model-plan-act (SMPA) model is therefore used to contend with the robotics complexity problem; it is based on iterated SMPA cycles and draws on both deliberate and reactive acts. This is why open-air and open-sea environments are at present much easier to operate autonomous systems in than complex, populated urban environments. Emerging technologies that may overcome these limitations are based on probabilistic, networking, and parallel-processing innovations.
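The SMPA cycle described above can be illustrated with a toy sketch. This is not any real robotic system's control software; all names and the one-dimensional "corridor" environment are illustrative assumptions. It shows the iterated sense → model → plan → act stages, with a deliberate act (advancing toward a goal) and a reactive act (waiting when a hazard is sensed), and why a cluttered environment stalls a simple autonomous loop while an open one does not.

```python
# Toy sense-model-plan-act (SMPA) loop on a 1-D corridor.
# "." = open cell, "X" = hazard. All names are hypothetical.

def sense(env, position):
    """Sense: return a local observation of the cell ahead."""
    nxt = position + 1
    return {"blocked_ahead": nxt < len(env) and env[nxt] == "X"}

def model(world, observation):
    """Model: fold the new observation into the persistent world model."""
    world["blocked_ahead"] = observation["blocked_ahead"]
    return world

def plan(world, goal, position):
    """Plan: reactive act if a hazard is sensed, else a deliberate step."""
    if world["blocked_ahead"]:
        return "wait"                       # reactive response to hazard
    return "advance" if position < goal else "stop"

def run_smpa(env, goal, max_iters=20):
    """Act: execute SMPA iterations until the goal is reached or time runs out."""
    position, world = 0, {}
    for _ in range(max_iters):
        world = model(world, sense(env, position))
        action = plan(world, goal, position)
        if action == "advance":
            position += 1
        elif action == "stop":
            break
    return position

# Open environment: the loop reaches the goal.
print(run_smpa([".", ".", ".", ".", "."], goal=4))   # → 4
# Cluttered environment: the simple loop stalls at the hazard.
print(run_smpa([".", "X", ".", "."], goal=3))        # → 0
```

The contrast between the two runs mirrors the brief's point: open air and open sea keep the complexity threshold low enough for simple autonomy, while hazard-dense urban terrain demands far richer modeling and planning.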
Our adversaries, nonstate and even potentially state-based, will not be constrained by our democratic legal and ethical inhibitors in using these systems. Many of them will be enemies of all civilized peoples. For some years now, groups such as Hezbollah, al-Qaeda, the Islamic State, and others have been utilizing unmanned aerial vehicles (UAVs) for terrorist plots, anti-personnel targeting, reconnaissance, and even propaganda. As a result, we need not only to red-team current and near-future opposing-force capabilities (to about 10 years out) but also to develop countermeasures against opposing remote-controlled and semi- and fully-autonomous systems (e.g., counter-cyber control targeting) and operating concepts (e.g., counterswarm).

Continuity and change must also be considered when looking at these systems. Robotics and autonomous systems (RAS) will not be revolutionary in the sense of changing the fundamental nature of war. Four constants in war exist: war is an extension of politics; it is profoundly human (fear, honor, and interest); it is characterized by uncertainty (complexity, interaction with enemies, and new technologies); and it is inherently a contest of wills. Still, change from an evolutionary perspective will be profound, similar to that of the revolution in military affairs (RMA). From the context of land warfare, that is, from a muddy-boots-on-the-ground perspective, this change will be less significant than in the naval, air, and space mediums. Robotics and autonomous systems technologies quite likely exist within the context of much greater revolutionary potentials intertwined with nonlethal targeting, new energy sources, advanced manufacturing (3-D and 4-D printing), networked and cloud-based information and social media, the commercialization of security, and changing human values. U.S.