“It is currently estimated that AI and robotic systems will be ubiquitous across the operational framework of 2035.” (RAS MDO white paper 2018: 25)
Are we on the verge of a robotic revolution of military affairs? Will intelligent machines take control of the future battlefield and replace human warfighters? Recent advances in military neurotechnologies, robotics, and artificial intelligence (AI) have evoked the transgressive image of the ‘cyborg warrior’, a weaponized brain-computer network powered by AI and neurocognitive augmentation. In the wake of these emergent military technologies, some of our most fundamental assumptions and definitions of human intelligence, autonomy, and responsibility have been challenged. These concepts are central to our understanding of the lawful and ethical conduct of war. They are also closely associated with human agency and the ability to make context-dependent decisions and critical evaluations in matters of life and death. The question that begs to be answered is whether – and how – these concepts can be applied to cyborg systems that, by definition, are not entirely human. What kind of military capacity is a cyborg warrior? A warfighter or a weapons system? A human or a machine? In the following, we will argue that the cyborg warrior is neither a human subject nor a piece of military hardware, but a heterogeneous assemblage – or rather a ‘nexus’ – of human and non-human capacities, transmitting and decoding streams of information in military battle networks. As such, we prefer to talk about cyborg and neurocognitive weapons systems, stressing the intrinsic entanglement of human and artificial intelligence that challenges traditional human-machine distinctions and dichotomies.
Until recently, most people believed cyborg warfare to be purely science-fictional. Indeed, it is hard to imagine anything darker and more unsettling than a robotic army, enhanced by AI, commanding the future battlefield (Marks 2015; Scharre 2018). However, the application of cyborg technologies in military operations is not merely a futuristic fantasy. Today, advanced brain-computer interfaces are customized to the personal helmet (the ‘wearable cockpit’) used by F-35 fighter pilots and constitute standard applications in a variety of head-mounted displays (HMD) used by warfighters in both training and tactical combat scenarios. Recently, neuroscientific progress has been made in areas such as neurointelligence (intelligence fusion and predictive analytics), neurocognitive enhancement of warfighters (adaptive and interactive brain-computer interfaces), and neuroweaponry (target recognition, coordination, and control of weapons systems) (Giordano 2015), using AI as human decision support and cognitive enhancement.
However, as these emerging technologies evolve, growing concerns are raised about how they will affect the future of military command and control (C2), including the legal and ethical implications of weaponized neurocognitive systems. Since AI plays a significant role in advanced neuroweaponry, many of these considerations coincide with the insecurities introduced by the military use of AI and so-called ‘killer robots’1: Can autonomous robotic systems be held accountable for their actions? Will they be able to comply with legal and ethical conventions of International Humanitarian Law? Can they distinguish between combatants and non-combatants in a highly dynamic and cluttered operational environment? These issues remain contested2 and are, so to speak, ‘built’ into AI-enhanced cyborg weapons systems, challenging existing legal frameworks and moral values.
Addressing these concerns, we believe, demands a closer look at the problem these technologies are supposed to solve. As stated by neuroscientist James Giordano, the deployment of AI and neurocognitive systems in military battle networks should be seen as a response to the increasing amount of real-time data in the operational environment and to the challenges of an omnipresent information overload that exceeds the limitations of human cognitive capacities (Giordano 2015). In the following, we argue that the hybridization of human and artificial intelligence in cyborg weapons systems not only enhances the cognitive performance of warfighters. It also presents a way to leverage increased autonomy in intelligent and unmanned systems while simultaneously keeping humans ‘in the loop’, applying legal and ethical judgment and context-sensitive protocols of war in military operations.
More specifically, we argue that AI-enabled brain-computer networks have the potential to reconfigure the classical hierarchical structure of military command, prompting a shift to a more collaborative and flexible network command regime. This requires the practice of a new form of ‘network command responsibility’ and a reflexive form of ‘jurisprudence’ that determines questions of accountability and liability in military operations, such as: Which circumstances could warrant the use of neurocognitive weapons systems? And who can ultimately be held responsible for decisions and actions performed by cyborg warriors?
Given the relative nascence of neuroscience and technology, many of these issues are still speculative. Yet the pace of progress in AI-based neural interfaces and the ‘need for speed’ in military command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) will continue to push cyborg technologies and neuroethical considerations to the front.
In this article, we will address the transgressive nature of cyborg weapons systems and the way they shape the human perception and conduct of war. Adopting a socio-technical and constructivist approach to technological mediation and the co-production of risk, we do not attempt a neuroscientific study, nor do we develop technical solutions to human-machine symbiosis. Instead, we will introduce some basic concepts and definitions of cyborg and neurocognitive weaponry that will allow for critical debate on the emerging domain of ‘neurospace’ and human enhancement of warfighters.3 Although the underpinnings of this discussion are highly technical, the article turns to military studies rather than neurocognitive and computer sciences.
The empirical basis of our discussion is drawn from a variety of military technology assessments such as the NATO Science and Technology Organization (NATO STO) trends report 2020–2040 (2020) and the landmark report on Emerging Cognitive Neuroscience and Related Technologies (2008) published by the ad hoc Committee on Military and Intelligence Methodology for Emergent Neurophysiological and Cognitive/Neural Research in the Next Two Decades (National Research Council of the National Academy of Sciences). For the sake of simplification, we will refer to this text as the NAS 2008 report. To place ourselves firmly in the current operational framework of Multi-Domain Operations (MDO), we also draw extensively on the white paper on “Operationalizing Robotic and Autonomous Systems in Support of Multi-Domain Operations” by the Army Capabilities Center – Future Warfare Division (2018), hereafter referred to as the RAS MDO white paper. With this empirical ‘double vision’, we set out to explore the intersection of neuroscience, robotics, and military command.
The article falls into four parts: The first part presents some basic concepts and definitions of cyborg technologies and neuroweapons as part of an emerging neuroscientific security discourse. The second part of the article sets the general framework and context of multi-domain warfare in which these technologies are shaped and applied as military capabilities. The third part introduces the concepts of ‘collaborative risk mediation’ and ‘composite intentionality’, stressing the mutual entanglement and ‘interference’ of human and artificial intelligence in the emerging domain of neurospace. In the last part of the article, we address the urgent need for governing principles and guidelines, including the legal and ethical aspects of cyborg warfare. Thus, we call for an interdisciplinary discussion of the emergent frontiers and practices of neurospace and the negotiation of neuroethical standards in the international security community. At the center of these discussions, we pose the question of ‘meaningful human control’ and responsibility in networked military command.
As a first step in our inquiry, we need to distinguish between neurotechnology, which is used to detect, affect, and target human brain activity (e.g. improve, repair, degrade, or manipulate cognitive skills), on the one hand, and AI, which is used in computers, sensors, and robotic systems, on the other. A ‘neural network’ is a specific form of AI, consisting of a set of algorithms loosely modeled on the workings of the human brain. A ‘neuron’ in a neural network is a mathematical function that collects and classifies information according to a specific architecture (Chen 2019). Neurocognitive or cyborg networks, by contrast, are hybrid systems of human and artificial intelligence, i.e. brain-computer networks that integrate the cognitive advantages of humans and computers. For many years, the two sciences, the science of the human brain and the science of AI, have developed side by side, mutually inspiring and informing each other. Now, the scientific exploration of neurotechnology and AI is rapidly converging and accelerating the development of neural feedback systems that allow a two-way communication stream between the human brain and the computer. The convergence of AI and neurotechnology and the implications of integrating, not just combining or ‘teaming’, human and machine cognition,4 are the focus of our interest. Humans and computers work together everywhere. This is not new. However, until recently they have done so as separate entities. This separation is beginning to erode, as ubiquitous AI and neurotechnological advances have made the distinction between human and machine cognition unclear and in some cases even obsolete. When we refer to ‘cyborg and neurocognitive weapons systems’, and not just one or the other, it is precisely because we want to stress this increasing interference of human and non-human cognition, which goes well beyond – and has to be distinguished from – other hybrid technologies such as bionic limbs and advanced hearing or visual aids.
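The idea of a ‘neuron’ as a mathematical function that collects and classifies information can be made concrete with a minimal sketch. This is purely illustrative: the weights, inputs, and choice of a sigmoid activation are our own assumptions, not drawn from any system cited above.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial 'neuron': a weighted sum of its inputs
    passed through a nonlinear activation (here, the logistic sigmoid)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # output in (0, 1), usable as a class score

# Toy example: score a two-feature input (all values invented for illustration)
score = neuron(inputs=[0.5, -1.2], weights=[0.8, 0.3], bias=0.1)
```

A neural network stacks many such functions in layers, with the ‘specific architecture’ mentioned above determining how the outputs of one layer feed the inputs of the next.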
For this same reason, it is important not to confuse the notion of the cyborg warrior with the concept of the ‘centaur warfighter’ (Scharre 2018: 321), which is often used as a metaphor for human-machine teaming. The two concepts are closely related, but not synonymous. This distinction can be expressed as the difference between integration and automation of machine intelligence, perception, and reasoning. Whereas centaur human-machine teaming consists of humans plus machines, with machines performing clearly demarcated automated functions, the cyborg warrior functions as a neurally enhanced and integrated system architecture,5 merging human and machine cognition. Centaur human-machine teaming does not necessarily imply cognitive or sensory enhancement of the human operator. Human and machine cognition is not neurally integrated. Instead, humans and machines perform different role-specific tasks that are largely based on predetermined decision models where the machine’s role is conditioned by one or more rule sets (Murray & Yanagi 2015: 17).
As opposed to centaur human-machine teaming, cyborgs have no preprogrammed role specifications but adapt continuously to shifting situations and demands in the operational environment. According to Kline and Clynes (1961), such systems can be regarded as ‘cybernetic organisms’ (i.e. cyborgs) in that they entail both natural and artificial systems that are functional, portable, and/or biologically integrated (Wurzman & Giordano 2015: 90). As such, cybernetic and cyborg systems can be seen as “sophisticated distributed human-machine networks, such as integrated software or robotic augmentations to human-controlled activity, that would fuse and coordinate the distinct cognitive advantages of humans and computers” (Wurzman & Giordano 2015: 90). Consequently, cyborg technologies used in a networked risk environment will “reflect a combination of autonomous initiative and original problem solving by both human and machine. This means shared agency and responsibility in military decisions” (Murray & Yanagi 2015: 17).
The attribution of shared agency and responsibility to humans and machines is central to the definition of cyborg and neurocognitive weapons systems and demarcates a shift from automated decision support to collaborative information and risk management, with human and machine intelligence mediating and co-shaping the perception, organization, and distribution of risk. The advantage of such systems is increased flexibility and accountability, ensuring human judgment and responsibility over engagements while simultaneously leveraging the precision and speed of AI. This becomes particularly urgent when cyborg technologies are used as offensive weapons systems (Murray & Yanagi 2015: 17).
Whenever neurocognitive systems are used as weapons (either defensive or offensive) against an opponent, they are broadly classified as ‘neuroweapons’. Traditionally, a weapon is defined as “a means of contending against an other” and “something used to injure, defeat, or destroy” (Merriam-Webster Dictionary 2020). As stated by neuroscientists Rachel Wurzman and James Giordano, both definitions apply to neurotechnologies used as weapons in intelligence and/or defense scenarios:
“Neurotechnology can support intelligence activities by targeting information and technology infrastructures, to either enhance or deter accurate intelligence assessment, the ability to efficiently handle amassed, complex data, and human tactical or strategic efforts. The objectives for neuroweapons in a traditional defense context (e.g. combat) may be achieved by altering (i.e. either augmenting or degrading) functions of the nervous system, so as to affect cognitive, emotional and/or motor activity and capability” (Wurzman & Giordano 2015: 80–81).
However, neuroweapons are inherently ambiguous and elusive systems that defy easy explanation and definition. A clear-cut and authoritative definition does not exist, and disagreements over core components, structure, design, and purpose are negotiated by different security actors in academia, industry, the military, and national civil services (McCreight 2015: 117). Moreover, a significant problem is the “amount of pseudoscientific information and journalistic oversimplification related to cognitive science” (NAS 2008 report). Consequently, definitions of neuroweaponry tend to be either too broad or too narrow to be useful for critical evaluation. An attempt at a comprehensive definition has been made by intelligence analyst Robert McCreight (2015), who proposes that:
“Neuroweaponry encompasses all forms of interlinked cybernetic, neurological, and advanced biotech systems, along with the use of synthetic biological formulations and merged physiobiological and chemical scientific arrangements, designed expressly for offensive use against human beings” (p. 117).
The problem here is that the definition itself becomes so abstract that it needs translation to be applicable. Instead, for our purpose, we will use a more pragmatic definition: Neuroweapons include any kind of neurotechnological agent, drug, or device designed to either enhance or degrade the cognitive performance of warfighters and to target intelligence and command structures as both non-kinetic and kinetic weapons. As such, they can be used to influence, shape, augment, or restrict human perception and decision-making. With these generic properties as a very broad characterization of neuroweaponry, we can classify the cyborg warrior as a specific kind of neurocognitive weapons system, i.e. a certain class of neuroweapons, using AI as cognitive enhancement in hybrid brain-computer networks.
This preliminary outline of cyborg and neurocognitive weaponry is a far cry from the popular Terminator image of science fiction. In other words, they are not autonomous ‘killer robots’. Instead, they appear in the shape of networked assemblages with multiple operators, sensors, computers, and platforms combining cybersystems and brain functions. Thus, the cyborg warrior is neither a human subject nor an autonomous robot, but an augmented and distributed system architecture, a hybrid human-machine network that integrates artificial and human cognition in military mission planning and control. This means that a new domain beyond cyber and space must be added to the existing definition of multi-domain battlespace. Following McCreight (2015), we adopt the concept of neurospace6 to demarcate the emergence of a new strategic frontier of multi-domain warfare performed by networked humans and computers. As he puts it, “the new battlespace is the brain itself” (McCreight 2015: 125).
With this conceptual framework, we wish to emphasize the transgressive nature of cyborg systems and the way they are presented as a matter of national security. The security discourse of neuro-enhanced weapons systems concerns both the opportunities and the risks of disruptive neurotechnologies. Different security actors hold different and competing views on neuroweapons. Some scholars warn against the dangers of weaponized and ungoverned cybernetic systems that target the human mind (Farwell 2015). Others stress the benefits of neuro-enhanced capabilities that maximize soldier performance in intelligence operations, support military decision-making, and increase return on investment in unmanned, AI-based, and robotic systems (Stanney et al. 2015; NAS 2008 report). In this way, we find two neuroscientific narratives securitizing the realm of the cyborg warrior: On the one hand, we find a narrative of the ‘dark side’ of unregulated ‘neurowarfare’ with globally networked, self-learning machines taking control over human life and death. This could be called the ‘neuroskeptic’ narrative (Benanti 2015). On the other hand, we find a narrative of AI-enabled cognitive augmentation and ‘decision superiority’ that strengthens situational awareness, enhances warfighter performance, and integrates effects across multiple domains of operation. This could be called the ‘neurooptimistic’ narrative. Both narratives draw on and contribute to the emergence of a neuroscientific network discourse that securitizes the boundaries of neurospace. However, these boundaries are inherently unstable, constantly renegotiated and reconstructed as temporary regulations and arrangements of the battlefield. These arrangements include both people and technologies, military doctrines, legal and ethical conventions, technical specifications, and political programs that perform and co-shape neurospace as a distinct domain of operation.
This understanding radically alters the way we normally perceive the relationship between humans and technology and challenges the existing norms and boundaries of warfare. Instead of using a classical binary distinction between humans and machines, we see them as collaborative risk mediators, co-shaping and co-performing mission planning and execution. This involves a shift from a ‘human-centric’ understanding of intelligence and agency to a distributed (non-hierarchical) network model acknowledging the interconnectedness of humans and technology (Oie & McDowell 2015: 34).
According to Wurzman and Giordano (2015), there is significant utility for weaponized neurotechnologies and cyborgs in contemporary warfare, where threat environments are “asymmetric, amorphous, complex, rapidly changing, and uncertain” and “require greater speed and flexibility” (p. 81). This view is supported by the general characterization of multi-domain warfare and the pursuit of ‘game-changing’ military technologies needed to ensure success on the battlefield. Thus, it is commonly maintained that battlefield success depends on the ability to operate in an increasingly networked, accelerated, and information-intensive security environment (Oie & McDowell 2015: 35–36). Moreover, it is widely believed that the proliferation and use of information technologies on the battlefield makes it vital to maintain “superiority in the generation, manipulation and use of information”, i.e. ‘information dominance’, in order to secure ‘decision superiority’ – in other words, the process of making decisions better and faster than adversaries (Oie & McDowell 2015: 36).
This information-driven approach to the future battlefield is expressed in the RAS MDO white paper (2018), stating that “[t]he future force requires the ability to collect, assess, analyze, and fuse data through the employment of AI” (RAS MDO white paper 2018: v). Referring to the MDO concept, the white paper describes how advanced networks of humans and intelligent machines can be used to outmaneuver enemy forces and counter their Anti-Access/Area Denial (A2/AD) capabilities across domains (land, air, maritime, space, and cyberspace), the electromagnetic (EM) spectrum, and the information environment. The key to successful battle management in MDO, according to the white paper, is networked artificial and human intelligence:
“Artificial intelligence agents and algorithms will enable future force operations by processing, exploiting, and disseminating intelligence and targeting data. Operating forces will use AI to cue sensors and integrate cross-domain fires; reduce a staff’s cognitive load while simultaneously enabling a commander’s decisions at the pace of battle; and manage airspace, networks, and robotic and autonomous systems” (RAS MDO white paper: 35).
In support of this view, organizations like the Defense Advanced Research Projects Agency (DARPA) have initiated neurotechnological research projects that examine advanced signal-processing techniques for real-time coding of neural patterns in order to improve military decision-making and predictive analytics (Farwell 2015: 134). This includes neural interfaces and sensor designs that interact with the central and peripheral nervous system using nanoneuroscience, neuroimaging, and cyber-neurosystems (Krishnan 2016: 15). These technologies provide techniques and tools that are designed to assess, access, and target neural systems (Giordano 2015) and can be used to affect the cognitive, emotional, and behavioral aspects of human performance in military operations.
The general assumption in these assessments is that neurotechnological progress will gain significant importance and impact as a ‘force multiplier’ in the future battlefield, “in which a combination of unmanned systems, artificial intelligence, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone” (Emondi 2019; NATO Science and Technology Organization 2020: 96–97). Similarly, it is concluded that intelligent, distributed human-machine networks will be able to assist human operators in advanced sensor grids and intelligence analyst workbenches, coordination of joint or coalition operations, logistics, and information assurance (NAS 2008 report: 143). This would allow future forces to understand the operational environment in real time, increase speed and situational awareness, lighten the warfighters’ cognitive workload, leverage autonomous and robotic systems, and converge capabilities across all domains of operation.
These assessments are key elements in the neuroscientific security discourse and draw heavily on the neurooptimistic narrative of ‘decision superiority’ in multi-domain warfare. However, looking at the flip-side of the neuroscientific imaginary, we find a competing narrative stressing the vulnerability of cyborg systems to cyberattacks, intrusion, and manipulation of information by enemy forces, threatening core functions of the network as well as the security of the human operators. How can they be protected from ‘neural malware’ infecting the network? If humans and computers are neurally networked, can human operators be ‘hacked’ or even controlled by enemy governments, terrorists, or cyber criminals? What would a ‘neural attack’ look like, and how could it be detected? Would we be entering an era of cyborg flash wars, occurring at machine speed, far beyond the limits of human perception? These are unsettling – and unanswered – questions raised by neuroskeptics and opponents of neurowarfare.
To get a better understanding of these challenges, we will have to take a closer look at the human-machine interface and the way human and artificial intelligence mutually shape and mediate the perception of the operational environment. More specifically, we want to explore how these transgressive technologies mediate human and machine cognition, and how they shape the realm of the cyborg warrior.
Following the seminal report on Emerging Cognitive Neuroscience and Related Technologies (NAS 2008 report), the basis of neurocognitive technologies and brain-computer networks is the capture and visualization of various forms of energy emissions from the working brain. This visualization is achieved by means of functional neuroimaging devices,7 i.e. devices that present digital images of neural activity in the human brain, e.g. fMRI or EEG (see Figure 1).
Neuroimagery can detect and classify human cognitive states such as fatigue or mental and sensory overload in real time by measuring changes in brain activity. By visualizing different types of brain activity, neuroimaging technologies offer different windows onto complex neural processes, often with the goal of understanding the relationship between regional neural activity and specific tasks, stimuli, cognition, and behavioral patterns (NAS 2008 report: 51). The detection, classification, and interpretation of specific patterns of neural activity can be conducted by machine learning through advanced signal processing and pattern recognition, allowing a bidirectional transmission of information between human and machine. While the development of neuroimaging technologies and self-learning algorithms forms the basis of advanced brain-computer interfaces and augmented sensory capacities (e.g. visual and auditory enhancement), direct neural enhancement of the human brain is still at an early stage of development and unlikely to be available before 2050 (NATO STO 2020: 96). Nevertheless, according to NATO’s tech trends 2020–2040 report, cognitive enhancement based on bidirectional data transfer and mesh networks is a real possibility (NATO STO 2020: 96). As recent developments in DARPA’s Augmented Cognition Program show, functional neuroimaging technology combined with machine learning and AI can be used to control and communicate with unmanned and remotely piloted systems, allowing efficient searching and encyclopedic access to information (NATO STO 2020: 52). This requires an efficient process of neural decoding and translation between the human brain and the computer, either via invasive neurotechnological implants (nanotransducers) or non-invasive external devices (see Figure 2).
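The kind of pattern recognition described above can be gestured at with a deliberately simple sketch: a nearest-centroid classifier that labels an incoming feature vector with a cognitive state. All numbers, feature dimensions, and state labels are invented for illustration; real EEG or fMRI classification uses far richer features and learned models.

```python
import math

# Hypothetical training data: two-dimensional 'band-power' features
# recorded under two cognitive states (values are made up).
training = {
    "alert":    [[0.2, 0.9], [0.3, 0.8], [0.25, 0.85]],
    "fatigued": [[0.8, 0.3], [0.9, 0.2], [0.85, 0.25]],
}

def centroid(samples):
    """Average each feature column to get the class prototype."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

centroids = {state: centroid(s) for state, s in training.items()}

def classify(features):
    """Nearest-centroid pattern recognition: return the cognitive state
    whose prototype is closest (Euclidean distance) to the new reading."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda state: dist(features, centroids[state]))

state = classify([0.82, 0.28])  # a new reading, near the 'fatigued' cluster
```

In a bidirectional system, such a classification would then feed back to the operator or the interface, for instance by adapting the information load when ‘fatigued’ is detected.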
As an example, DARPA has been working on non-invasive brain-computer interfaces that use the human visual system as the input device to a computer system to increase the speed of data processing in visual search mode (NATO STO 2020). In DARPA’s Next-Generation Non-Surgical Neurotechnology (N3) Program, the goal is to create reliable neural interfaces without the need for surgery (Emondi 2020). Instead of invasive brain implants, the brain-computer interface is designed as a wearable, head-mounted device (cap, helmet, or visor) that transmits electrical signals from the brain to the computer and back to the operator in a closed-loop,8 bidirectional feedback system (see Figure 3). The brain signals are picked up by sensors in the wearable interface, analyzed and translated by AI, and sent back as an output signal to the human operator, for instance as a list of alternative options to engage a target or coordinate data streams from other platforms or networked weapons systems. Ultimately, it is envisioned that adaptive neuro-feedback systems could help to develop and evaluate targeting data, create layered options, enable cross-domain synergy, and exploit opportunities in time-sensitive environments. At this stage of development, these interfaces would primarily be suited for analysts and operators in military reach-back facilities and headquarters that provide a relatively stable and controlled environment. However, a better understanding of closed-loop and adaptive neuro-feedback systems will be necessary to improve systems design and maximize human performance while simultaneously avoiding mental or cognitive overload in operators and intelligence analysts.
According to several scholars, this requires a shift from a human-centric model of intelligence and agency to a network model that involves the interconnectedness of humans and intelligent systems in advanced AI-based networks (Oie & McDowell 2015; NAS 2008 report: 94). Whereas early intelligent systems were like disembodied entities (often caricatured as a floating brain in a glass jar hooked up with a bunch of electric wires), networked cyborg systems should be seen as embodied technologies that sense and interact with the environment in numerous ways through both human and non-human sensors and operators. This ‘embodiment’ of networked technologies is perhaps the most radical and transgressive property of cyborg weapons systems. It is also the distinct quality that makes cyborg weapons systems something else and far more than just another piece in the military toolkit. They cannot be adequately understood as isolated components or pieces of military equipment. More profoundly, we contend, they can be understood as neurocognitive assemblages that continuously translate and mediate human and machine perception and agency.
The mediation of human and machine perception has been described by neuroscientists Dylan Schmorrow and Amy Kruse as ‘closed-loop augmented cognition’, based on Human-in-the-Loop System Adaptation (Schmorrow & Kruse 2004) – or simply neuro-feedback. In adaptive closed-loop systems, the brain-computer feedback process starts with the operator engaging in a cognitive task while receiving possible stimuli (e.g. visual or sensory input). As shown by Marcel van Gerven et al. (2009), the neural activity of the human operator is detected by sensors and processed by the computer (see Figure 3). An outcome is predicted by AI, which generates and transmits an output signal directly to the brain or via an external interface. The output signal can be presented in multiple forms and modalities such as text, auditory input, motor commands (e.g. controlling prosthetics or unmanned systems), or graphical and vibrotactile representations of brain activity (van Gerven et al. 2009: 5). The decision cycle is closed by the operator perceiving the output, which allows an evaluation and adaptation of the feedback process. While iterating through the cycle, both the operator and the computer may learn to adapt, thereby increasing the cognitive performance of the overall system (van Gerven et al. 2009: 2).
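The iterative cycle described by van Gerven et al. – sense, predict, output, evaluate, adapt – can be caricatured in a few lines of code. Everything here (the noise model, the threshold update rule, the gain parameter) is a hypothetical stand-in for illustration, not the architecture of any system discussed above.

```python
import random

random.seed(42)  # reproducible toy run

def closed_loop(threshold=0.5, gain=0.1, cycles=20, noise=0.6):
    """Minimal closed-loop adaptation sketch: on each cycle the system
    reads a noisy 'neural signal', predicts the operator's state, and
    nudges its decision threshold based on the feedback error."""
    for _ in range(cycles):
        true_state = random.choice([0.0, 1.0])               # e.g. stimulus absent / present
        signal = true_state + random.uniform(-noise, noise)  # sensed neural activity (noisy)
        prediction = 1.0 if signal > threshold else 0.0      # system predicts an outcome
        error = true_state - prediction                      # operator's evaluation closes the loop
        threshold -= gain * error                            # system adapts for the next cycle
    return threshold

adapted = closed_loop()  # decision threshold after co-adaptation over 20 cycles
```

The point of the sketch is the loop structure, not the numbers: prediction errors feed back into the system's parameters, so that operator and machine jointly tune the overall performance across iterations, as described in the adaptive closed-loop model.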
This adaptive system approach blurs the distinction between human and artificial intelligence and attributes agency and decision authority to both humans and machines in hybrid system architectures (Murray & Yanagi 2015: 17). This includes complex collaborative tasks such as target recognition, threat analysis, mission planning, and intelligence fusion.
In contrast to stand-alone AI and machine learning algorithms, adaptive brain-computer networks with humans ‘in the loop’ have the ability to respond to unforeseen changes and exercise discretionary judgment in mission planning and control of operations. This is essential in complex and time-sensitive tasks such as dynamic targeting,9 where the prioritization of targets can change in an instant depending on operational circumstances. This form of technological mediation and agency not only supports human decision-making. It reshapes and accelerates the entire ‘OODA loop’10 in the military decision cycle. More specifically, the network approach to technological mediation and agency recognizes that integrated artificial and human cognition is crucial for the conduct of missions in which speed, the amount of information, and synchronization might overwhelm human decision-making (RAS MDO white paper 2018: 2). Thus, neurocognitive weapons systems and cyborg technologies are not just cooperative in the sense of ‘team members’ or robotic assistants interacting with and enabling human operators to perform ‘dirty, dull, and dangerous’ tasks. They are not just ‘intelligent tools’ projecting human intention and agency. Rather, they should be understood as collaborative risk mediators that actively co-shape and mediate the perception, evaluation, and communication of risk in MDOs. In this perspective, risk management is the result of shared human-machine cognition and the coproduction of critical decisions in military C4ISR networks. This means that decision-making is seen as a joint effort of human beings and intelligent technologies. In the words of Peter-Paul Verbeek, it is an:
“… inherently hybrid affair, involving both human and non-human intentions, or better ‘composite intentions’ with intentionality distributed over the human and non-human elements in human-technology-world relationships. Rather than being derived from human agents, this intentionality comes about in associations between humans and nonhumans. For that reason, it could be called ‘hybrid intentionality’” (Verbeek 2009: 11).
This leads to something beyond a conventional brain-computer interface – the goal is not merely to control external devices by interfacing them with the brain (NAS 2008 report: 89), but more profoundly to merge human and machine cognition. However, attributing intentionality and agency to hybrid networks rather than individual human operators introduces a range of questions concerning command responsibility, decision authority, and transparency: Under what circumstances would the use of neurocognitive weapons be justified? What rules define the cyborg warrior as a legal subject? If intentionality, and hence responsibility, is distributed in hybrid network arrangements, who can be held legally and ethically accountable in case of misconduct or malfunction of the system? What military doctrines, policies, and agreed protocols would provide a governing framework for the use of weaponized neurotechnology? (McCreight 2015: 126; Farwell 2015: 156). To frame it differently: cyborg and neurocognitive weapons systems not only mediate the perception and management of risk; they also co-produce a new domain of risk, situated and performed at the intersection of human and artificial intelligence, demarcating the frontiers of neurospace. This is what we have characterized as a transgressive property of cyborg weapons systems, challenging existing international laws and conventions of war.
While advanced brain-computer networks could enable dramatic improvements in the mission performance of both human operators and autonomous machines, such hybrid system architectures will require an adjustment of existing norms and categorizations of what constitutes a moral agent and a legal subject in military operations. This poses significant challenges to existing doctrines and conventions of war that must be addressed before the fielding of cyborg and neuroweaponry. Since cyborg systems share many of the defining features of robotic and autonomous weapons systems, they also raise many of the same potential problems (Murray & Yanagi 2015: 18). Much of the difficulty encountered in the ongoing controversy over robotic and autonomous weapons systems centers on the key questions of command responsibility, transparency, and the ability of the system to explain its recommendations:
“Artificial intelligence support technologies must be able to explain recommendations, and in the case of autonomous systems provide data, that explains decisions. System integration, interchangeability, and communication require that the Joint Force define standards for architecture, language, and protocols between robotic and autonomous systems, platforms and payloads” (RAS MDO white paper 2018: v).
If military personnel are asked to adopt AI-enabled systems, they must be able to trust that these systems work as intended. Until now, one of the main obstacles to building trust in and exploiting the potential of autonomous weapons systems has been the lack of transparency in AI-based processes and semantics. As remarked by the NATO Sub-Committee on Technology Trends and Security (2019): “Today, it is still very difficult and sometimes impossible to understand if AI systems draw the right conclusions and even how they arrive at those conclusions” (p. 7). The systems often appear as ‘black boxes’ to researchers and operators. “Algorithms sometimes produce ‘odd’ results, solve problems in a counterintuitive or false manner, and sometimes even ‘cheat’” (Sheppard et al. 2018, as cited in Tonin 2019). Even system engineers and programmers are unable to fully explain why advanced AI algorithms choose some options and not others, and why they come up with the solutions they do. Without transparent and ‘explainable AI’, responsibility will be difficult to place, and trust in the system will be hard to attain.
The same obstacles emerge with the use of AI-enabled cyborg and neurocognitive weapons systems. The problem lies in the transgressive nature of cyborg systems that are ruled not by human or machine intent, but by ‘hybrid intentionality’ distributed among multiple entities in the battle network. The notion of hybrid or composite phenomena does not sit well with the binary ‘either-or logic’ in military and legal terminology. Hence, the question remains: Who – human or machine – is ultimately responsible for decisions made or actions taken during mission execution with a neuro-enhanced brain-computer architecture? (Murray & Yanagi 2015: 19).
To answer this question and bridge the so-called ‘responsibility gap’, efforts have been made by the international security community under the auspices of the UN Convention on Certain Conventional Weapons (UN CCW) to define a set of governing principles for the use of robotic and autonomous systems in military operations. Although progress has been slow due to conflicting security interests and lack of a clear definition of the word ‘autonomous’, consensus has been achieved on the somewhat vague notion that robotic and autonomous weapons systems must incorporate ‘meaningful human control’ to be lawfully deployed (Horowitz & Scharre 2015).
The exercise of ‘meaningful human control’ is closely associated with compliance with the legal and ethical conventions of International Humanitarian Law. This includes the principles of distinction (Article 8(2)(b)(i)) and proportionality (Article 8(2)(b)(iv)), i.e. the ability to distinguish between combatants and non-combatants and to avoid collateral damage and civilian injury that would be clearly excessive compared to the anticipated military advantage. It has repeatedly been stated by critics such as Human Rights Watch and the Campaign to Stop Killer Robots that compliance with these principles requires human judgment and the capacity to adapt ethical considerations to a complex and unpredictable risk environment. Since robotic and autonomous machines do not possess the critical capacity of human judgment and adaptability to unforeseen situations, and since they cannot be held legally and ethically accountable for their actions, they are inherently unlawful weapons, it is argued (Sharkey 2008).
The objection relies on two basic assumptions: 1) that autonomous machine cognition and behavior are uncontrollable once activated, and 2) that the use of such machines violates International Humanitarian Law. Both assumptions, we argue, can be refuted by a network perspective on humans and intelligent machines as collaborative risk mediators. In other words, the responsibility gap can be bridged in cyborg and neurocognitive battle networks by keeping the human operator ‘in the loop’, applying legal and ethical judgment (i.e. ‘meaningful human control’) in all phases of the military decision cycle. What could be dubbed ‘cyborg ethics’, however, requires a turn from a strict hierarchical model of military command to a network approach that recognizes the profound entanglement of human and machine perception and mediation of risk. Additionally, a concept of ‘network command responsibility’ must be formulated to comply with the legal and ethical criteria of International Law. To address these central aspects of cyborg ethics, we will draw on two equally important interpretations of the network discourse that have been introduced in recent military and legal discussions: the key concepts of ‘network command’ (King 2019) and ‘network share liability’ (Teubner 2004). Both concepts are needed to avoid ‘legal black holes’ and to bridge the responsibility gap in cyborg and neurocognitive weapons systems.
First, we will examine the concept of network command as a collaborative approach to human-machine decision-making and agency. In his international bestseller, Command: The Twenty-First-Century General, Anthony King (2019) describes what he characterizes as a paradigmatic shift from a classical hierarchical command regime to a collective command regime. According to King, the collective command regime reflects the development of a multi-dimensional and information-intensive operational environment that puts the traditional hierarchical command and control structure under pressure. The ambiguous and rapidly changing battlefield challenges the legal-rational and often cumbersome processes of the bureaucratic order. In a state of near-peer competition, where adversaries have successfully deployed their anti-access/area-denial (A2/AD) capabilities, a new type of distributed and collaborative mission command is required to exploit cross-domain synergies, maximize effect, and outperform competitors:
“Complex environments require different leadership and decision-making techniques than succeeding in simple or complicated environments … experimentation and collaboration are keys to success in the complex domain … To enable collaboration, leaders and staff must be capable of forming more flat, distributed organizations in addition to traditional hierarchical models” (Klein 2017, as cited in King 2019).
The NATO Strategic Foresight Analysis (SFA) 2017 supports this view. In the SFA report, the shift to a more collaborative and innovative organization model that replaces traditional ‘stove-piped working practices’ is seen as a requirement in a future security environment characterized by growing interconnectedness, disruptive changes, and rapid technological advancements:
“This will require a shift from an organizational culture that takes an incremental approach, has stove-piped working practices and waits for greater clarity, to one that has a more collaborative approach that supports bold and innovative decisions” (NATO 2017: 7).
Similarly, in the RAS MDO white paper, a more distributed and collaborative network approach to command and control is seen as necessary to maintain situational awareness and a Common Operational Picture “that captures all systems in real time and allow for mission command of multiple manned and unmanned systems” (RAS MDO white paper 2018: 27).
More specifically, we argue, the turn toward a network command regime can be seen as a response to the proliferation of embedded AI in decision support technologies, unmanned systems, wearable and portable devices, as well as adaptive, closed-loop brain-computer interfaces (Skinner et al. 2014). It reflects the growing influence of pervasive computing and augmented human-machine performance systems in a risk environment where “the increased number of sensors and platforms, all processing and transmitting high volumes of diverse data at tactical speeds, exceeds human cognitive capabilities in time-sensitive environments” (RAS MDO white paper 2018: 30). Moreover, the network command regime is inextricably linked with the changing nature of a contested battlespace that “requires the capability to execute tactical, operational, and strategic communications and data sharing beyond-line-of-sight through a secure, autonomous, self-healing and intelligent network” (RAS MDO white paper: v).
As the network discourse has gained growing influence in military strategy and doctrine development, the question of legal and ethical responsibility has been pushed to the fore: How is law to respond to a network command regime where decisions are shaped and distributed between multiple (human and non-human) operators, sensors, and platforms? How can we avoid legal black holes and assign responsibility to cyborg and neurocognitive weapons systems?
According to sociologist and legal scholar Gunther Teubner (2004), the attribution of responsibility to complex collaborative networks challenges the binary logic of legal semantics and creates a general state of ‘irritation’ or ‘hybridization’ of law. Legal doctrine cannot simply adopt the term network command, but must itself reconstruct a legal definition out of its own internal logic (Teubner 2004: 10). As a response to the network irritation, a new legal construct of network share liability emerges in hybrid law, distinguishable from both individual (contractual/market) liability and collective (corporate/hierarchical) liability. The construct of network share liability is especially suited to situations where the contribution of networked operators to mission execution cannot be traced back to individual nodes but only to the network itself (Teubner 2004: 21):
“The appropriate form of liability is a decentralized, multiple and collective combination of network liability and the liability of nodes who have in fact participated in the operation under scrutiny. In contrast to comprehensive collective liability in the case of formal organizations, this leads to a re-individualization of collective liability within networks” (Teubner 2004: 21).
The legal solution of network share liability is to allow a ‘double attribution’ of responsibility to individual operators and the network as a whole; one and the same transaction is doubly attributed: to individual network nodes and to the overall network (Teubner 2004: 20). No decision or course of action is seen as an isolated event, but always as part of a collective arrangement of humans and technologies.
With the legal construct of network share liability, the practice of hybrid law becomes responsive to the transgressive characteristics of cyborg weapons systems and the practice of a network command regime. As pointed out by legal scholar Inger Johanne Sand (2012), hybrid law is a response to a growing demand for flexible and many-dimensional organizing concepts (p. 193):
“In order to be relevant and effective law is using networks instead of only formal organizations, soft law instead of hard law, preambles and purpose-statements instead of formally binding obligations, references to knowledge and technologies instead of specific legal semantics” (Sand 2012: 195).
The turn toward a network paradigm in military and legal semantics, we argue, mirrors the increasingly hybrid arrangements and practices of humans and intelligent technologies in advanced information and communication networks. As such, it forms the basis of a new military and legal framework of cyborg ethics. The defining characteristic of cyborg ethics is the legal construct of double attribution of responsibility, i.e. the simultaneous attribution of responsibility to individual network nodes and the collective command network, combining principles of different and often contradictory legal regimes of collective and individual liability: “Instead of the binary distinction legal/non-legal there are oscillations between different legalities … What is legal will then often be a close oscillation between contradictory legal norms and different values” (Sand 2012: 190).
Thus, cyborg ethics involves the emergent practice of a more reflexive and context-sensitive form of jurisprudence, where obligations and prescriptions from multiple legal regimes interact to form a complex web of international governance (Crootof 2015: 98). As a new kind of network risk management, the jurisprudence of cyborg ethics requires the ability of military officeholders and warfighters to coordinate and translate between a plurality of competing norms, standards, and values in military doctrines, legal conventions, and political programs. In contemporary accelerated and hybrid battlefields, this translation takes place in the interface of humans and intelligent technologies shaping critical decisions in the full spectrum of operations (Nørgaard 2017: 16).
Adopting the notion of shared human-machine agency and responsibility in cyborg and neurocognitive weapons systems allows us to transcend the classical human-centric and hierarchical order of military command and organization. To avoid so-called legal black holes and to bridge the responsibility gap in cyborg weapons systems, we propose a network approach to human-machine interaction and risk management that recognizes the intrinsic entanglement and co-constitution of human and machine intelligence. Formal organizational structures and legal orders are mandatory, but they must be coupled with a more reflexive and context-sensitive form of jurisprudence, i.e. the ability to evaluate, coordinate, and translate between multiple legal, ethical, and political definitions of transparency, accountability, and meaningful human control. In cyborg systems and augmented brain-computer interfaces, decision-making and hence risk management should be viewed as a joint effort of human operators and intelligent machines. Understanding how the realm of neurospace and cyborg warfare can be demarcated and regulated then requires interdisciplinary experimentation and collaboration between military operators, system engineers, lawyers, and policy makers. As robotic and cyborg weapons systems proliferate, general guidelines and rules of engagement will help to build trust in human-machine interaction and support decision-making in a contested and increasingly networked battlespace. Promoting a timely and prudent discussion of cyborg ethics and network command is not just a futuristic endeavor. It is a matter of urgency that must be considered by governments as well as the international security community in terms of reducing vulnerabilities and enhancing joint warfighting capabilities.
3. We distinguish between treatment and enhancement of cognitive capacities. Where neurocognitive treatment is used to repair or replace lost or damaged cognitive and/or motor skills, neurocognitive enhancement is used to improve cognitive skills in healthy individuals.
4. Machine cognition is the capability of a technical device, sensor, or system to sense and perceive its environment, using artificial intelligence (e.g. machine learning) to process inputs and to produce recommendations for future actions based on this analysis.
5. This kind of ‘networked’ neuroenhancement differs significantly from the cognitive enhancement induced by different types of neurotoxins. Although neurotoxins are also broadly classified as neurotechnologies, they do not involve cyborg qualities such as human-machine perception, decision-making, and agency.
6. Steve Mann was among the first to capture the hybrid nature of neurospace, defining the criteria for devices to be subsumed into the human being’s ‘eudemonistic space’ (the space where the device seems to be part of the person). See Mann (1997).
8. I.e. the loop from the measurement of brain activity, classification of data, feedback to the subject, and the effect of feedback on brain activity. See van Gerven et al. (2009).
9. Dynamic targeting consists of six distinct steps: find, fix, track, target, engage, and assess (F2T2EA). Retrieved 24.06.2020 from: https://www.doctrine.af.mil/Portals/61/documents/Annex_3-60/3-60-D17-Target-Dynamic-Task.pdf.
10. The ‘OODA loop’ refers to the four phases of the military decision cycle: observe, orient, decide, act. See Boyd (1995).
The authors have no competing interests to declare.
Army Capabilities Integration Center – Future Warfare Division. (2018). Operationalizing Robotic and Autonomous Systems in Support of Multi-Domain Operations [White Paper]. Retrieved 10.03.2020 from https://info.publicintelligence.net/USArmy-RoboticAutonomousMultiDomainOps.pdf
Benanti, P. (2015). Between Neuroskepticism and Neurogullibility: The Key Role of Neuroethics in the Regulation and Mitigation of Neurotechnology in National Security and Defense. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 217–225). Boca Raton: CRC Press.
Boyd, J. R. (1995). The Essence of Winning and Losing. Retrieved 24.06.2020 from https://web.archive.org/web/20110324054054/http:/www.danford.net/boyd/essence.htm
Chen, J. (2019). Neural Network – Investopedia homepage. Retrieved 14.04.2020 from https://www.investopedia.com/terms/n/neuralnetwork.asp
Crootof, R. (2015). The Varied Law of Autonomous Weapon Systems. In A. P. Williams & P. Scharre (Eds.), Autonomous Systems: Issues for Defence Policymakers (pp. 98–126). Norfolk, Virginia: Allied Command Transformation (ACT).
Emondi, A. (2019). Six Paths to the Nonsurgical Future of Brain-Machine Interfaces – DARPA homepage. Retrieved 31.03.2020 from https://www.darpa.mil/news-events/2019-05-20
Emondi, A. (2020). Next Generation Non-Surgical Neurotechnology – DARPA homepage. Retrieved 31.03.2020 from https://www.darpa.mil/program/next-generation-nonsurgical-neurotechnology
Farwell, J. P. (2015). Issues of Law Raised by Development and Use of Neuroscience and Neurotechnology in National Security and Defense. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 133–165). Boca Raton: CRC Press.
Giordano, J. (2015). Neurotechnology, Global Relations, and National Security: Shifting Contexts and Neuroethical Demands. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 1–10). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454-2
Horowitz, M. C., & Scharre, P. (2015). Meaningful Human Control in Weapon Systems: A Primer. Project on Ethical Autonomy No. 1. Washington, DC: Center for a New American Security. Retrieved from https://www.files.ethz.ch/isn/189786/Ethical_Autonomy_Working_Paper_031315.pdf
King, A. (2019). Command – The Twenty-First-Century General. Cambridge: Cambridge University Press. DOI: https://doi.org/10.1017/9781108642941
Kline, N. S., & Clynes, M. E. (1961). Drugs, Space, and Cybernetics: Evolution to Cyborgs. In B. E. Flaherty (Ed.), Psychophysiological aspects of space flight (pp. 345–371). New York: Columbia University Press.
Krishnan, A. (2016). Military Neuroscience and the Coming Age of Neurowarfare. London: Routledge. DOI: https://doi.org/10.4324/9781315595429
Mann, S. (1997). Wearable Computing: A first step toward personal imaging. IEEE Computer, 30(2), 25–32. DOI: https://doi.org/10.1109/2.566147
Marks, J. H. (2015). Neuroskepticism: Rethinking the Ethics of Neuroscience and National Security. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 179–198). Boca Raton: CRC Press.
McCreight, R. (2015). Brain Brinkmanship: Devising Neuroweapons Looking at Battlespace, Doctrine and Strategy. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 115–132). Boca Raton: CRC Press.
Merriam-Webster Dictionary. (2020). Definition of Weapon. Retrieved 22.06.2020 from https://www.merriam-webster.com/dictionary/weapon
Murray, S., & Yanagi, M. A. (2015). Transitioning Brain Research: From Bench to Battlefield. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 11–22). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454-3
National Research Council of the National Academy of Sciences. (2008). Emerging Cognitive Neuroscience and Related Technologies. Washington, DC: The National Academies Press. DOI: https://doi.org/10.17226/12177
NATO. (2017). Strategic Foresight Analysis. Retrieved from https://www.act.nato.int/images/stories/media/doclibrary/171004_sfa_2017_report_hr.pdf
Nørgaard, K. (2017). A Study of Military Technopolitics: The Controversy of Autonomous Weapon Systems. Copenhagen: Royal Danish Defence College. Retrieved from https://pure.fak.dk/files/7137147/A_Study_of_Military_Technopolitics_NET.pdf
Oie, K. S., & McDowell, K. (2015). Neurocognitive Engineering for Systems’ Development. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 33–50). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454-5
Sand, I. J. (2012). Hybridization, Change and the Expansion of Law. In N. Å. Andersen & I. J. Sand (Eds.), Hybrid Forms of Governance: Self-suspension of Power (pp. 186–204). Basingstoke: Palgrave Macmillan. DOI: https://doi.org/10.1057/9780230363007_11
Schmorrow, D. D., & Kruse, A. A. (2004). Augmented Cognition. In W. S. Bainbridge (Ed.), Berkshire Encyclopedia of Human-Computer Interaction (pp. 54–59). Great Barrington, Massachusetts: Berkshire Publishing Group.
Sharkey, N. (2008). Cassandra or False Prophet of Doom: AI Robots and War. IEEE Intelligent Systems, 23(4), 14–17. DOI: https://doi.org/10.1109/MIS.2008.60
Skinner, A., Russo, C., Baraniecki, L., & Maloof, M. (2014). Ubiquitous Augmented Cognition. In D. D. Schmorrow & C. M. Fidopiastis (Eds.), Foundations of Augmented Cognition: Advancing Human Performance and Decision-Making through Adaptive Systems (pp. 67–77). Springer International Publishing. DOI: https://doi.org/10.1007/978-3-319-07527-3_7
Stanney, K. M., Hale, K. S., Fuchs, S., Carpenter, A. B., & Berka, C. (2015). Neural Systems in Intelligence and Training Applications. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 23–32). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454-4
Tonin, M. (2019). Artificial Intelligence: Implications for NATO’s Armed Forces. NATO Science and Technology Committee, Sub-Committee on Technology Trends and Security (STCTTS). Retrieved from https://www.nato-pa.int/download-file?filename=sites/default/files/2019-10/REPORT%20149%20STCTTS%2019%20E%20rev.%201%20fin-%20ARTIFICIAL%20INTELLIGENCE.pdf
van Gerven, M., Farquhar, J., Schaefer, R., Vlek, R., Geuze, J., Nijholt, A., Ramsey, N., Haselager, P., Vuurpijl, L., Gielen, S., & Desain, P. (2009). The brain-computer interface cycle. Journal of Neural Engineering, 6(4), 1–10. DOI: https://doi.org/10.1088/1741-2560/6/4/041001
Verbeek, P. (2009). Moralizing Technology: On the Morality of Technical Artifacts and their Design. In D. Kaplan (Ed.), Readings in the Philosophy of Technology (pp. 226–243). Lanham: Rowman and Littlefield.
Wurzman, R., & Giordano, J. (2015). ’NEURINT’ and Neuroweapons: Neurotechnologies in National Intelligence and Defense. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 79–113). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454