Never before have we been so aware of so many conflicts around the world. The accessibility of information about atrocities anywhere—the stories, the videos, the photos, the tweets—can often make it seem as though we live in an exceptionally violent time. But as the newspaper adage goes, “If it bleeds, it leads.” What has changed is not how many conflicts there are, but how visible they’ve become.
If anything, we’re more peaceful than we’ve ever been, with the amount of violence in human societies
declining precipitously in the past several centuries due to developments like strong states (which monopolize violence and institute the rule of law),
commerce (other people become more valuable alive than dead) and expanded international networks (which demystify and humanize the Other). As the psychologist
Steven Pinker explains in The Better Angels of Our Nature, his excellent and comprehensive survey of this trend, historical exogenous forces like these “favor our peaceable motives” like
moral sense, reason and
self-control, which “
orient [us] away from violence and toward cooperation and altruism.” Once conscious of this shift, Pinker remarks, “
The world begins to look different. The past seems less innocent; the present less sinister.”
Surely “connectivity” would belong on Pinker’s list of forces had he written his book fifty years later, because the new level of visibility that perpetrators of violence face in a connected world, and all that it portends, will greatly weaken the incentives for violent action and alter the political calculus of committing crimes as well as of stopping them.
Nevertheless, conflict, wars, violent border skirmishes and mass atrocities will remain a part of human society for generations to come even as they change form in accordance with the technological age. Here we explore the ways in which different elements of conflict—the buildup of
discrimination and persecution, combat and intervention—will change in the coming decades in response to these new possibilities and penalties.
The origins of violent conflicts are far too complex to have a single root cause. But one well-understood trigger that will change substantially in the new digital age is the systematic discrimination or persecution of minority groups, during which targeted communities become the victims of grave violence or themselves become perpetrators of retaliatory acts. We believe that, in the future, massacres on a genocidal scale will be harder to conduct, but discrimination will likely worsen and become more personal. Increased connectivity within societies will provide practitioners of discrimination, whether they are official groups or ones led by citizens, with entirely new ways to marginalize minorities and other disliked communities, whose own use of technology will make them easier to target.
Governments that are used to repressing minorities in the physical world have a whole new set of options in the virtual world, and those that figure out how to combine their policies in both worlds will be that much more effective at repression. Should the government of a connected country wish to harass a particular minority community in the future, it will find a number of tactics immediately available. The most basic would be to simply erase content about that group from the country’s
Internet. States with strong filtering systems will find this easy, since the
ISPs could just be required to block all sites containing certain keywords, and to shut down sites with prohibited content. To scrub the lingering references to the group on sites like
YouTube, the state could adopt an approach similar to China’s policy of active
censorship, where filters automatically shut down the connection whenever a prohibited keyword is detected.
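The keyword-based blocking described above can be sketched in a few lines of Python. This is an illustrative toy, not any real ISP’s system; the blocklist and the “dropped connection” behavior are hypothetical stand-ins.

```python
# Toy sketch of ISP-level keyword filtering: any page containing a
# prohibited term is never delivered to the user. The blocklist and
# the pages are hypothetical examples, not a real system.

BLOCKED_KEYWORDS = {"forbidden-group", "banned-topic"}

def is_blocked(page_text: str) -> bool:
    """True if the page mentions any prohibited keyword."""
    text = page_text.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

def serve(page_text: str) -> str:
    """Deliver the page, or silently drop the connection if blocked."""
    if is_blocked(page_text):
        return ""  # the user simply sees a failed connection
    return page_text
```

The quietness of the maneuver is visible even in the toy: a blocked page produces no warning, only an apparent network failure.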
The Chinese government might well target the
Uighur minority in western China. Concentrated in the restive Xinjiang region, this mostly Muslim Turkic ethnic group has long seen tensions flare with the majority
Han Chinese, and separatist movements in Xinjiang have been responsible for a series of failed uprisings in the past several years. Though small, the Uighur population has caused countless headaches in Beijing, and it’s no stretch of the imagination to think that the government could move from censoring Uighur-related episodes (like the 2009
Ürümqi riots) to eliminating all Uighur content online.
States might view this kind of action as a political imperative, an effort to mitigate the internal threats to stability by simply erasing them. Information about the groups would remain available outside a country’s
Internet space, of course, but internally it would vanish. This would be intended both to humiliate the group by negating its very existence and also to isolate it further from the rest of the population. The state could persecute the group with greater impunity, and in time, if the censorship was thorough enough, future generations of majority groups could grow up with barely any awareness of the minority group or the issues associated with it. Erasing content is a quiet maneuver, difficult to quantify and unlikely to set off alarm bells, because such efforts would have small tangible impact while remaining symbolically and psychologically disparaging to the groups most affected. And even if a government were to get “caught” somehow, and shown to be deliberately blocking minority-specific content, officials would probably justify their actions on security grounds or blame them on computer glitches or infrastructure failures.
If a government wanted to go further than content control, and escalate its discriminatory policies to full-blown persecution online, it could find ways to limit a given group’s access to the Internet and its services. This might sound trivial in comparison with the physical harassment, random
arrests, acts of
violence, and economic and political strangulation that persecuted groups around the world experience today. But as connectivity spreads, Internet service and mobile devices offer vital outlets for individuals to transcend their current environment, connecting them with information, jobs, resources, entertainment and other people. Excluding oppressed populations from participating in the virtual world would be a very drastic and damaging policy, because in important ways they’d be left out and left behind, unable to tap into any of the opportunities for growth and prosperity that we see connectivity bringing elsewhere. As banking, salaries and payment transactions move increasingly onto online platforms, exclusion from the
Internet will severely curtail people’s economic prospects. It would be far more difficult to access one’s money, to pay by credit card or get a loan.
The Romanian government deliberately excludes some 2.2 million ethnic Roma from the same opportunities as the rest of the population, a policy manifested in separate education systems, economic exclusion in the form of hiring discrimination and unequal access to health and medical benefits (not to mention a heavy social stigma). Current statistics on the Roma’s level of access to technology are hard to come by—many Roma fail to register themselves as such on government surveys for fear of persecution—but as we’ve made clear, connected Roma will find ways to improve their circumstances. The Roma might even consider pursuing virtual statehood of some kind in the future.
But if the Romanian government decided to extend its policies toward the Roma into the online world, nearly all of those opportunities would evaporate. Technological exclusion could take many forms, depending on how much control the state has and how much pain it wants to cause. If it required all citizens to register their devices and IP addresses (many governments already require mobile devices to be registered) or maintained a “hidden people” registry, Romanian authorities using that data would find it easy to block the Roma’s access to news, outside information and platforms with economic or social value. These users would suddenly find themselves unable to reliably access their own personal data or their online banking services; they would confront error messages or seem to have egregiously slow connection speeds. Using its power over the country’s telecommunications infrastructure, the government could instigate dropped calls, jam phone signals in certain neighborhoods or occasionally short-circuit the Roma’s connections to the Internet. Perhaps the government, working with private-sector distributors, could engineer the sale of defective devices to Romany individuals (selling to them through compromised trusted intermediaries), distributing laptops and mobile phones riddled with bugs and back doors that would allow the state to input malicious code at a later date.
Rather than a systematic campaign to cut access (which would incur unwelcome scrutiny), the Romanian government would need only to implement these blockages randomly, frequently enough to harass the group itself but intermittently enough to allow for plausible denials. The Roma, of course, could find imperfect technological work-arounds that enabled basic connectivity, but ultimately the blockages would be sufficiently disruptive that even intermittent access couldn’t replace what was lost. Over a long enough period, a dynamic like this might settle into a kind of virtual apartheid, with multiple sets of limitations on connectivity for different groups within society.
Technological discrimination against minority groups will become increasingly prevalent in the future because states have the will to do so, and they have access to the data that enables it. Such initiatives might even start as benign programs with public support, then over time transform into more restrictive and punitive policies as power shifts in the country. Imagine, for example, if the ultra-Orthodox contingent in
Israel lobbied for the creation of a white-listed “kosher
Internet,” where only preapproved websites were allowed, and their bid was successful—after all, the thinking might be, creating a special Internet lane for them is not unlike forming a special “safe” list of Internet sites for children.
Years later, if the ultra-Orthodox swept the elections and took control of the government, their first decision might be to make all of Israel’s Internet “kosher.” From that position, they would have the opportunity to restrict access even further for minority groups within Israel.
The most worrisome result of such policies is how vulnerable these restrictions would make targeted groups, whose lifelines could literally be cut. Limited access could serve as a precursor to physical harassment or state violence by compromising a group’s ability to send out alert signals, and it would also strip victims of their ability to document the abuse or destruction afterward. Soon it may be possible to say that what happens in a digital vacuum, in effect, doesn’t happen.
In countries where governments are targeting minority or repressed groups in this way, an implicit or explicit arrangement between some citizens and states will emerge, whereby people trade information or obedience in exchange for better access. Where noticeable cooperation with the government is demonstrated, the state will grant those individuals faster connections, better devices, protection from online harassment or a broader range of accessible Internet sites. An artist and father of six living in
a Shiite minority community may have no desire to become an informant or sign a government pledge to stay out of political affairs, but if he calculates that cooperation means a more reliable income for himself or better educational opportunities for his children, his resolve might well weaken. The strategy of co-opting potentially restive minority groups by playing to their incentives is as old as the modern state itself; this particular incarnation is merely suited to our digital age.
Neither of these tactics—erasing content and limiting access—is the purview of states alone. Technically capable groups and individuals can pursue
virtual discrimination independently of the government. The world’s first virtual genocide might be carried out not by a government but by a band of fanatics. Earlier, we discussed how extremist organizations will venture into destructive online activities as they develop or acquire technological skills, and it follows that some of those activities will echo the harassment described above. This goes for lone-wolf zealots, too. It’s not hard to imagine that a rabidly anti-Muslim activist with strong technical skills might go after his local Muslim community’s websites, platforms and media outlets to harass them. This is the virtual equivalent of defacing their property, breaking into their businesses and shouting at them from street corners. If the perpetrator is exceptionally skilled, he will find ways to limit the Muslims’ access by targeting certain routers to shut them down, sending out jamming signals in their neighborhoods or building computer viruses that disable their connections.
In fact, virtual discrimination will suit some extremists better than their current options, as a former neo-Nazi leader and current anti-hate activist named
Christian Picciolini told us. “
Online intimidation by hate groups or extremists is more easily perpetrated because the web dehumanizes the interaction and provides a layer of anonymity and ‘virtual’ disconnection,” he explained. “Having the Internet as an impersonal buffer makes it easier for the intimidator to say certain harmful things that might not normally be said face-to-face for fear of peer judgment or persecution.
Racist rhetoric rightfully carries a certain social stigma against the general population, but online, words can be said without being connected to the one saying [them].” Picciolini expects virtual harassment by hate groups to increase significantly in the coming years, since “the consequences of online discrimination seem less audacious to the offender, and therefore [harassment will] happen more frequently and to a more vehement degree.”
In the past, physical and legal exclusion was the dominant tactic used by the powerful in conflict-prone societies, and we believe that virtual exclusion will come to join (but not surpass) that tool kit. When the conditions become unbearable, as throughout history, the sparks of conflict will ignite.
Misinformation and propaganda have always been central features of human conflict.
Julius Caesar filled his famous account of the
Gallic Wars (58 B.C.–50 B.C.) with titillating reports of the vicious barbarian tribes he’d fought. In the fog of competing narratives, determining the “good” and “bad” actors in a conflict is a critical yet often difficult task, and it will become even more challenging in the new digital age. In the future, marketing wars between groups will become a defining feature of conflict, because all sides will have access to electronic platforms, tools and devices that enhance their storytelling abilities for audiences at home and abroad. We saw this unfold during the November 2012 conflict between
Israel and Hamas, when the terrorist organization launched a grassroots marketing war that flooded the virtual world with graphic photos of dead women and children. Hamas, which thrives on a public that is humiliated and demoralized, was able to exploit the larger number of casualties in
Gaza. Israel, which focuses more on managing national morale and reducing ambiguity around its actions, countered by utilizing an @IDFSpokesperson
Twitter handle, which included tweets like “
Video: IDF pilots wait for area to be clear of civilians before striking target youtube.com/watch?v=G6a112wRmBs
… #Gaza.” But the reality of marketing wars is that the side willing to glorify death and use it for propaganda will often gain more widespread sympathy, especially as a larger and less informed audience joins the conversation. Hamas’s propaganda tactics were not new, but the growing ubiquity of platforms such as
Twitter made it possible for them to reach a much larger and non-Arabic-speaking audience in the West, who with each tweet, like and plus-one amplified Hamas’s marketing war.
Groups in conflict will try to destroy each other’s
digital marketing capabilities before a conflict even starts. Few conflicts are clearly black-and-white at the end—let alone when they start—and this near-equivalency in communications power will greatly affect how civilians, leaders, militaries and the media deal with conflict. What’s more, the very fact that anyone can produce and share his or her version of events will actually nullify many claims; with so many conflicting accounts and without credible verification, all claims become devalued. In war, data management (compiling, indexing, ranking and verifying the content emanating from a conflict zone) will soon supplant access to technology as the predominant challenge.
Modern communication technologies enable both the victims and the aggressors in a given conflict to cast doubt on the narrative of the other side more persuasively than with any media in history. For states, the quality of their marketing might be all that lies between staying in power and facing a foreign intervention. For civilians trapped in a town under siege by government forces, powerful amateur videos and real-time satellite mapping can counter the claims of the state and strongly suggest, if not prove, that it is lying. Yet in a situation like the 2011 violence in
Côte d’Ivoire (where two sides became locked in a violent battle over contested electoral results), if both parties have equally good digital marketing, it becomes much harder to discern what is really happening. And if neither side is fully in control of its marketing (that is, if impassioned individuals outside the central command produce their own content), the level of obfuscation rises even more.
For outsiders looking in, already difficult questions, like whom to speak with to understand a conflict, whom to support and how best to support them, become considerably more complicated in an age of marketing wars. (This is particularly true when few outsiders speak the local language, or in the absence of standing alliances, like those among NATO countries or the members of the Southern African Development Community, SADC.) Critical information needed to make those decisions will be buried beneath volumes of biased and conflicting content emanating from the conflict zone. States rarely intervene militarily unless it is very clear what is taking place, and even then they often hesitate for fear of the unforeseen physical consequences and the scrutiny of the twenty-four-hour news cycle.
Marketing wars within a conflict abroad will have domestic political implications, too. If the bulk of the American public, swayed by one side’s emotionally charged videos, concludes that intervention in a given conflict is a moral necessity, but the U.S. government’s intelligence suggests that those videos aren’t reflective of the real dynamics in the conflict, how should the administration respond? It can’t release classified material to justify its position, but neither can it effectively counter the narrative embraced by the public. If both sides present equally persuasive versions, outside actors become frozen in place, unable to take a step in any direction—which might be the exact goal of one of the parties in the conflict.
In societies susceptible to ethnic or sectarian violence, marketing wars will typically begin long before there is a spark that ignites any actual fighting. Connectivity and virtual space, as we’ve shown, can often amplify historical and manufactured grievances, strengthening the dissonant perspectives instead of smoothing over their inaccuracies. Sectarian tensions that have lain somewhat dormant for years might reignite once people have access to an anonymous online space. We’ve seen how religious sensitivities can become inflamed almost instantaneously when controversial speech or images reach the Internet—the
Danish cartoon controversy in 2005 and violent demonstrations over the
Innocence of Muslims video in 2012 are just two of many prominent examples—and it’s inevitable that online space will create more ways for people to offend one another. The viral nature of incendiary content will not allow an offensive act in any part of the world to go unnoticed.
Marketing is not the same thing as intelligence, of course. Early attempts at digital marketing by groups in conflict will be little more than crude propaganda and misinformation transferred to a virtual platform. But over time, as these behaviors are adopted around the world by states and individuals, the aesthetic distance between intelligence and marketing will close. States will have to be careful not to mistake one for the other. Once groups are wise to what they need to produce in order to generate a specific response, they will be able to tailor their content and messaging accordingly.
Those with state resources will have the upper hand in any marketing war, but never the exclusive advantage. Even if the state controls many of the means of production—the cell towers, the state media, the ISPs—it will be impossible for any party to have a complete information monopoly. When all it takes to shoot, edit, upload and disseminate user-generated content is a palm-sized phone, a regime can’t totally dominate. One video captured by a shaky mobile-phone camera during the postelection protests in Iran in 2009 galvanized the opposition movement: the famous “Neda video.” Neda Agha-Soltan was a young woman living in Tehran who, while parked on a quiet side street at an antigovernment protest, stepped out of her car to escape the heat and was shot in the heart by a government sniper on a nearby rooftop. Amazingly, the entire incident was caught on someone’s mobile phone. While members of the crowd attempted to revive Neda, others began filming her on their phones as well. The videos were passed between Iranians, mostly through the peer-to-peer platform
Bluetooth, since the regime had blocked mobile communications in anticipation of the protests; they found their way online and went viral. Around the world, observers were galvanized to speak out against the Iranian regime while protesters in Iran marched, calling for justice for Neda. All of this significantly ratcheted up the global attention paid to a protest movement the regime was desperately trying to stop.
Even in the most restrictive societies, places where
spyware and virtual
harassment and pre-compromised mobile phones are rampant, some determined individuals will find a way to get their messages out. It might involve smuggling
SIM cards, rigging mesh networks (essentially, a wireless collective in which all devices act like routers, creating multiple nodes for data traffic instead of a central hub) or distributing “invisible” phones that are designed to record no communications (perhaps by allowing all calls to be voice over
IP) and that allow anonymous use of Internet services. All state efforts to curtail the spread of an in-demand technology fail; it’s merely a question of when. (This is true even for the persecuted minorities whose government tries to exclude them from the Internet.) Long before the
Neda video, Iran tried to ban satellite-television dishes; the ban was met with an increase in satellite adoption among the Iranian public. Today, the illegal satellite market in Iran is among the largest per capita in the world, and even some members of the regime profit from black-market sales.
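The mesh-network idea mentioned above (every device acting as a router, so traffic has many possible paths instead of a single central hub) can be illustrated with a minimal flooding sketch. The node names are hypothetical, and real mesh protocols are far more sophisticated:

```python
# Minimal sketch of mesh-style flooding: every device relays what it
# receives to its radio neighbors, so there is no central hub whose
# removal stops delivery. Node names here are hypothetical.

def flood(links: dict, source: str) -> set:
    """Return the set of nodes a message starting at `source` reaches."""
    reached = {source}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor in links.get(node, []):
            if neighbor not in reached:
                reached.add(neighbor)
                frontier.append(neighbor)
    return reached

# Five phones in mutual radio range, forming redundant paths.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
```

Even if one relay is jammed, delivery succeeds as long as some chain of devices survives, which is exactly why shutting down a central hub fails against mesh topologies.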
The 1994 Rwandan
genocide, a high-profile conflict from the pre-digital age that claimed the lives of 800,000 people, demonstrates what a difference proportionate marketing power makes. In 1994, while Hutus, Tutsis and
Twa all owned radios, only Hutus owned radio stations. With no means of amplifying their voices, Tutsis were powerless against the barrage of propaganda and hate speech building on the airwaves. When Tutsis tried to operate their own radio station, the Hutu-dominated government identified these operators, raided their offices and made arrests. If the minority Tutsi population in the years leading up to the 1994 genocide had had the powerful mobile devices we have today, perhaps a narrative of doubt could have been injected into Rwandan public discourse, so that some ordinary Hutu civilians would not have found the anti-Tutsi propaganda sufficiently compelling to lead them to take up arms against their fellow Rwandans. The Tutsis would have been able to broadcast their own content from handsets, while on the move, without having to rely on government approval or intermediaries to develop and disseminate content. During the genocide, the Hutu radio stations announced names and addresses of people who were hiding—one can only imagine what a difference an alternative communications channel, like encrypted peer-to-peer messaging, might have made.
Despite potential gains, there will be longer-term consequences to this new level playing field, though we cannot predict what will be lost when traditional barriers are removed.
Misinformation, as mentioned above, will distract and distort, leading all actors to misinterpret events or miscalculate their response. Not every brutal crime committed is part of a systematic slaughter of an ethnic or religious group, yet it can be incorrectly cast as such with minimal effort. Even in domestic settings, misinformation can present a major problem: How should a local government handle an angry mob at city hall demanding justice for a manipulated video? Governments and authorities will face questions like these repeatedly, and only some of the answers they give will be pacifying.
The best and perhaps only reply to these challenges is
digital verification. Proving that a photo has been doctored (by checking the
digital watermark), or that a video has been selectively edited (by crowd-sourcing the whole clip to prove parts are missing), or that a person shown to be dead is in fact alive (by tracking his online identity) will restore some veracity to coverage of a hyper-connected conflict. In the future, a witness to a militia attack in
South Sudan will be able to add things like digital watermarks,
biometric data and
satellite positioning coordinates to lend heft to his claims, useful when he shares the content with police or the media. Digital verification is the next obvious stage of the process. It already occurs when journalists and government officials cross-check their sources against other forms of information. It will be even easier and more reliable when computers do most of the work.
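One simple building block of such verification, already standard practice, is comparing cryptographic fingerprints: if even one byte of a photo or clip is altered, its hash no longer matches the original. A minimal sketch, with invented bytes standing in for a real photo:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of a file; any edit to the bytes changes it."""
    return hashlib.sha256(data).hexdigest()

# Invented bytes standing in for a photo as first captured.
original = b"raw bytes of a photo as first captured"
doctored = original.replace(b"photo", b"fake!")

assert fingerprint(original) == fingerprint(original)  # untouched file verifies
assert fingerprint(original) != fingerprint(doctored)  # any edit is detectable
```

The catch, of course, is that a fingerprint proves tampering only relative to a trusted original, which is why verification monitors would want hashes recorded as close to the moment of capture as possible.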
Teams of international verification monitors could be created and dispatched to conflicts where there is significant dispute about the emerging digital narratives. Like the Red Cross, verification monitors would be seen as neutral agents, in this case technically highly capable ones.
(They need not be deployed to the actual conflict zone in every case—their work could sometimes be done over an Internet connection. But in conflicts where communications infrastructure is limited or overwhelmingly controlled by one side, proximity to the actors would be necessary, as would language skills and cultural knowledge.) Their stamp of approval would be a valuable commodity, a green light for media and other observers to take given content seriously. A state or warring party could bypass these monitors, but doing so would devalue whatever content was produced and make it highly suspect to others.
The monitors would examine the data, not the deed, so their conclusions would be weighted heavily, and states might launch interventions, send aid or issue sanctions based on what they say. And, of course, with such trust and responsibility comes the inevitable capacity for abuse, since these monitors would be no more immune to corruption than other international organizations. Regimes might attempt to co-opt verification monitors through bribes or blackmail, and some monitors might harbor personal biases that reveal themselves too late. Regardless, the bulk of these monitors would be honest engineers and journalists working together, and their presence in a conflict would lead to more safety and transparency for all parties.
When not engaged in marketing wars, groups in conflict will attack whatever online entities they deem valuable to the other side. This means targeting the websites, platforms and communications infrastructure that have some strategic or symbolic importance with distributed denial of service
(DDoS) attacks, sophisticated viruses and all other types of cyber warfare. Online attacks will become an integral part of the tactical strategy for groups in conflict, from the lowest-intensity fight to full-fledged warfare. Attacking or incapacitating a rival group’s communications network will not only interfere with its digital marketing abilities but will also affect its access to resources, information and its support base. Once a network or database has been successfully compromised, the infiltrating group can use the information it gathers to stay informed, spread misinformation, launch preemptive attacks and even track high-value targets (if, for example, a group found the mobile numbers of regime officials and had monitoring software that revealed their locations).
Virtual attacks will happen independently and in retaliation. In a civil war, for example, if one side loses territory to the other, it might retaliate by bringing down its rival’s propaganda websites so as to limit its ability to brag about the victory—not an equivalent gain, of course, but damaging nonetheless. This is the virtual-world version of bombing the ministry of information, often one of the first targets in a physical-world conflict. A repressive government will be able to locate and disable the online financial portals that revolutionaries in the country are using to receive funds from supporters in the diaspora.
Hackers sympathetic to one side or the other will take it upon themselves to dismantle whatever they can reach: YouTube channels run by their adversaries, databases relevant to the other side. When
NATO began its military operations in
Serbia in 1999, pro-Serbian hackers targeted public websites for both NATO and the
U.S. Defense Department, with some success. (NATO’s public-affairs website for
Kosovo was “
virtually inoperable” for days as a result of the attacks, which also seriously clogged the organization’s e-mail server.)
In the coming decades, we’ll see the world’s first “smart” rebel movement. Certainly, they’ll need guns and manpower to challenge the government, but rebels will be armed with technologies and priorities that dictate a new approach. Before even announcing their campaign, they could target the government’s communications network, knowing it constitutes the real (if not official) backbone of the state’s defense. They might covertly reach out to sympathetic governments to acquire the necessary technical components—
such as biometric information—to disable it, from within or without. A digital strike against the communications infrastructure would catch the government off guard, and as long as the rebels didn’t “sign” their attack, the government would be left wondering where it came from and who was behind it. The rebels might leave false clues as to the origin, perhaps pointing to one of the state’s external enemies, to confuse things further. As the state worked to get itself back online, the rebels might strike again, this time infiltrating the government’s Internet and “spoofing” identities (tricking the network into believing the infiltrators are legitimate users) to further disorient and disrupt the network processes. (If the rebels gained access to an important biometric database, they could steal the identities of government officials and impersonate them online, making false statements or suspicious purchases.) Finally, the rebels could target something tangible, like the country’s
power grids, the manipulation of which would generate public outcry and blame, incorrectly aimed at the government. Thus the smart rebel movement could, with three digital strikes and no shots fired, find itself uniquely poised to mobilize the masses against a government that wasn’t even aware of a domestic rebellion. At this point, the rebels could begin their military assault and open a second, physical front.
Conflicts in the future will also be influenced by two distinct and largely positive trends that stem from connectivity: first, the
wisdom of the online crowd, and second, the
permanence of data as evidence, which we alluded to earlier as making it harder for perpetrators of violence to deny or minimize their crimes.
Collective wisdom on the Internet is a controversial subject. Many decry the negative extremes of online collaboration, such as the aggressive mediocrity of the “
hive mind” (the collective consensus of groups of online users) and the viciousness of anonymity-fueled pack behavior on forums, social networks and other online channels. Others champion the level of accuracy and reliability of crowd-sourced information platforms like
Wikipedia. Whatever your view, there are potential gains that collective wisdom can bring to future conflict.
With a more level playing field for information in a conflict, a greater number of citizens can participate in shaping the narratives that emerge. Widespread mobile-phone usage will ensure that more people know what’s going on inside a country than did in earlier times, and Internet connectivity extends that sphere of engagement to a broad range of outside actors. On balance, there are always more people on the side of good than on the side of the aggressors. With an engaged population, there is greater potential for citizen mobilization against injustice or propaganda: If enough people are angry with what they see, they’ll have channels through which they can make their voices heard, and can act individually or collectively—even if, as we saw in Singapore, the anger is over the cooking of curry.
The challenges of governing the Internet also allow for the danger of online
vigilantism, as the story of China’s
“human-flesh search engines” (renrou sousuo yinqing) shows. According to
Tom Downey’s revealing March 2010 article in The New York Times Magazine, some years ago a disturbing trend emerged in China’s online space, where volumes of Internet users would locate, track down and harass individuals who had earned their collective wrath. (There is no central platform for this work, nor is the trend limited to China, but the phenomenon is most widely known and understood there, thanks to a series of high-profile examples.) In 2006, a gruesome video circulated on Chinese Internet forums depicting a woman stomping a kitten to death with her high-heeled shoes, leading to a countrywide search for the stomper. Through diligent crowd-sourced detective work, the perpetrator was soon tracked to a small town in northeastern China, and after her name, phone number and employer were made public, she fled, as did her cameraman. It’s not just computers that can find needles in haystacks, apparently; locating this woman among more than one billion Chinese—through only the clues in the video—
took just six days.
This kind of mob behavior can veer into unpredictable chaos, but that does not mean attempts to harness its collective power for good should be abandoned. Imagine if the end goal of the Chinese users was not to harass the kitten-stomper but to bring her to justice through official channels. In a conflict scenario, where institutions have broken down or are not trusted by the population, crowd-sourced energy will help to produce more comprehensive and accurate information, help track down wanted criminals and create demand for
accountability even in the most difficult circumstances.
But the importance and utility of crowd-sourced justice pale in comparison to the other modern development: data permanence. The exposure of atrocities in real time and in front of a global audience is vital, as is permanently storing that evidence and making it searchable for anyone who wants to refer to it (for prosecutions, legislation or later study). Governments and other aggressors may have the military advantage with guns, tanks and planes, but they’ll be fighting an uphill battle against the information trail they leave behind. If a government attempts to block citizen communications, it may be able to stifle some of the evidence flowing through and out of the country, but the flow will continue. More important, the presence of this evidence, even if disputed at the time, will affect how the conflict is handled, resolved and considered well into the future.
Accountability, or the threat of it, is a powerful idea; that’s why people try to destroy evidence. In the absence of hard data, conflicting narratives can impede justice and closure, and this applies to citizens and states alike. In January 2012,
Turkey became embroiled in a diplomatic row when the French Senate passed a bill (struck down one month later by the French Constitutional Council) that made it illegal to deny that the mass killing of
Armenians by the
Ottoman Empire in 1915 was genocide. The Turkish government, which rejects the term “genocide” and claims that far fewer than 1.5 million Armenians were killed, called the bill “
racist and discriminatory” and said judgment of the killings should be left to historians. With the technological devices, platforms and databases we have today, it will be much more difficult for governments in the future to argue over claims like these, not just because of permanent evidence but because everyone else will have access to the same source material.
In the future, tools like
biometric data matching,
SIM-card tracking and easy-to-use content-generating platforms will facilitate a level of accountability never before seen. A witness to a crime will be able to use his phone to capture what he sees and identify the perpetrator and victim with
facial-recognition software almost instantly, without having to be directly in harm’s way. Information about crimes or brutality in digital form will be automatically uploaded to the cloud (thus no data loss if the witness’s phone is confiscated) and perhaps sent to an international monitoring or judicial body. An international court could then investigate, and depending on what it found, begin a public virtual trial and broadcast the proceedings back into the country where the perpetrator was roaming free. The risk of public shame and criminal charges might not deter leaders, but it would be enough to make some foot soldiers think twice before engaging in more violent activities. Professionally verified evidence would be available at The
Hague’s website before the trial, and witnesses would be able to testify virtually and in safety.
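To make the chain from capture to courtroom concrete, consider a minimal sketch in Python of how witness footage could be made tamper-evident before it reaches a monitoring body. The function names and metadata fields here are our own illustrative assumptions, not any court's actual system; a real chain of custody would also involve digital signatures and trusted timestamping.

```python
import hashlib
from datetime import datetime, timezone

def seal_evidence(media_bytes, metadata):
    """Produce a tamper-evident record for a piece of witness footage.

    The SHA-256 digest commits to the exact bytes captured: any later
    alteration of the file changes the digest and is detectable.
    (Illustrative sketch only.)
    """
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "metadata": metadata,  # e.g. a GPS fix or device identifier
    }

def verify_evidence(media_bytes, record):
    """Check that the file on hand matches the sealed record."""
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]

video = b"...raw footage bytes..."
record = seal_evidence(video, {"location": "illustrative GPS fix"})
assert verify_evidence(video, record)             # untouched file passes
assert not verify_evidence(video + b"x", record)  # any edit is detected
```

The point of the sketch is that the witness does not have to be trusted after the fact: once the sealed record is in the cloud, even the witness cannot silently alter what was captured.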
Of course, the wheels of justice turn slowly, particularly in the labyrinthine environment of international law. While a system of data responsiveness develops, the intermediate gains will be the storage of verifiable evidence and, with it, better law enforcement. An open-source app, created by the
International Criminal Court or some other body, could feature the world’s most wanted criminals broken down by country. Just as the Chinese
human-flesh search engines can pinpoint an individual’s location and contact details, the same capability can be turned toward hunting down criminals. (Remember: People will have powerful phones in even the most remote places.) Using the same platform, concerned citizens around the world could contribute financially toward a reward as an incentive for making an arrest. Then, instead of facing mob justice, the criminal would be taken into custody by police and put on trial.
The collective power of the online world will serve as a tremendous deterrent to potential perpetrators of brutality, corrupt practices and even crimes against humanity. To be sure, there will always be truly malevolent types for whom deterrence will not work, but for merely dishonorable individuals, the potential costs of bad behavior in a digital age will become only greater. Beyond the heightened risks of accountability and the increased likelihood of a crime being documented and preserved in perpetuity, whistle-blowers will use technology to reach the widest possible audience. Defectors will have a far greater incentive to avoid accusations of complicity in these documented crimes as well. Perhaps a digital witness-protection program will be built to provide informants with new virtual identities (like the ones sold on the black market mentioned earlier) to offer further incentives for their participation.
Permanent digital evidence will also help shape transitional justice after a conflict has ended. Truth-and-reconciliation committees in the future will feature a trove of digital records, satellite surveillance, amateur videos and photos, autopsy reports and testimonials. (We’ll explore this topic shortly.) Again, the fear of being held accountable will be a sufficient deterrent for some would-be aggressors; at the very least they might dial back the level of violence.
Beyond documenting atrocities,
cloud storage will make
data permanence relevant and important to people in conflict. Personal data stored in the cloud rather than in the physical world will be safer, because it will be physically out of reach. Sometimes the outbreak of violence catches everyone by surprise. But in instances where the security situation is visibly declining, individuals will anticipate and prepare for the possibility of fleeing or being displaced. They will also be able to sustain their claims to their homes, property and businesses even in exile or as refugees by capturing visual evidence and using tools like
Google Maps and
GPS to mark boundaries. They’ll be able to move their land titles and deeds to the cloud. Where there are disputes, the digital platforms will assist in arbitration. Civilians caught up in conflict and forced to flee could take pictures of all of their possessions and re-create a model of their home in virtual space. If they return, they’ll know exactly what is missing and may well be able to use a social-networking platform to locate the stolen items (after they’ve digitally verified that they own them).
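A minimal Python sketch shows how GPS-marked corners could back up a property claim in later arbitration. The coordinates, the flat-earth degree-to-metre conversion and the function name are illustrative assumptions, reasonable only for small plots:

```python
import math

def polygon_area_m2(corners):
    """Approximate area of a small plot from (lat, lon) corners, in m^2.

    Converts degrees to metres around the plot's mean latitude, then
    applies the shoelace formula. A flat-earth approximation: fine for
    a house lot, not for a province.
    """
    lat0 = sum(lat for lat, _ in corners) / len(corners)
    m_per_deg_lat = 111_320.0                      # metres per degree of latitude
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(lat0))
    xy = [(lon * m_per_deg_lon, lat * m_per_deg_lat) for lat, lon in corners]
    s = 0.0
    for (x1, y1), (x2, y2) in zip(xy, xy[1:] + xy[:1]):
        s += x1 * y2 - x2 * y1                     # shoelace formula
    return abs(s) / 2.0

# A roughly 100 m x 100 m plot near the equator (0.0009 deg ~ 100 m):
plot = [(0.0, 0.0), (0.0009, 0.0), (0.0009, 0.0009), (0.0, 0.0009)]
area = polygon_area_m2(plot)
assert 9_000 < area < 11_000   # about one hectare
```

Stored alongside photographs and a land deed in the cloud, a record like this gives a refugee a verifiable description of the plot rather than a mere assertion.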
When conflict escalates into armed combat, future participants will find the landscape of war to be nothing like it has been in the past. The opening of a virtual front to warfare will not change the fact that the most sophisticated automated weapons and soldiers must still operate in the physical world, never eliminating the essential role that human guidance and judgment play. But militaries that do not take into account this dual-world phenomenon (and their responsibilities in both) will find that, while new technology makes them far more efficient killing machines, they are hated and reviled as a result, making the problem of winning hearts and minds that much more difficult.
The modern automation of warfare, through developments in robotics, artificial intelligence and
unmanned aerial vehicles (UAVs), constitutes the most significant shift in human combat since the invention of the gun. It is, as the military scholar
Peter Singer notes in his masterly account of this trend,
Wired for War: The Robotics Revolution and Conflict in the 21st Century, what scientists would call a “
singularity”—a “state in which things become so radically different that the old rules break down and we know virtually nothing.” Much as with other paradigm shifts in history (germ theory, the invention of the printing press, Einstein’s theory of relativity), it is almost impossible to predict with any great accuracy how the eventual change to fully automated warfare will alter the course of human society. All we can do is consider the clues we see today, convey the thinking of people on the front lines, and make some educated guesses.
Integrating information technology into the mechanics of warfare is not a new trend: DARPA, the Pentagon’s research-and-development arm, was created in 1958 as a response to the launch of Sputnik.
The government’s determination to avoid being caught off guard again was such that DARPA’s mission is, quite literally, “to maintain the technological superiority of the U.S. military and prevent technological surprise from harming our national security.” Subsequently, the United States has led the world in sophisticated military technology, in everything from smart bombs to unmanned drones and bomb-defusing
explosive-ordnance-disposal (EOD) robots. But, as we’ll discuss below, the United States may not hold that exclusive advantage for very long.
It’s easy to understand why governments and militaries like robots and other unmanned systems for combat: They never tire, they never feel fear or emotion, they have superhuman capabilities and they always follow orders. As
Singer points out, robots are uniquely suited to the roles that the military refers to as the three Ds (jobs that are dull, dirty or dangerous). The tactical advantages conferred by robots are constrained only by the limits of robotics manufacturers. They can build robots that withstand bullets, have perfect aim, recognize and disarm targets, and carry impossible loads in severe conditions of heat, cold or disorientation. Military robots have better endurance and faster reaction time than any soldier, and politicians will much more readily send them into battle than human troops. Most people agree that the introduction of robots into combat operations, whether on the ground, at sea or in the air, will ultimately produce fewer combat deaths, fewer civilian casualties and less collateral damage.
Already there are many forms of robots at work in American military operations. More than a decade ago, in 2002, iRobot, the company that invented the
Roomba robotic vacuum cleaner, introduced a ground robot called the
PackBot, a forty-two-pound machine with treads like a tank’s, cameras and a degree of autonomous functionality, that military units can equip to detect mines, sense chemical or biological weapons and investigate potential
IEDs (improvised explosive devices) along the sides of roads or anywhere else.
Another robotics manufacturer,
Foster-Miller, makes a PackBot competitor called the
TALON, as well as the first armed robot brought to battle: the
Special Weapons Observation Reconnaissance Detection System, or SWORDS. And then there are the aerial
drones. In addition to the now recognizable
Predator drones, the U.S. military operates smaller versions (like the hand-launched Raven drone, used for surveillance) and larger ones (like the
Reaper, which flies higher, faster and with a larger weapons payload than the Predator). An internal congressional report acquired by
the Danger Room blog in 2012 stated that drones now account for 31 percent of all military aircraft—up from 5 percent in 2005.
We spoke to a number of former and current Special Forces soldiers to gauge how they believed this progression of robotic technologies will affect combat operations in the next decades.
Harry Wingo, a Googler and former Navy SEAL, spoke to the usefulness of using computers and “bots” instead of humans for surveillance, and of
robots “taking point” in advancing through a field of fire or when clearing a building. In the next decade, he said, more “
lethal kinetics”—operations involving fire—“will be handed over to bots, including those like room-clearing that require split-second parsing of targets.” Initially, the robots will be operated with “machine-assist,” meaning a soldier will direct the machine from a remote location, but eventually, Wingo believes, “the bots will identify and engage targets.” Since 2007, the U.S. military has deployed armed
SWORDS robots that can semi-autonomously recognize and shoot human targets, though it is believed that they have not, as yet, been used in a lethal context.
Soldiers will not be left behind completely, and not all human functions will be automated. None of the robots in operation today operate fully autonomously—which is to say, without any human direction—and, as we’ll discuss later, there are important aspects of combat, like judgment, that robots will not be capable of exercising for many years to come. To better understand how technology will soon enhance the capabilities of human soldiers we asked a now inactive Navy SEAL, who, incidentally, participated in the
Osama bin Laden raid in May 2011, what he anticipated for combat units in the future. First, he told us, he envisioned units equipped with highly sophisticated and secure tablet devices that will allow soldiers to tap into live video feeds from
UAVs, download relevant intelligence analysis and maintain situational awareness of friendly troop movements. These devices will have unique live maps loaded with enough data about the surrounding environment—the historical significance of a street or building, the owners of every home, and the interior infrared heat movements captured by drones overhead—to provide soldiers with a much clearer sense of what to target and what to avoid.
Second, the clothes and gear that soldiers wear will change. Haptic technologies, which engage the sense of touch, will produce uniforms that allow soldiers to communicate through pulses, sending out signals to one another that result in a light pinch or vibration in a particular part of the body. (For instance, a pinch on the right calf could indicate a helicopter is inbound.) Helmets will have better visibility and built-in communications, allowing commanders to see what the soldiers see and “backseat drive,” directing the soldiers remotely from the base. Camouflage will allow soldiers to change their uniform’s color, texture, pattern or scent. Uniforms might even be able to emit sounds to drown out noises soldiers want to hide—sounds of nature masking footsteps, for example. Lightweight and durable power sources will be integrated as well, so that none of the devices or wearable technologies will fail at crucial moments due to heat, water or distance from a charger. Soldiers will have the additional ability to destroy all of this technology remotely, so that capture or theft will not yield valuable intelligence secrets.
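The signaling scheme imagined here amounts to a shared codebook: each pulse pattern and body location carries one prearranged meaning. A toy Python sketch, with codes and meanings invented purely for illustration:

```python
# Hypothetical haptic codebook: "pattern/body-location" -> meaning.
# None of these codes come from a real system; they illustrate the idea
# that a haptic uniform is just a low-bandwidth, silent signaling channel.
HAPTIC_CODEBOOK = {
    "pinch/right-calf": "helicopter inbound",
    "pulse/left-wrist": "hold position",
    "pulse/upper-back": "enemy contact to the rear",
}

def interpret(code):
    """Resolve a received haptic code, or flag it for repetition."""
    return HAPTIC_CODEBOOK.get(code, "unknown signal; request repeat")

assert interpret("pinch/right-calf") == "helicopter inbound"
assert interpret("tap/left-ankle") == "unknown signal; request repeat"
```

The design constraint is the interesting part: with only a handful of distinguishable pulses and locations, the vocabulary must be small, prearranged and unambiguous, much like hand signals today.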
And, of course, wrapping all of this together will be a hefty layer of cybersecurity—more than any civilian would use—that enables instant data transmission within a cocoon of electronic protection. Without security, none of the advantages above will be worth the considerable cost that will be required to develop and deploy them.
Alas, military contractors’ procedures will hold back many of these developments. In the United States, the military-industrial complex is working on some of the initiatives mentioned above—
DARPA has led the development of many of the robots now in operation—but it is by nature ill-equipped to handle innovation. Even DARPA, while relatively well funded, is predictably stalled by elaborate contracting structures and its position in the Department of
Defense bureaucracy. The innovative edge that is the hallmark of the American technology sector is largely walled off from the country’s military by an anarchic and byzantine acquisitions system, and this represents a serious missed opportunity. Without reforms that allow military agencies and contractors to behave more like small private companies and start-ups (with maneuverability and the option to move quickly), the entire industry is likely to retrench rather than evolve in the face of fiscal austerity.
The military is well aware of the problems. As
Singer told us, “
It’s a big strategic question for them: How do they break out of this broken structure?” Big defense projects languish in the prototype stage, over budget and behind schedule, while today’s commercial technologies and products are conceived of, built and brought to market in volume in record time. For example, the
Joint Tactical Radio System, which was supposed to be the military’s new Internet-like radio-communications network, was conceived of in 1997, then shut down in September 2012, only to have acquisitions functions transferred to the Army under what is now called the
Joint Tactical Networking Center. By the time it was shut down as its own operation, it had cost billions of dollars and was still not fully deployed on the battlefield. “
They just can’t afford that kind of process anymore,” Singer said.
One recourse for the military and its contractors is to use commercial, off-the-shelf (COTS) products, which means buying commercially available technologies and devices rather than developing everything in-house. The integration of such outside products, however, is not an easy process; meeting military specifications alone (for ruggedness, utilization and security) can introduce damaging delays. According to Singer, the bureaucracy and inefficiency of the military contracting system have actually generated an unprecedented degree of ground-level ingenuity in building functional work-arounds. Some involve buying quick-need systems outside the normal Pentagon acquisitions process; that is how MRAP
(mine-resistant, ambush-protected) vehicles were rushed to the front after the scourge of IEDs began in
Iraq. And troops often adapt commercial technologies that they take on a deployment themselves.
Even military leaders have recognized the advantages that such inventiveness can bring. “
The military was, in some ways, aided by the demands of the battlefields in Iraq and
Afghanistan,” Singer explained. “In Afghanistan, Marine attack helicopter pilots have taken to strapping iPads onto their knees as they fly, and using those for maps instead of the built-in system in their crafts.”
He added that as the pressure of an active battlefield ends, military leaders are worried that innovative work-arounds might evaporate. It remains to be seen if innovation will drive change in a problematic contracting system.
• • •
Technological breakthroughs have offered the United States major strategic advantages in the past. For many years after the first laser-guided missiles were developed, no other country could match their lethality over long distances. But technological advantages generally tend to equalize over time, as technologies are spread, leaked or reverse-engineered, and sophisticated weaponry is no exception. The market for
drones is already international:
Israel has been at the forefront of that technology for years;
China is very active in promoting and selling its drones; and
Iran unveiled its first domestically built drone bomber in 2010. Even
Venezuela has joined the club, utilizing its military alliance with Iran to create an “exclusively defensive” drone program that is operated by Iranian missile engineers. When asked to confirm reports of this program, the Venezuelan president
Hugo Chávez remarked, “
Of course we’re doing it, and we have the right to. We are a free and independent country.” Unmanned drones will get smaller, cheaper and more effective over time. As with most technologies, once a product is released into the environment—be it a drone or a desktop application—it’s impossible to put it back in the box.
We asked the former
DARPA director Regina Dugan how the United States approaches the high level of responsibility that comes with building such things, knowing that the ultimate consequences are out of its control. “
Most advances in technology, particularly big ones, tend to make people nervous,” she said. “And we have both good and bad examples of developing the societal, ethical and legal framework that goes with those kinds of technological advances.” Dugan pointed to the initial concerns people expressed about human genome sequencing when that breakthrough was announced: If it could be determined that you had a predisposition toward Parkinson’s disease, how would that affect how employers and insurance companies treated you? “What came to pass was the understanding that the advance that would allow you to see that predisposition was not the thing that we should shy away from,” Dugan explained, “but rather we should create the legal protections that ensure that people couldn’t be denied health care because they had a genetic predisposition.” The development of technological advances and the protections they will ultimately require must grow in tandem for the right balance to be struck.
Dugan described her former agency’s role in stark terms: “You can’t undertake a mission like the invention and prevention of strategic
surprise if you’re unwilling to do things that initially make people feel uncomfortable.” Rather, the obligation is to handle that job responsibly—which, critically, requires input and help from other people. “The agency can’t do it by itself. One has to involve other branches of government, other parties, in the debate about those things,” she said.
It is comforting to hear how seriously DARPA takes its responsibility for these new technologies, but the problem is, of course, that not all governments will approach them with similar consideration and caution. The proliferation of drones presents a particularly worrisome challenge, given the enormous benefits they bestow upon even the smallest armies. Not every government or military in the world has the technical infrastructure or human capital to support its own fleet of unmanned vehicles; only those with deep pockets will find it easy to buy that capability, openly or otherwise. Owning military
robots—particularly unmanned aerial vehicles—will become a strategic prerogative for every country; some will acquire them to gain an edge, and the rest will acquire them just to maintain their sovereignty.
Underneath this state-level competition, there will be an ongoing race by civilians and other non-state actors to acquire or build drones and robots for their own purposes.
Singer reminded us that “
non-state actors that range from businesses like media groups and agricultural crop dusting to law enforcement, to even criminals and terrorists, have all used drones already.” The controversial private military firm
Blackwater, now called
Academi, LLC, unveiled its own special service—
unmanned drones, available to rent for surveillance and reconnaissance missions—in 2007. In 2009, it was contracted to load bombs onto CIA drones.
There is also plenty of private development and use of drones outside the context of military procurement. For example, some real-estate firms are now using private drones to take aerial photographs of their larger properties. Several universities have their own drones for research purposes;
Kansas State University has established a degree for unmanned aviation. And in 2012 we learned about
Tacocopter (a service allowing anyone craving a taco to order on a smart phone, punch in his location and receive his tacos by drone), which proved to be a hoax, but is both technically possible and not far off.
As we mentioned earlier, lightweight and inexpensive “everyman” drones engineered for combat purposes will become particularly popular at the global arms bazaar and in illicit markets. Remotely piloted model planes, cars and boats that can conduct surveillance, intercept hostile targets and carry and detonate bombs will pose serious challenges for soldiers in war zones, adding a whole other dimension to combat operations. If the civilian version of armed drones becomes sophisticated enough, we could well see military and civilian drones meeting in battle, perhaps in
Mexico, where drug cartels have the will and the resources to acquire such weapons.
Governments will seek to restrict access to the key technologies making drones easy to mass-produce for the general populace, but regulating the proliferation and sale of these everyman drones will be very difficult. An outright ban is simply unrealistic, and even modest attempts to control civilian use in peaceful countries will have limited success. If, for example, the U.S. government required people to register their small unmanned aircraft, restricted the spaces in which drones could fly (not near airports or high-value targets, for example) and banned their transport across state lines, it’s not hard to imagine determined individuals finding ways around the rules by reconfiguring their devices, anonymizing them or building in some kind of stealth capacity. Still, we might see international treaties around the proliferation of these technologies, perhaps banning the sale of larger drones outside official state channels. Indeed, states with the greatest capacity to proliferate
UAVs may even pursue the modern-day version of the
Strategic Arms Limitation Talks (SALT), which sought to curtail the number of U.S. and Soviet arms during the Cold War.
States will have to work hard to maintain the security of their shores and borders from the growing threat of enemy UAVs, which, by design, are hard to detect. As autonomous navigation becomes possible, drones will become mini cruise missiles, which, once fired, cannot be stopped by interference. Enemy surveillance drones may be more palatable than drones carrying missiles, but both will be considered a threat since it won’t be easy to tell the two apart. The most effective way to target an enemy drone might not be with brute force but electronically, by breaching the UAV’s cybersecurity defenses. Warfare then becomes, as
Singer put it, a “
battle of persuasion”—a fight to co-opt and persuade these machines to do something other than their mission. In late 2011,
Iran proudly displayed a downed but intact American drone, the
RQ-170 Sentinel, which it claimed to have captured by hacking into its defenses after detecting it in Iranian airspace. (The United States, for its part, would say only that the drone had been “lost.”) An unnamed Iranian engineer told The Christian Science Monitor that he and his colleagues were able to make the drone “
land on its own where we wanted it to, without having to crack the remote-control signals and communications” from the U.S. control center because of a known vulnerability in the plane’s
GPS navigation. The technique of implanting new coordinates, known as
spoofing, while not impossible, is incredibly difficult (the Iranians would have had to get past the military’s encryption to reach the GPS, by spoofing the signals and jamming the communications channels).
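The logic of the attack can be illustrated with a toy autopilot in Python: the craft steers toward its waypoint using whatever position fixes it receives, so an attacker who can bias those fixes displaces the landing point by exactly the injected offset. Everything below is a deliberately simplified illustration, not real avionics or the actual Iranian technique.

```python
def step_toward(pos, target, speed=1.0):
    """Move one step of length at most `speed` from pos toward target (2-D)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

def fly(true_pos, waypoint, gps_fix, steps=200):
    """Toy autopilot that steers as if it were at gps_fix(true_pos).

    With an honest fix the craft reaches the waypoint; with a biased
    fix it dutifully 'corrects' the apparent error and ends up
    displaced by the injected offset.
    """
    for _ in range(steps):
        believed = gps_fix(true_pos)
        if believed == waypoint:
            break
        move = step_toward(believed, waypoint)
        # the autopilot applies the same displacement to the true position
        true_pos = (true_pos[0] + move[0] - believed[0],
                    true_pos[1] + move[1] - believed[1])
    return true_pos

waypoint = (100.0, 0.0)
honest = fly((0.0, 0.0), waypoint, lambda p: p)
spoofed = fly((0.0, 0.0), waypoint, lambda p: (p[0], p[1] + 30.0))
```

With an honest fix the craft arrives at its waypoint; with the fix shifted 30 units north, it touches down 30 units south of where it believes it is, the toy version of making the drone land where the attackers wanted it to.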
Diplomatic solutions might involve good-faith treaties between states not to send surveillance drones into each other’s airspace or implicit agreements that surveillance drones are an acceptable offense. It’s hard to say. Perhaps there might emerge international requirements that surveillance drones be easily distinguishable from bomber drones. Some states might join together in a sort of “drone shield,” not unlike the nuclear alliance of the
Cold War, in which case we would see the world’s first drone-based no-fly zone. If a small and poor country cannot afford to build or buy its own bomber drones, yet it fears aerial attacks from an aggressive neighbor, it might seek an alliance with a superpower to guarantee some measure of protection. It seems unlikely, however, that states without drones will remain bereft for long: The Sentinel spy drone held by the Iranians cost only around $6 million to make.
The proliferation of
UAVs will increase conflict around the world—whenever states acquire them, they’ll be eager to test out their new tools—but it will decrease the likelihood of all-out war. There are a few reasons for this. For one, the phenomenon is still too new; the international treaties around weapons and warfare—the
Nuclear Nonproliferation Treaty, the
Anti-Ballistic Missile Treaty, and the
Chemical Weapons Convention, to name a few—have not caught up to the age of drones. Boundaries need to be drawn, legal frameworks need to be developed and politicians must learn how to use these tools responsibly and strategically. There are serious ethical considerations that will be aired in public discourse (as is taking place in the United States currently). These important issues will lead states to exhibit caution in the early years of drone proliferation.
We must also consider the possibility of a problem with loose
drones, similar to what we see with nuclear weapons today. In a country such as
Pakistan, for example, there are real concerns about the state’s capacity to safeguard its nuclear stockpile (estimated at roughly a hundred warheads) from theft. As states develop large fleets of drones, there will be a greater risk that one of these could fall into the wrong hands and be used against a foreign embassy, military base or cultural center. Imagine a future 9/11 committed not by hijackers on commercial airliners, but instead by drones that have fallen out of state hands. These fears are sufficient to spur future treaties focused on establishing requirements for drone protection and safeguarding.
States will have to determine, separately or together, what the rules around
UAVs will be, and whether drones will be subject to the same rules as manned aircraft when it comes to violating sovereign airspace. States’ mutual fears will guard against a rapid escalation of drone warfare. Even when it was revealed that the
American Sentinel drone had violated Iranian airspace, the reaction in Tehran was boasting and display, not retaliation.
The public will react favorably to the reduced lethality of drone warfare, and that will forestall outright war in the future. We already have a few years of drone-related news cycles in America from which to learn. Just months before the 2012 presidential election, government leaks resulted in detailed articles about President
Obama’s secret drone operations. Judging by the reaction to drone strikes in both official combat theaters and unofficial ones like Somalia,
Yemen and Pakistan, lethal missions conducted by drones are far more palatable to the American public than those carried out by troops, generating fewer questions and less outrage. Some of the people who advocate a reduced American footprint overseas even support the expansion of the drone program as a legitimate way to accomplish it.
We do not yet understand the consequences—political, cultural and psychological—of our newfound ability to exploit physical and emotional distance and truly “dehumanize” war to such a degree.
Remote warfare is more prevalent now than at any other time in history, and it is only going to become a more prominent feature of conflict. Historically, remote warfare has been thought of mostly in terms of weapons delivered via missiles, but in the future it will be both commonplace and acceptable to further separate the actor from the scene of battle. Judging from current trends, we can assume that one effect of these changes will be less public involvement on the emotional and political levels. After all, casualties on the other side are rarely the driving factor behind foreign policy or public sentiment; if American troops are not seen to be in harm’s way, the public interest level drops dramatically. This, in turn, means a more muted population on matters of national security; both hawks and doves become quieter with a smaller threat to their own soldiers on the horizon. With more combat options that do not inflame public opinion, the government can pursue its security objectives without having to consider declaring war or committing troops, decreasing the possibility of outright war.
The forecast of fewer civilian casualties, less collateral damage and the reduced risk of human injury are welcome, but the shift toward a more automated battlefield will introduce significant new vulnerabilities and challenges. Chief among them will be maintaining the cybersecurity of equipment and systems. The data flow between devices, ground robots and UAVs, and their human-directed command-and-control centers must be fast, secure and unimpeded by poor infrastructure, just like communications between troop units and their bases. This is why militaries set up their own communications networks instead of relying on the local one. Until robots in the field have autonomous artificial intelligence, an impeded or broken connection turns these machines into expensive dead weight—possibly dangerous, too, since capture of an enemy’s robot is akin to capturing proprietary technology. There is no end to the insights such a capture could yield, particularly if the robot is poorly designed—not only information about software and drone engineering, but even more sensitive data like enemy locations gleaned through digital coordinates. (It’s also hard to imagine that countries won’t purposely crash-land or compromise a decoy UAV, filled with false information and misleading technical components, as part of a misinformation campaign.) In wars where robotic elements are present, both sides will employ
cyber attacks to interrupt enemy activity, whether by spoofing (impersonating a network identity) or employing decoys to disrupt enemy sensor grids and degrade enemy battle networks. Manufacturers will attempt to build in fail-safe mechanisms to limit the damage of these attacks, but it will be difficult to build anything technologically bulletproof.
Militaries and robotics developers will face simple error as well. All networked systems have vulnerabilities and bugs, and often the only way they become known is when they are revealed by hackers or independent security-systems experts. The computer code necessary to operate machines of this caliber is incredibly dense—millions upon millions of lines of code—and mistakes happen. Even when developers are aware of a system’s vulnerabilities, it isn’t easy to address them. The vulnerability the
Iranians said they attacked in bringing down the U.S.
drone, a weakness in the
GPS system, had reportedly been known to the Pentagon since the Bosnian campaign of the 1990s. In 2008, U.S. military troops discovered laptops belonging to Shiite insurgents in Iraq containing files of intercepted drone video feeds, which the insurgents had been able to access simply by pointing their satellite dishes upward and using
SkyGrabber, a $26 piece of downloadable software originally intended to help people pirate movies and music. The data links between the drone and its ground control station were never encrypted.
For the near future, as humans continue to drive the implementation of these technologies, mistakes will be made. Placing fragile human psyches in extreme combat situations will always generate unpredictability—and can trigger
PTSD, severe emotional distress or full psychotic breaks in the process. As long as human beings conduct war, these errors must be factored in.
Until artificially intelligent systems can mimic the capability of the human brain, we won’t see unmanned systems entirely replacing human soldiers, in person or as decision-makers. Even highly intelligent machines can have glaring faults. As
Peter Singer pointed out, during
World War I, when the tank first appeared on the battlefield, with its guns, armor and rugged treads, it was thought to be indestructible—until someone came up with the antitank ditch.
Afghanistan’s former minister of defense
Abdul Rahim Wardak, whom we met in Kabul shortly before he was dismissed, chuckled as he described how he and his fellow mujahideen fighters targeted
Soviet tanks in the 1980s by smearing mud on their windows and building leaf-covered traps similar to the ones the
Vietcong used to ensnare American soldiers a decade earlier. In a modern parallel,
Singer said, “
robots our soldiers use in
Iraq and Afghanistan [employ] an amazing technology, but insurgents realized they could build tiger traps for them—just deep holes that [they] would fall into. They even figured out the angle necessary for the incline so that the bot couldn’t climb its way out.” The
intelligence of these robots is specialized, so as they are tested in the field, their operators and developers will continually encounter enemy circumventions that they did not expect, and they’ll be forced to evolve their products. Asymmetric encounters in combat like these will continue to pose unpredictable challenges for even the most sophisticated of technologies.
Human intelligence contains more than just problem-solving skills, however. There are uniquely human traits relevant to combat—like judgment, empathy and trust—that are difficult to define, let alone instill in a robot. So what is lost as robots increasingly take over human responsibilities in battlefield operations? In our conversations with Special Forces members, they emphasized the supreme importance of trust and brotherhood in their experiences in combat. Some had trained and fought together for years, coming to know each other’s habits, movements and thought patterns almost instinctively. They described being able to communicate with just a look. Will robots ever be able to mimic a human’s ability to read nonverbal cues?
Can a robot be brave? Can it selflessly sacrifice? Can a robot, trained to identify and engage targets, have some sense of ethics or restraint? Will a robot ever be able to distinguish between a child and a small man? If a robot kills an innocent civilian, who is to be blamed? Imagine a standoff between an armed ground robot and a six-year-old child with a spray-paint canister, perhaps sent out by an insurgent group. Whether acting autonomously or with human direction, the robot can either shoot the unarmed child, or be disabled, as the six-year-old sprays paint over its high-tech cameras and sensory components, blinding it. Faced with this decision, if you were commanding the robot, Singer asks, what would you do? We can’t court-martial robots, hold them accountable or investigate them. Accordingly, humans will continue to dominate combat operations for many years to come, even as robots become more intelligent and integrated with human forces.
The advent of virtualized conflict and automated warfare will mean that states with aggressive agendas will have a wider range of tools available to them in the future. Interventions by other actors—citizens, businesses and governments—will diversify as well.
For states, the U.N. Security Council will remain the only international body that is both inclusive of all nations and capable of bestowing legality on state-led military interventions. It’s unlikely that the international community will stray far from the great-power dispensation of 1945 that established the United Nations, even with the vociferous calls of empowered citizen populations increasing the pressure on states to act. New mandates and charters for intervention will be almost impossible to pass, given that any amendment to the U.N. Charter requires approval by two-thirds of the member states, including all five permanent members of the Security Council.
But there are areas of high-level statecraft where new forms of intervention are more viable, and these will take place through smaller alliances. In an extreme situation, we foresee a group of countries, for example, coming together to disable an errant country’s military robots. We can also imagine some member-states of
NATO pushing to establish new mandates for intervention that could authorize states to send combat troops into conflicts to establish safe zones with independent and uncompromised networks. This would be a popular idea within intervention policy circles—it’s a natural extension of the
Responsibility to Protect (RtoP) doctrine, which the U.N. Security Council used to authorize military action (including air strikes) in Libya in 2011 that NATO subsequently carried out. It’s very possible that we will see NATO members contribute drones to enforce the world’s first unmanned no-fly zone over a future rebel stronghold, which would not involve sending any troops into harm’s way.
Beyond formal institutions like NATO, the pressure for action will find other outlets in the form of ad hoc coalitions involving citizens and companies. Neither individuals nor businesses are able to muster military force for a ground invasion, but they can contribute to the maintenance of the vitally important communications network in a conflict zone. Future interventions will take the form of reconnecting the Internet or helping a rebel-held area set up an independent and secure network. In the event of state or state-sponsored manipulation of communications, we’ll see a concerted effort by international stakeholders to intervene and restore free and uninterrupted access without waiting for formal authorization.
It’s not the connectivity that is crucial per se (civilians in conflict zones might already have some form of communications access) but rather what a secure and fast network enables people to do. Doctors in makeshift field hospitals will be able to coordinate quickly, internally and internationally, to distribute medical supplies, arrange airdrops and document what they’re seeing. Rebel fighters will communicate securely, off the government’s telecommunications network, at ranges and on platforms much more useful than radios. Civilians will interact with members of their families in the diaspora on otherwise blocked platforms and use safe channels—mainly an array of proxy and circumvention tools—to send money in or information out.
Coalitions of states could send the equivalent of Special Forces troops to help rebel movements disconnect from the government network and establish their own. Today, actions like these are taken, but in an independent, ad hoc fashion. A group of
Libyan ministers told us the story of a brave American soul called
Fred who arrived in the rebel stronghold of Benghazi in a wooden boat, armed with communications supplies and determined to help the rebels build their own telecommunications network. Fred eliminated the Gadhafi-era wiretaps as his first task. In the future, this will be a combat operation, particularly in places not accessible from the sea.
The composition of intervening coalitions will change in turn. States with small militaries but strong technology sectors will become new power players. Today,
Bangladesh is among the most frequent contributors of troops to international peacekeeping missions. In the future, it will be countries with strong technology sectors, such as
Chile today, that lead the charge in this type of mission. Coalitions of the connected will bring political will and digital weaponry: high bandwidth, jury-rigged independent mobile networks and enhanced cybersecurity. Such countries might also contribute to military interventions with their own robot and aerial-drone armies. Some states, particularly small ones, will find it easier, cheaper and more politically expedient to build and commit their own unmanned drone arsenals to multilateral efforts, rather than cultivating and deploying human troops.
NGOs, companies and individuals will also participate in these coalitions, each bringing something uniquely valuable to the table. Companies can build
open-source software tailored to the needs of the people inside a country, and offer free upgrades for all of their products. NGOs can coordinate with telecoms to build accurate databases of a given population and its needs, mapping out where the most unstable or isolated pockets are. And citizens can volunteer to test the new network and all of these products, helping to find bugs and vulnerabilities as well as providing crucial user feedback.
No matter how advanced our technology becomes, conflict and war will always find their roots in the physical world, where the decisions to deploy machines and cyber tactics are fundamentally human. As an equal-opportunity enabler, technology will enhance the abilities of all participants in a conflict to do more, which means more messaging and content from all sides, greater use of
robots and cyber weapons, and a wider range of strategic targets to strike. There are some distinct improvements, like the accountability driven by the permanence of evidence, but ultimately technology will complicate conflict even as it reduces risk on a net level.
Future combatants—states, rebels, militaries—will find that the tough ethical, tactical and strategic calculations they are used to making in physical conflicts will need to account for a virtual front that will oftentimes affect their decision-making. This will lead aggressors to take more actions in the less risky virtual front, as we described earlier, with online discrimination and hard-to-attribute cyber first-strike invasions. In other instances, the virtual front will act as a constraining force, leading aggressors to second-guess the degree of their aggression on the physical front. And as we will see even more clearly in the following pages, the mere existence of a virtual front paves the way for intervention options that are still robust, but minimize or reduce altogether the need to send troops into harm’s way.
Drone-patrolled no-fly zones and robotic peacekeeping interventions may be possible during a conflict, but such steps are limited. When the conflict is over, however, and the reconstruction effort begins, the opportunities for technology to help rebuild the country are endless.
If such an exception were made for the Israeli ultra-Orthodox on religious grounds, what kind of precedent would it set? What if ultraconservative groups in
Egypt followed suit, demanding a special white-listed Internet?
In policy circles, this is known as the
CNN Effect, and is most frequently associated with the 1992–1993 U.S. intervention in
Somalia. It’s widely believed that the images broadcast on television of starving and desperate Somalis prompted
George H. W. Bush to send in military forces, but when, on October 3, 1993, eighteen Army Rangers and two
Malaysian coalition partners were killed and images of one of the Americans being dragged through the streets of Mogadishu reached the airwaves, the American forces were withdrawn.
There is a start-up today called
Storyful that does this for many of the major news broadcasters. It employs former journalists and carefully curates content from social media (e.g., by verifying that the weather in a
YouTube video matches the weather recorded in that city on the day the video was supposedly shot).
Computer enthusiasts will remember this agency’s central role in creating the Internet, back when the agency was known as the
Advanced Research Projects Agency (ARPA).
Two PackBots were deployed during the
Fukushima nuclear crisis following the 2011 earthquake in
Japan, entering the damaged plant, where high radiation levels made it dangerous for human rescue workers, to gather visual and sensory data.
Singer’s statement was corroborated by several active-duty Special Forces soldiers we spoke to.