Author: Marc-André Ryter In the coming years, cyberspace will be confronted with the development of artificial intelligence. This will greatly increase the potential effectiveness of cyber attacks and will thus force all cyberspace users to adapt their protection. The armed forces in particular will face considerable challenges in maintaining their operational capabilities, although they will also be able to benefit
from artificial intelligence in many areas.

Introduction

The purpose of this article is to demonstrate the importance of cybersecurity as protection against the increased risks that will be generated by the development and integration of artificial intelligence (AI) in cyberspace. The article focuses on the technical dimension of the ongoing evolution and on what it implies for the armed forces. However, certain ethical considerations, or considerations concerning the place of man in a world of machines in the philosophical sense, will be unavoidable. What is happening in cyberspace can lead to fantastic progress but can also become a major threat at any time. The risks involved are likely to jeopardize international peace and security. This evolution is inevitable and a step backwards is unthinkable. The engagement of new technologies takes place in all areas of individual and societal life1, and the possibilities of AI are such that the benefits will justify many compromises. Progress in information technology is both hopeful and dangerous. The new tools can promote freedom and development, progress and well-being, but also oppression, fraud, manipulation and control. Technological advances will benefit everyone, from the powerful to the weak, from legal authorities to criminal groups. As a result, hierarchies and authority can be strengthened or weakened2. This will have a significant impact on the nature of future
conflicts. If we want to control the growth of new technologies and especially their applications, it is important to anticipate and integrate both risks and opportunities. AI will revolutionize the perception and implementation of cybersecurity, which will become a collective and permanent task. States and critical infrastructures are already victims of attacks aimed at obtaining all kinds of advantages, whether political, military or economic. The digital space is therefore already a real space for confrontation3
and cyber attacks have become the daily lot of a kind of new cold war4. Societies as a whole, due to their growing interconnections, will become increasingly vulnerable. Conflicts between States, companies or individuals could take forms that have hitherto been unknown. But one thing is certain: whether national or international, political or economic, conflicts will have

BIO

With a university diploma in Security Policies, after graduating in Political Sciences, Col. Marc-André Ryter is a collaborator of the General Staff of the Swiss Army for all topics concerning military doctrine and, since 2018, Head of the Department of Military Constructions. He follows and studies all the technological evolutions which may prove relevant for the armed forces and for different fields of operation, with a particular aim of adapting the military doctrine.

an increasing cyber dimension. The actor who succeeds in gaining the upper hand in the network will gain the upper hand over his opponent5. Cyber attacks will become both more accurate and faster. The cybersecurity that must be developed in response to this evolution must be seen as a construction of multiple and complementary elements that create a global security architecture for cyberspace. Military capabilities in this area can and should be an important part of this architecture. The scope of digital threats, especially when they work with AI, is not yet well understood. For the time being, it is usually necessary to extrapolate the risks for the armed forces from the risks for civilian areas, in order to imagine what bellicose actions an aggressor could take. That is why the armed forces, including the Swiss army, must be at the forefront of cybersecurity. This can be considered the first level of deterrence, in the sense of preventing an invasion of the territory by foreign armed forces. The latter could renounce their ground actions if they have been unable to weaken or paralyze the armed forces of the targeted country in advance of operations. Resilience, which aims at a rapid recovery of the capacity for action, must allow the engagement of means after, or despite, having been subjected to a cyber attack and having suffered some damage. It is therefore clear that the armed forces have a considerable interest in contributing to cybersecurity. The new technologies they will use in the future must be secure, because they must be able to fulfil their mission even in a degraded cyber environment.

Issues related to the development of artificial intelligence

The central challenge will therefore be to know how to manage the current evolution and its consequences for the armed forces, since this evolution is inevitable. Paradigm shifts must be identified, as they will effectively change the nature of societal life and of relationships between countries. The characteristics of tomorrow’s world will be volatility, uncertainty, complexity and ambiguity. Power will probably be projected in a new way, and conventional armed conflicts as we know them today will in the future only be peripheral phenomena. The challenge will be to find a way to make the most of the introduction of new technologies while protecting oneself from the major risks they create. In this context, cybersecurity will become a global responsibility of the State and of all other actors who can potentially benefit from opportunities in cyberspace, or who benefit from its proper functioning.

The original article by Colonel Marc-André Ryter, written in French, was published in the Military Power Revue of the Swiss Armed Forces (No 2/2018, pp.18-28). The Swiss Army has granted our publication the honour of translating it exclusively into English. We would like to extend our most sincere thanks to Divisionaire Urs Gerber, editor-in-chief of the Military Power Revue, and to Colonel Marc-André Ryter for his availability and support. All volumes of the Military Power Revue of the Swiss Armed Forces can be downloaded free of charge from the Swiss Army website: http://www.vtg.admin.ch/en/media/publikationen/military-power-revue.html

Overall, the main threat comes from the fact that all actions launched in cyberspace can lead to concrete damage in physical spaces and affect the civilian population, or even cause casualties. Threats that do not yet exist and will result from the development of new technologies and AI must be identified. The cyber threat must be placed on the same level as the threat from ballistic missiles or international terrorism, not to mention that these threats can be combined. Non-state actors will have the capacity to produce the same effects as state actors. However, it must be borne in mind that attacks that have the potential to undermine national security and the stability of a state require considerable
resources.

Figure 1:
Example of a large data storage centre in Iceland, which shows that cyberspace is based on physical infrastructure that can be targeted. (https://www.cio.com/article/2368989/data-center/100152-10-more-of-the-worldscoolest-data-centers.html#slide8, accessed 20.07.2018)

Systems must be protected as a whole. Without protection, an attacker can disrupt or interrupt many services essential for the population, such as water or electricity distribution, or retail in general. This can quickly lead to major social unrest. It is therefore not surprising that Russian cyber attacks in Ukraine in 2014 targeted power plants, banks, hospitals and transport systems, as well as the conduct of elections. Systems theorists assume that if 37% of an infrastructure is destroyed, it no longer works. According to the ”Revue Stratégique”, it is possible to assess the gravity of an attack according to 4 criteria7. First, it is necessary to (1) assess the damage that could be done to the country’s fundamental interests, before (2) analysing the resulting violations of internal security. It is then necessary to (3) estimate the damage to the population and the environment, and finally (4) the damage to the economy. It can already be said that AI will certainly play an important role in security, both in terms of defence capabilities and of those required for attack. For some security tasks, AI will most likely exceed human capabilities and lead to new breakthroughs. AI will also be able to control some new technologies to an extent that is beyond the capabilities of humans. The role of AI in machine learning / deep learning is essential. Rapid progress in a very wide range of applications is achieved through automatic learning, much more than through programming. AI will allow a fusion of the real and virtual worlds in order to obtain a better representation of the environment. This opens up new training opportunities based on the increased ability to collect data, analyse it very quickly and use it immediately.
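The four criteria above lend themselves to a simple scoring scheme. The following is a minimal, hypothetical sketch: the 0-5 scale, the equal weighting and all names are illustrative assumptions, not part of the ”Revue Stratégique” itself.

```python
# Hypothetical gravity scoring along the four criteria cited in the text:
# (1) fundamental interests, (2) internal security, (3) population and
# environment, (4) economy. Scale and weights are illustrative assumptions.

CRITERIA = [
    "fundamental_interests",
    "internal_security",
    "population_environment",
    "economy",
]

def attack_gravity(scores):
    """Each criterion rated 0 (no damage) to 5 (severe); returns the mean."""
    if set(scores) != set(CRITERIA):
        raise ValueError("all four criteria must be rated")
    for v in scores.values():
        if not 0 <= v <= 5:
            raise ValueError("ratings must lie in 0..5")
    return sum(scores.values()) / len(CRITERIA)

# Example: a hypothetical attack on power distribution
gravity = attack_gravity({
    "fundamental_interests": 4,
    "internal_security": 3,
    "population_environment": 2,
    "economy": 4,
})  # -> 3.25
```

In practice such a score would only be a starting point for assessment; the point of the sketch is that the four criteria are evaluated jointly, not in isolation.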

Figure 2: Improved image analysis capabilities with AI, which will allow for better preparation of operations. (https://publiclab.org/wiki/near-infrared-camera, accessed 30.05.2018)

It is to be expected that AI will increase threats in 3 different ways: it will expand and diversify existing threats, it will introduce new threats and it will change the character and nature of known threats8. Attacks will be more effective, more precise, more difficult to attribute, and will systematically exploit all vulnerabilities. These technologies also provide capabilities for the armed forces, such as the identification of human targets (military commanders) or autonomous navigation. It will be possible to create disinformation campaigns on a scale and with a degree of realism hitherto unknown.

Figure 3: Improvement of the possibilities of creating computer-generated images for misinformation. The original is on the left, the created image on the right. (https://interestingengineering.com/ai-software-generaterealistic-fake-videos-from-audio-clips, accessed 30.05.2018)

Consequently, increased efficiency will be one of the major characteristics of future attacks in cyberspace. This also means that they will be better and very carefully prepared9. Thanks to the engagement of AI, attacks will be personalized and tailor-made to achieve the desired goal. Faced with this new potential, a major challenge will be to find the best combination of man and machine. A certain level of simplicity must be maintained so that information can be used quickly by humans. Too much information can at some point prove counterproductive, at all levels, in all functions, and therefore also for the military. A kind of machine warfare, systems against systems, is not science fiction in the field of cybersecurity. Attacking machines must simultaneously be able to defend themselves against extremely fast counter-attacks from previously targeted systems. Three main factors concerning robots are relevant to security10. First, the development and deployment of robots, which are becoming widespread and a global phenomenon. Secondly, the adaptability of robots to the most varied uses, which is a real problem for security. And finally autonomy, which is the riskiest factor. It is indeed possible that the machine may want to free itself from the soldier’s control and accomplish the mission in the way that seems most rational to it. Human control could then appear to be the disruptive and dangerous element that must be eliminated to accomplish the mission. One of the greatest dangers of this evolution is that some decisions requiring very rapid and unquestionable reactions could be delegated to AI, such as the engagement of missiles or nuclear weapons11. AI will also increase the effectiveness of APT (Advanced Persistent Threat) cyber attacks, which are the most commonly used for industrial espionage.
These tools are very useful for spying on classified armed forces networks because they are difficult to detect. Thanks to these APTs, non-state actors will increasingly have the capacity to produce the same effects as state actors in the field of espionage. But AI also has a positive impact on increasing security. It opens up new perspectives on the ability to detect, counter and respond to cyber attacks, even when the vector of attack is hitherto unknown. It is already involved in detecting anomalies and malware in cyberspace.
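The anomaly detection mentioned above can be illustrated with a deliberately minimal statistical sketch: flag observations that deviate strongly from a learned baseline. Real AI-based defences use far richer learned models; the data, names and the 3-sigma threshold here are assumptions for illustration only.

```python
# Minimal sketch of baseline anomaly detection: flag traffic volumes that
# deviate strongly from the historical mean. Threshold and data are
# illustrative assumptions, not a real detection system.
from statistics import mean, stdev

def detect_anomalies(baseline, observed, threshold=3.0):
    """Return indices of observations more than `threshold` standard
    deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, x in enumerate(observed)
            if abs(x - mu) > threshold * sigma]

# Baseline: normal hourly connection counts; the third observation is a spike.
normal = [100, 104, 98, 101, 97, 103, 99, 102]
live = [101, 99, 450, 100]
suspicious = detect_anomalies(normal, live)  # -> [2]
```

An APT is precisely designed to stay below such thresholds, which is why learned, adaptive baselines rather than fixed ones are the subject of current work.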

Figure 4: Evolution of the complexity of the threat through APTs, with their components and phases.

Impact on armed forces

Faced with the considerable potential threat posed by the use of AI in
cyberspace, the armed forces must be ready to support civilian authorities
in the event of a major destabilization of society. In most cases, the risks to
the armed forces must be extrapolated from the risks to civilian areas, in
order to imagine what malicious actions an aggressor could take. A secure
environment in cyberspace will be needed so that positive developments
can be integrated and their potential does not generate disasters.
Considering the potential opportunities and risks, it is a question of defining
the best way for the armed forces to take advantage of opportunities while
protecting themselves from risks. First and foremost, they must ensure their
proper functioning in order to provide security services for the benefit of
the State.

Figure 5:

Example of cyber specialists from the Israeli armed forces (IDF). (https://www.blick.ch/storytelling/2018/zukunft/index5.html, accessed 08.08.2018)

A new debate will certainly begin. With regard to conflict and combat, ongoing developments and new technologies will emerge as means of causing more damage through greater accuracy and efficiency on target. These new systems will make it possible to destroy the opponent’s means more effectively. In this logic, new technologies are only used as vectors to improve what is already possible today. If we change the paradigm, it is obvious that we must ask ourselves whether an enemy can be defeated without even fighting it, by other means. Penetrating a network may even be more beneficial than introducing a new weapon system. The physical destruction of the adversary would thus be neither a necessity nor an end in itself, and the very nature of war will not only evolve but completely change. AI and cyber weapons have a high degree of flexibility that will allow their engagement to produce a

very wide spectrum of effects. Power will be projected
in a new way and conflicts as we know them will tend
to become peripheral phenomena, most often led by
intermediaries. In this kind of new struggle, the possible
consequences for civilian populations are not yet clear.

Evolution of the nature of conflicts and of armed forces

As a result, many questions arise about the changing nature of conflicts. The most fundamental question is whether cyber warfare will become the war of the future, even more so than the confrontation of robotic weapon systems. This would have very important consequences for doctrine. The adversary could be defeated by actions in cyberspace that would deprive him of the use of all his critical infrastructure and prevent him from acting. Secondly, it is not clear that nuclear weapons and the weapon systems with the most devastating effects will still be of any use. It may be possible to block the ability to engage these weapons through cyberspace, or to render them ineffective. Similarly, cyber weapons could, if necessary, cause as much damage as nuclear weapons. In such a case, the ability to develop actions in cyberspace will become the key to success, a kind of deterrent of the future. This implies that the intimidation of an adversary and the demonstration of force will take place via cyberspace, and via actions that, in a first phase at least, would probably have little or no impact on civilian populations. But in the case of the paralysis of entire sectors of a country’s economy, the deliberate triggering of technological or ecological disasters with many victims, or even gross manipulation of the democratic process, such events will likely have to be called acts of war. Defence in cyberspace must therefore be part of the toolbox of all armed forces if they are to fulfil their mission of defending and protecting civilian populations. The cyber dimension is an increasingly present dimension of all armed forces’ commitments, from planning to conduct, infrastructure and processes. All elements of cyberspace play a role as multipliers of the forces involved12. Of particular interest is the fact that cyber attacks are not only accurate, but also very fast (speed of light).
Western armed forces, which make extensive use of new technologies and are deployed around the world, are constantly confronted with the risks associated with the use of cyberspace-based technologies. Such use is often a prerequisite for the conduct of their operations, including peacebuilding or humanitarian commitments in remote areas. The importance of cyberspace in the planning and conduct of operations is often underestimated despite the fact that many military objectives can be achieved

there. First, it is possible to disrupt and interrupt data exchanges (communications of all kinds), prevent access to data, or corrupt data or damage it by infecting it or rendering it unusable. But the objective may be more simply to destroy certain data in computers, or even complete databases, or to block networks. In general, it is possible to consider the implantation of malware in an opponent’s computer systems as insurance in the event of an attack, or as long-term preparation for a case of dispute. In this case, there is a strong interest in the attack not being detected. On the other hand, if a state uses weapons on a large scale in cyberspace, with the intention of causing significant damage to its opponent, it does so with a certain intention, and therefore does not necessarily have an interest in remaining hidden. On the contrary, it will reveal itself in order to obtain what it is looking for13.

Artificial intelligence as a challenge to the armed forces

Attacks in cyberspace will increasingly be used because time, distance, speed and tempo no longer play a major role, no longer constitute constraints, and because they can also be automated. It is possible that the effects produced by actions in cyberspace against unconventional opponents may be more difficult to achieve, or more limited, if these opponents do not base their conduct and weapons systems on computer networks. In the case of Ukraine in 2014, the maintenance of some manual controls made it easier to react after a cyber attack on the power generation infrastructure. The cyber attacks on Estonia, Georgia and Iran were not perceived by these countries as a military attack as such and therefore had no impact on their armed forces. But this is a matter of interpretation, since the destruction of key capabilities may appear to be an attack on the entire country. It is also necessary to consider whether the importance of cyberspace differs throughout the phases of a conflict or whether, on the contrary, this importance persists, possibly even after the end of hostilities. It would be wrong to consider cyberspace as an operating space used only or mainly during the initial phases of a conflict, and to think that the opportunities and thus the importance of cyberspace decrease during the conflict. Despite the above considerations, there are still some authors who believe that the threat of cyberwar is not a very serious threat to the armed forces. They perceive actions in cyberspace as parallel measures to the traditional or even hybrid conflict, with exchanges in the cyber domain being described as skirmishes. That is why it is necessary to look in detail at the impact that AI is likely to have on the armed forces. The revolutionary security consequences of AI will force the armed forces to make

an adaptive leap. They will no longer be able to be satisfied with qualitative evolutions to adapt to new products. In particular, AI and machine learning will have decisive implications for cybersecurity and cyberwarfare. As far as the conduct of a conflict is concerned, a qualitative leap must be expected at least equivalent to the one that occurred with the emergence of aviation. More than in the conduct of dynamic combat, armed forces with AI assistance will have an important advantage in fulfilling their mission of societal security. AI will analyze and merge huge volumes of data. In the field of intelligence in particular, activities that currently require many people can be automated. The armed forces will have an increased capacity to collect data, analyze it very quickly and use it immediately. The struggle for superiority in the information space will become fierce, because it will be decisive. AI will generate training opportunities in worlds that combine real information and virtual elements and will allow an unparalleled level of operational readiness, which can also be achieved remotely. AI can thus prove to be a valuable instrument in the initial phase of conflicts, or when an actor is unwilling or unable to commit troops to achieve its objective. Espionage of opposing systems, as a preparatory measure for combat, will become increasingly important and will be the first step in a war. This information gathering will serve as a basis for subversion activities, such as mobilization for political purposes and influencing public opinion, including manipulating information. It will also be used to prepare for the sabotage of facilities, weapons systems and critical infrastructure, and for the disruption or modification of the nature of communications14, all actions that weaken the adversary and thus prepare the battlefield.
On the other hand, it would be wrong to predict that all wars will systematically begin with a cyber war phase since isolated actors will always be able to act directly by resorting to violent actions. In the field of materials and equipment in general, AI will also have an impact. It will make it possible to control new technologies much better than humans would. New robot capabilities combined with artificial intelligence will allow hits to the nearest centimetre15. The programming of sensors, especially when they can then trigger an automatic response, will be improved and become crucial and sensitive. Automation coupled with AI will allow the development of “fire and forget” with applications to a very large number of systems, hence the need to develop in parallel the possibilities of human control over the engagement of these weapons. The armed forces will want to make full use of the advantages of machines in the conduct of combat. At the same time, we are also talking about materials that are not only lighter and more insulating, but that could also change shape and color. If these materials are combined with artificial intelligence, there is great potential for their use, particularly in camouflage, but also to improve soldiers’ performance.

The possibilities of Artificial Intelligence in conducting military
operations

The combination of the mass of available data and AI in cyberspace will weaken the will to resist from a psychological point of view. This capability will be useful for both defence and attack. In general, the design of attacks will increasingly be precisely tailored to the objective in order to ensure its effectiveness. Thus, in the field of misinformation, the possibilities will be increased and the dissemination will be done at a very fast rate. Therefore,

AI will become an essential support for the conduct of operations, mainly because of its high analytical potential. AI opens new dimensions to these potential improvements, especially in the search for vulnerabilities of opposing systems.

Figure 6:

More accurate and automated interpretation of military reconnaissance photos. (https://www.38north.org/2017/10/udmh102517/, accessed 30.05.2018)

It will also make it possible to develop support for combatants through autonomous systems. The extension of the ability to carry out missions autonomously will be a feature of the future form of combat. Humans, through AI, will try to react to changes faster and better than the opponent, which would bring a substantial gain on the battlefield. AI will make it possible to fight a multitude of systems simultaneously, in both the real and virtual worlds, and to counter a succession of close and different types of attacks. It will be able to coordinate defences, set priorities and act extremely quickly. Depending on the objectives, AI will allow a better analysis of potential targets, mainly by cross-checking information. This also applies to image analysis connected to other data, for example to identify critical infrastructures and the best time to attack them, whether through cyber attacks or conventional attacks. But AI can also support troops in less technical aspects, for example in the field of psychology, with the analysis of violent content that undermines the morale of the troops. In the field of physical protection, it reduces risks, particularly for reconnaissance in enemy areas or for mine clearance, which can be carried out by robots. It will be possible to preserve human beings by assigning 3D tasks (dirty, dull, dangerous) to machines equipped with AI. Decision-making can also be improved with AI, as can the connection of different units, operational simulation for the preparation of operations, increasing the efficiency and performance of combat systems, and facilitating logistics support functions, especially in maintenance. As a result, computers with AI will increasingly become weapon systems as such. Cyber weapons will thus become a kind of super weapon for disrupting16 and weakening the target.
However, there will always be a big difference between security, which can be improved in urban areas or certain infrastructures such as stations

or airports, where a large number of sensors such as surveillance cameras are installed and whose images are available and can be analysed, and dynamic combat in an environment where no fixed sensors are available. During an operation, the large-scale installation of sensors will continue to be an obstacle given the scale of the action, which will always leave room for surprise. Due to the increasing integration of AI into armed forces systems, the battlefield will undergo several changes. First, there will be a large-scale replacement of humans by autonomous robots in all areas, from logistics to combat robots. Of course, these robots will have different degrees of autonomy depending on their tasks. It will be a matter of determining, for each task, what minimum degree of human control will be required. In the long term, combat aircraft will become partially obsolete and can be replaced by swarms of drones for certain tasks. This trend will be supported by the sharp drop in the cost of drones. For ground forces, a countermeasure will have to be found so that they can react to an attack by hundreds or thousands of drones. Similarly, improvised explosive charges will become precision weapons, multiplying the potential of any terrorist group, for example with a car bomb travelling autonomously. The potential of armed groups to destabilize a society as part of a hybrid strategy will also be increased. These groups could use robots to carry out large-scale automated assassinations of military, political and economic officials. However, AI will also be able to support the disarming of these
groups by identifying suspicious transactions and, more generally, by identifying potential terrorists through the analysis of converging indications. Passive and active defences can also be improved for high-value targets such as critical infrastructure (armour, active defences, UAV radar, etc.). The more or less automated learning of AI is a central subject, which can prove to be a great vulnerability as soon as an aggressor succeeds in causing erroneous learning. This can be achieved by adding disruptive elements, such as false information on which learning is based, or by slightly changing the machines’ perception of the environment to cause different, potentially dangerous behaviours. For example, friend and foe recognition can be reversed in the learning phase, or autonomous weapon systems can be taught to target civilians specifically. Automated learning will become a target in itself, as it will be interesting to sabotage the opponent’s potential from this phase onwards, to slow down the improvement capabilities of autonomous weapon systems, and to spy on or prevent the assumption of more and more important functions by robots with AI17. Despite this, AI will force us to give more autonomy to the machines, in order to allow them to use their potential, especially with regard to the speed of their

reactions. In this sense, the human being will become a slowing factor. If we limit the autonomy and speed of the machines to guarantee better control, we take the risk that the opponent will be faster. There will therefore be no choice and there is a real risk that humans will become spectators of interactions between machines, up to and including battles between machines. The example of drones is revealing. The exponential increase in their number for the most diverse tasks is problematic because it will increase the risk of uncontrolled actions. Ultimately, it will be a matter of identifying weapon systems that cannot operate without AI support, in order to take appropriate protective measures and ensure redundancy of these systems.
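The learning-phase poisoning described above, where friend and foe recognition is reversed during training, can be shown with a toy example. The nearest-centroid "model", the data and the labels below are purely illustrative assumptions; real systems and attacks are far more sophisticated, but the mechanism is the same: flipped training labels reverse what the machine learns.

```python
# Toy illustration of training-data poisoning: flipping the labels of
# training examples reverses what a simple classifier learns.
# Model, data and labels are illustrative assumptions only.

def centroid(points):
    """Component-wise mean of a list of feature tuples."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def train(samples):
    """samples: list of (features, label) -> one centroid per label."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Return the label whose centroid is closest to x (squared distance)."""
    return min(model, key=lambda y: sum((a - b) ** 2
                                        for a, b in zip(x, model[y])))

# Clean data: "friend" signatures cluster near (0, 0), "foe" near (10, 10).
clean = [((0, 1), "friend"), ((1, 0), "friend"),
         ((9, 10), "foe"), ((10, 9), "foe")]
# Poisoned copy: an attacker flips every label during the learning phase.
poisoned = [(x, "foe" if y == "friend" else "friend") for x, y in clean]
```

With the clean model, a signature near (0, 0) is classified as "friend"; after training on the poisoned data, the very same signature is classified as "foe", which is exactly the reversal the text warns about.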

The vulnerabilities involved in artificial intelligence

The increasing integration of AI into the systems and processes of armed forces also raises many questions about possible negative effects and the measures needed to keep the risks under control. The ultimate aim is to prevent military systems or processes from being diverted from their primary use. In a very general way, the question is how a degraded cyber environment can interfere with the ability of armed forces to accomplish their mission, from mobilization to combat. It is possible that technological developments, or their integration into the armed forces, may be hindered because of the risks involved. As soon as we talk about more or less automated armaments and control, we must be able to guarantee that control is maintained, and that a takeover or reprogramming by the adversary is excluded. It will therefore be particularly important to strengthen the security and protection of systems18. Vulnerabilities will be actively sought and inevitably attacked. There will be significant but unavoidable constraints when integrating AI systems into the armed forces. An important challenge will be to find the best combination of man and machine, in order to make the best possible use of the potential of AI and of synergies. It is a question of linking, as quickly as possible, the identification of opportunities and the decision on the best option for action, in any case faster than the opponent, and this in a repeated way19. Above all, it is important to prevent the machine from neutralizing the human or his control functions if it disagrees, for whatever reason, with the human’s decision. This will certainly not always be easy, especially in areas where AI will surpass human intelligence. Humans should strive to keep the safety of populations at the core of AI’s concerns, in all circumstances. For the time being, AI still depends on learning factors that are at least initiated by humans.
The ability of AI to react to and resolve new and unexpected situations, which is essential for the armed forces, will be a priority area for development. However, this also shows that it will be necessary to develop a new, specific arms-control regime for autonomous weapons, especially those with lethal effects. We must ensure that humans can immediately disable an autonomous system that deviates from its mission. AI could cause unexpected problems due to unpredictable effects arising from the speed and accumulation of interactions. In the context of this inevitable evolution, the ultimate responsibility for the engagement of autonomous weapons must in any case remain with humans; there must be no abdication of responsibility20. One of the greatest dangers inherent in this evolution is that some decisions requiring very rapid and unquestioned reactions could be delegated to AI, such as the engagement of missiles or nuclear weapons21. AI could thus lead to a new arms race, as all actors will want to improve their weapon systems with the potential of AI. The increasing use of AI will therefore be an inevitable evolution, pursued first of all for its benefits, but also with all its consequences in the field of protection. Humans will become less decisive; at the same time, machines will dictate the tempo of combat in all operating domains and, above all, will directly influence the outcome. Humans will have to find a way to keep control over the process and, where necessary, set limits on machines or their assistance. A war fought with self-guided missiles, unmanned battle tanks and armed drones, possibly guided from a distance, is an option that must be considered. We could thus see a kind of "dehumanization" of future wars. In the conduct of operations, denying the enemy's communications has always played an essential role, but this has a positive impact only if one can at the same time ensure one's own communications.
It can therefore be assumed that the elimination of the opponent's forces in order to force a decision will lose still more importance in favor of the ability to reach the heart of the enemy system, its centres of gravity, or to disrupt it in order to paralyze it. The ability to recognize windows of opportunity will be increasingly important and decisive.

Figure 7: Radars are used to continuously monitor activities, including attacks in cyberspace. (https://www.journaldugeek.com/2015/01/16/pal-lacyberguerre-cest-pour-demain/, accessed 15.08.2018)

Windows of opportunity can be created by "blinding" or misleading the opponent. The communication systems of opposing armed forces will therefore increasingly become essential critical infrastructures. Attacking and destroying them would surely paralyze and thus defeat the adversary in a world where mastery of information, and the ability to manipulate it, will represent power. Superiority in the information space will be achieved through an increased ability to collect data, analyze it very quickly and use it immediately on the battlefield. There will be more sources, and it will be easier to spread false information or influence the opponent's decisions. Propaganda, deception and social engineering, i.e. large-scale psychological manipulation, will become more effective. This will also serve to confuse the opponent by disrupting his perception of the situation more surely than by destroying his equipment and weapon systems. Vertical command structures will

adversary. Due to the Swiss army's commitments abroad, it will also be necessary to identify the possibilities and limits of actions in cyberspace against unconventional opponents, and to find the best possible way to ensure the cybersecurity of troops engaged in unfavourable environments. In summary, there are eight main developments, driven by cyberspace and AI, that will affect the Swiss army and that will therefore have to be monitored and integrated into the armed forces' development planning: (1) the increase in the number of robots, in all fields; (2) the abandonment of certain weapon systems, replaced by new technologies; (3) new combat methods resulting from new technologies; (4) new weapons; (5) new power criteria and instruments; (6) the increasing autonomy of machines; (7) new problems related to machines; and finally (8) new targets.

Conclusions

Cyber defence has become an absolute necessity to protect not only the population and infrastructure, but also societal and democratic life. Cyber attacks are the main threat, although this threat has so far been masked by the visibility of terrorism. New vulnerabilities appear faster than old ones are eliminated. One of the major problems will be the cost of broad-spectrum protection.

Figure 8: Importance of infrastructure for the proper functioning of cyberspace, such as networks transiting through Iceland. (https://askjaenergy.com/2013/02/11/data-centres-in-iceland/, accessed 20.07.2018)

Developments in cyberspace and the increasing use of AI in this operating space will change the parameters to which we are accustomed, on both the civilian side and the military side. New threats in cyberspace do not replace the old ones; they come in addition to those already known, which will remain available instruments in the range of actions, particularly for non-state actors: attacks of all kinds, hijackings of planes and boats, and so on. The armed forces are only one of the partners and can contribute to the management of a cyber crisis with their resources, as far as legal provisions allow. They therefore do not have a leading role in this area, or at least should not have one as long as civilian bodies fulfil their role. This does not diminish the need for them to ensure their ability to deliver the expected services through effective protection of their systems, and thus to quickly reconsider priorities for resource allocation. The level of security of state systems, including those of the armed forces, is a central issue, and a high level is essential. AI will create new capabilities and make them affordable for more actors. Geopolitical characteristics such as the size of the territory, population and natural resources will no longer be the only attributes of a country's power. The revolution generated by new technologies will diffuse and redistribute power, most often to the benefit of smaller and currently weaker players24. Smaller but AI-advanced states will have a greater impact. The main danger will undoubtedly be an imbalance in resources, which could push a state towards an aggressive attitude if it is almost certain of winning or of holding superiority in cyberspace. There will also be a need for a balance in the area of cyber weapons and AI support that would allow, like nuclear deterrence, a new form of cyber deterrence. Regulation of cyberspace is therefore essential to foster cooperation and enable the repression of abuses.
States will have to equip themselves with the means to play a normative role in cyberspace, which seems rather difficult at the moment. If the gap between rich and poor states continues to widen, major crises are to be expected. Poor countries will increasingly have the means to react against abuses of the global economic system and will be able, with relatively limited means, to launch aggressive actions in cyberspace against those perceived as oppressors or profiteers. This could generate significant risks of destabilization and conflict. The issue of liability must be resolved as soon as possible. In the event of a major problem, and precisely because of the difficulty of identifying an aggressor, it must be possible to attribute responsibility for the facts: to the owner, the programmer, the builder, the distributor, or even possibly the state. This is one of the most important open questions in an increasingly automated world. The difficulty of attribution is a central problem, although it can also be considered a moderating factor, since it forces the party attacked to remain measured in its reaction. It will nevertheless require a solution, as attacks will become ever more effective, more precise, more difficult to attribute, and will target all vulnerabilities; there will also be more actors capable of carrying out such attacks. Improved security will result not only from the need to do things better, but also from the need to do them differently. Changes of behavior to reduce and possibly eliminate risks will be necessary. Experimentation is a phase that will become increasingly important, both to verify possible security gains and to search for vulnerabilities in order to eliminate them. Thanks to its growing autonomous learning capabilities, AI could in the fairly near future be able to develop and ensure the cybersecurity of a system in a completely autonomous way, i.e. without any human input.
However, humans will retain a role in the machine-learning process. A lack of interaction could lead machines to develop in the wrong direction and could thus reduce the effectiveness of security. The importance of the human being, and therefore of the soldier, will not disappear, despite the many fears that accompany the development of AI.

The real risk would be to separate humans and machines, which would inevitably lead to conflicts. It will be essential that military systems become sufficiently secure to compensate for the widespread increase in the possibilities of the digital threat. It is imperative that all concerned are aware of the seriousness of the threat. Risk in cyberspace is a systemic risk and is not limited to potential targeted or limited attacks. In this context, cybersecurity is a deterrent not only against cyber attacks, but also against attacks in other operating domains, since the weakening of an adversary in cyberspace can also serve as preparation for an open conflict. We will have to get used to an environment in which cyber attacks are commonplace and systematically attempted, which could jeopardize the stability of the entire cyberspace. As a result, there may well be more and more clashes between systems, all supported by AI, each attempting to exploit the other's vulnerabilities. Security in the virtual world will be the basis for protecting the real world. However, it may be that the potential extent of the damage, including collateral damage, is such that it leads attackers to exercise caution, especially in the case of states. It is indeed very difficult to estimate the consequences of cyber attacks that would indiscriminately affect all computers and systems in a country, and whose collateral effects are by definition unpredictable. This danger is quite conceivable with autonomous weapons based on AI and no longer controlled by humans. The armed forces, and the Swiss army in particular, have no choice but to protect themselves effectively against cyber attacks and the spread of new technologies if they are to maintain their ability to fulfil their mission of protecting the population and the national territory. It is impossible to imagine armed forces without a cyber component. Taking the new challenges of cyberspace and AI lightly exposes societies to existential risks. Measures can and should be taken, often themselves supported by AI, to strengthen security in cyberspace. There are six components of effective cyber protection: (1) prevention, (2) anticipation, (3) protection, (4) detection, (5) attribution and (6) response. Prevention rests above all on the training of Internet users and the media. Anticipation implies knowing the threats, in order to be able to prepare to counter them, while protection covers all the measures that complicate the task of the attackers. Detection must be made possible by measures taken within the systems themselves; these measures will become more important, although they may also become more complicated with the development of cryptography. Attribution requires specific means of signal analysis. Finally, response concerns above all the means of resuming and continuing activities, as well as measures against the attacker itself.
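To make the "detection" component concrete, the following is a purely illustrative sketch, not taken from the article, of one of the simplest measures a system can take internally: counting failed login attempts per source address and flagging those that exceed a threshold. The log format, function name and threshold value are all assumptions for the sake of the example.

```python
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # assumed policy value, purely illustrative

def detect_bruteforce(log_lines, threshold=FAILED_LOGIN_THRESHOLD):
    """Return source addresses with more failed logins than `threshold`.

    Each log line is assumed to look like: "<timestamp> FAILED <source_ip>".
    """
    failures = Counter(
        line.split()[2] for line in log_lines if " FAILED " in f" {line} "
    )
    return sorted(ip for ip, count in failures.items() if count > threshold)

# Hypothetical log excerpt: one address fails six times, another twice.
logs = (
    ["2025-01-01T00:00 FAILED 198.51.100.7"] * 6
    + ["2025-01-01T00:10 FAILED 203.0.113.2"] * 2
)
print(detect_bruteforce(logs))  # → ['198.51.100.7']
```

Real detection systems are of course far more sophisticated (and, as the text notes, increasingly AI-supported), but the principle is the same: the system itself records and evaluates events that reveal an attack in progress.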

1 Ryter, Marc-André: La 4ème révolution industrielle et son impact sur les forces armées, MPR I/17, pp. 50-62.
2 Rid, Thomas: The Rise of the Machines, Scribe Publications, London, 2016, p. 298.
3 Ryter, op. cit., pp. 58-60.
4 Straub, Jeremy: Artificial Intelligence is the weapon of the next Cold War, The Conversation, 29.01.2018, available at https://theconversation.com/artificial-intelligence-is-the-weapon-of-the-next-cold-war-86086, p. 2.
5 Anil, Suleyman: How to integrate cyber defence into existing defence capabilities, in Angeli, Franco (ed.): International Humanitarian Law and New Weapon Technologies, International Institute of Humanitarian Law, San Remo, 2012, p. 152.
6 Anil, op. cit., p. 149.
7 Revue Stratégique de Cyberdéfense, Secrétariat Général de la Défense et de la Sécurité Nationale, Paris, 12.02.2018, available at http://www.sgdsn.gouv.fr/uploads/2018/02/20180206-np-revue-cyber-public-v3.3-publication.pdf, p. 81.
8 Brundage, Miles (et al.): The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation, February 2018, 99 p., available at https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf.
9 Ibid, p. 21.
10 Ibid, p. 39.
11 Straub, op. cit., p. 3.
12 Arquilla, John and Ronfeldt, David: Cyberwar is coming!, Comparative Strategy, vol. 12, no. 2, 1993, p. 39.
13 Dannreuther, Roland: International Security: The Contemporary Agenda, Cambridge, Polity Press, 2013, p. 266.
14 Brundage, op. cit., p. 43.
15 Rid, op. cit., p. 296.
16 Douzet, Frédérick: Cyberguerres et cyberconflits, in Badie, Bertrand and Vidal, Dominique: Nouvelles Guerres, L'état du monde 2015, La Découverte, Paris, 2014, pp. 111-117.
17 Allen, Greg and Chan, Taniel: Artificial Intelligence and National Security, Harvard Kennedy School, Belfer Center for Science and International Affairs, July 2017, p. 46.
18 Villani, Cédric: Donner un sens à l'intelligence artificielle: pour une stratégie nationale et européenne, Mission parlementaire du 8 septembre 2017 au 8 mars 2018, Paris, March 2018, p. 221, available at aiforhumanity.fr.
19 Rid, op. cit., p. 300.
20 Brundage, op. cit., p. 42.
21 Straub, op. cit., p. 3.
22 Revue Stratégique de Cyberdéfense, op. cit., p. 52.
23 Straub, op. cit., p. 5.
24 Arquilla and Ronfeldt, op. cit., p. 26.
