“If you know the enemy and know yourself, you need not fear the result of a hundred battles.”

~ Sun Tzu, The Art of War

Authors: Anastasios Arampatzis (AKMI Educational Institute) and Justin Sherman (Duke University)

In today’s increasingly connected world, we become more vulnerable by the day. Cybercriminals, spies, and hostile nation-states are just some of the threat actors that make up the modern-day cyber battlespace. Despite this, most security policies remain focused on technical controls, and in doing so largely neglect the fundamental biases, both cognitive and cultural, that shape human behavior. In this paper we break practical human security down into three sections, drawing on cyber security, cyberpsychology, systemic theory, social engineering, Bloom’s taxonomy, learning theory, behavioral economics, and the decision sciences to understand how organizations can design security around – and for – the human. We first examine the enemy, then the human, and then provide actionable steps organizations can take to practically strengthen their cyber security postures.

Part I: The Enemy


The news is overwhelmed by stories of security incidents and data breaches that have exposed the secret, sensitive, and personal data of millions. While many cyber security professionals characterize 2017 as the worst year for incidents to date, predictions for 2018 are even worse. In addition to a rapidly changing threat landscape, emerging technologies such as AI, machine learning, and quantum computing are fundamentally changing the way we approach cyber security – which makes staying “ahead of the game” even more difficult.

In reality, we can view the cyber landscape as a battlespace where one key focus of conflict is information. Indeed, in today’s information-driven society, power comes from the accurate and timely ownership and exploitation of data.

The epicenter of this battlespace is us, the human being. Many reports identify humans as the weakest security link in the cyber domain, vulnerable to countless methods of deceit and exploitation. Because we are affected by the digital situations around us, our responses in the cyber realm are not instinctive and “logical,” but are instead fundamentally shaped by each of our individual beliefs, biases, and education.

Thus, we present our paper – centered around the human element of cyber security. Using Sun Tzu’s famous Art of War saying as a foundation, we will first discuss the external environment, “the enemy,” that forms the human threat landscape; in our second section, we will analyze the heuristic biases and beliefs that shape human responses to these threats; and finally, in our third section, we will discuss ways to design around – and design for – these heuristic biases and beliefs, in order to practically strengthen human cyber security.

The Enemy: Threats from the External Environment

If we consider the human being as a “system” within the context of systemic theory, the external environment provides inputs to the decision-making processes of every human. These inputs can be benevolent, but they can also be malicious and impair the way people make decisions. These threats form the “enemy” that we humans need to fight in order to reach better decisions for a safer cyber domain.

The Cyber Domain

As technology is evolving and disrupting our daily modus operandi, so is human behavior. Psychologists like Dr. Mary Aiken believe that people behave differently when they are interacting with the abstract cyber domain than they do in the real, face-to-face world. The term “cyber” here refers to anything digital, from Bluetooth technology to driverless cars to mobile and networked devices to artificial intelligence and machine learning. So: how is the cyber domain threatening human decision-making?

Threat number one: cyber safety is an abstract term. People can understand the danger of driving drunk far more easily than the danger of having an unpatched personal or corporate computer connected to the internet. As a result, people often fail to recognize security risks or the information provided to cue them, and they tend to believe they are less vulnerable to risks than others. As Ryan West notes, most people believe they are better-than-average drivers and that they will live beyond the average life expectancy. People also believe they are less likely than others to be harmed by consumer products. It is therefore reasonable to conjecture that computer users hold the preset belief that they’re at less risk of a computer vulnerability than others. Further, the pro-security choice (e.g., encryption) often has no visible outcome, and there is typically no “visible” threat (e.g., an email interceptor). The reward for being more secure, then, is that nothing bad happens – which by its nature makes it difficult for people to evaluate security as a gain when mentally comparing costs, benefits, and risks. In fact, if we compare the abstract reward of being more secure (safety) against a concrete reward like viewing an email attachment, the outcome does not favor security at all. This is especially true when users do not know what their level of risk is, or believe they are at less risk than others to start.

Cyber is addictive. A study has found that the average mobile phone user checks their device more than fifteen hundred (1,500) times a week. This is enhanced by the very nature of the web. The internet is always there, open 24/7, always full of promises, content, and data. It is also full of intermittent rewards, which are more effective at fostering addiction than continuous rewards. Do you remember the movie You’ve Got Mail with Tom Hanks and Meg Ryan? At some point Tom Hanks says that there’s nothing more powerful than the simple words “You’ve Got Mail.” This is the very essence of cyber addiction – we check our devices because sometimes we’re lucky enough to be rewarded with a notification. When something is addictive, we make irrational decisions every time it’s involved in a set of choices. I search, therefore I am; I get likes, therefore I exist. We check our mail, now, and again, and again.

This leads us to another threat: the time we spend online. When we’re checking our mobile phones or typing this paper, we are effectively in a different environment; we have gone somewhere else, outside the physical world’s time and space. That’s because cyberspace is a distinct space, quite different from the actual living space where our families, homes, and jobs are located. Many of us have felt “lost in time” while surfing online, because we haven’t learned to keep track of time in the cyber domain. This fundamentally affects how we behave and make choices in cyberspace. (And as far as online security goes, more time simply means more risk.)

Finally, we must address the libertarian nature of the internet. The internet is designed to be free. But where does freedom end and totalitarianism begin? What and where is the frontier between freedom and corruption? When is “freedom of speech” in fact fake news or disinformation? And who decides that certain opinions are fake news, if anyone should decide that at all? Similarly, when does personal interest override that of the greater online community? The idea of freedom online is quite contentious and raises many ethical questions – yet currently very little regulation exists.

It should be clear that our environment has an impact on our decision-making processes. Our instincts have evolved throughout history to handle face-to-face interactions with other human beings, but once we are in the cyber domain, these instincts quickly fail us.

The Ever-Evolving Technology

As devices and gadgets change, the cyber environment changes with them, which impacts our behavior all over again. More change leads to more new situations, creating only more confusion.

Until a few decades ago, the pace of each technological revolution allowed humans to assimilate its changes and safely integrate them into their day-to-day activities. During the past twenty years, however, the evolution of digital technology has become so frenetic that people cannot cope with it. We don’t need to name all the buzzwords that arise every single day to document this argument.

Even “digital natives,” to use Marc Prensky’s controversial term, sometimes feel helpless in the face of ever-evolving technology. We haven’t yet found a pattern by which we can effectively leverage this technology for good; we are not sure how to use it functionally and safely; and we are not sure what the long-term (and even short-term) side effects of these newborn technologies will be.

Certainly, there are good uses and practices for effectively integrating evolving technology into our lives, but there are bad practices as well that many of us follow. In addition, this technology has brought an unprecedented revolution in digital content creation. This raises many questions: What are the implications of exposing so much sensitive data online? Who can benefit from them? Can we protect our precious assets, or are we unlocking our homes to the worst criminals who can erase our lives with just one click? (Remember the movie The Net with Sandra Bullock?)

Technology is not good or bad in its own right; it is neutral, and it simply mediates, amplifies, and changes human behavior. It can be used well or poorly by humankind – and in many ways, it’s no different from how we regard driving cars or using electricity or nuclear energy. Any technology can be misused. Thus, the central question: what are the universal acceptable ethics for using cyber technology?

Education

Education is obviously a factor that shapes human behavior. Many of us have read countless articles about the necessity of recurring education and the return on investing in it. On a macro level, a broad lack of education can result in ignorance, authoritarianism, or even anarchy. Our lack of comprehensive cyber education therefore presents major risks in the way people perceive cyber, its risks, and its threats.

Another issue is the effectiveness of the cyber education that does exist. Education should aim to answer the “why” and not only the “how” of security. It should aim at deep learning and retention, and it should certainly be recurring. Unfortunately, this is not the case. Typical learning situations rely on positive reinforcement when we do something “right.” Simply put, when we do something good, we are rewarded. In the case of security, though, when the user does something “good,” the only reinforcement is that bad things are less likely to happen. Such a result is quite abstract and provides neither an immediate reward nor instant gratification, both of which are powerful reinforcers in shaping behavior.

We should also examine the opposite – how our behavior is shaped by negative consequences when we do something “wrong.” Normally, when we do something bad in a learning environment, we suffer the consequences. In the case of security, however, the consequence of bad behavior is not immediately evident. It may be delayed by days, weeks, or months, if it comes at all. (Think of a security breach or a case of identity theft, for instance.) Cause and effect is learned best when the effect is immediate, and the anti-security choice often has no immediate consequences. This makes it hard to foster an understanding of consequences, except in the case of spectacular disasters.

It’s also important to consider that factors such as willpower, motivation, risk perception, cost, and convenience are often more important than the lack of cyber knowledge itself.

Social Engineering

Social engineering attacks, largely orchestrated through phishing messages, remain a persistent threat that allows hackers to circumvent security controls. Attackers manipulate people into revealing confidential information by exploiting their habits, motives, and cognitive biases. Research on phishing largely focuses on users’ ability to detect structural and physical cues in malicious emails, such as spelling mistakes and differences between displayed URLs and the URLs embedded in HTML code. Humans often process email messages quickly using mental models or heuristics, thus overlooking cues that indicate deception. In addition, people’s habits, needs, and desires make them vulnerable to phishing scams that promise rewards.
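One of the structural cues described above – a mismatch between the URL a link displays and the URL actually embedded in the HTML – can be checked mechanically. The sketch below is a minimal illustration using only Python’s standard library; the class name and the flagging heuristic are our own, not from any particular mail filter.

```python
# Minimal sketch: flag links whose visible text looks like a URL/domain
# but whose href points at a different host (a classic phishing cue).
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchDetector(HTMLParser):
    """Collects <a> tags whose visible text names a different host than the href."""
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.current_text = []
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href", "")
            self.current_text = []

    def handle_data(self, data):
        if self.current_href is not None:
            self.current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self.current_href is not None:
            text = "".join(self.current_text).strip()
            shown_host = urlparse(text if "://" in text else "http://" + text).hostname
            real_host = urlparse(self.current_href).hostname
            # Only flag when the visible text itself looks like a domain
            if shown_host and "." in text and shown_host != real_host:
                self.mismatches.append((text, self.current_href))
            self.current_href = None

detector = LinkMismatchDetector()
detector.feed('<a href="http://evil.example.net/login">www.mybank.com</a>')
print(detector.mismatches)  # the displayed "bank" domain hides a different host
```

A real mail filter would add many more cues (reply-to mismatches, punycode look-alikes, and so on); the point here is simply that this particular cue, which humans routinely skim past, is trivially machine-checkable.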

Awareness of phishing messages among users has increased, but so has the sophistication of the messages themselves. Hackers design phishing messages today to activate basic human emotions (e.g., fear, greed, and altruism) and often target specific groups to exploit their specific needs. Hackers sometimes even contextualize messages to individuals by incorporating their personal information (spear phishing). For instance, a new phishing scam has arisen on dating applications: a bot account starts a conversation with a victim and, after a few exchanges, sends a malicious link, ostensibly to a picture, in an attempt to get the victim to click on it. Research shows that such spear phishing attacks are more effective than generic phishing messages, which target a wider population.

The Malevolent Actors

These days, cybercrime is far more organized than ever before. Cyber criminals are well-equipped, well-funded, and they have the tools and knowledge they need to get the job done. But to really understand cyber criminals, we mainly need to know one thing: their motives.

Overwhelmingly, cyber criminals are interested in money. Either they’ll use ransomware to extort money, or they’ll steal data that can be sold on dark web markets. Their main course of action is through phishing campaigns, which can come pre-designed at a low cost and can have a truly staggering return on investment. Typically these campaigns are used to deliver malware (often ransomware), and emails usually include a strong social engineering component. For instance, recipients are often asked to open or forward attachments such as false business documents, which activate malicious software when opened.

Unlike cyber criminals, hacktivists are generally not motivated by money. Instead, they are driven by revenge. Hacktivists often work alone, making their attacks extremely difficult to predict or respond to quickly. Sometimes these hacktivists are insider threats who know how to bypass an organization’s security defenses, but the real risk lies in the fact that there’s no way of knowing who they are or when they’ll strike, and it is more difficult to attribute a hacktivist’s motives. We at least know that their main course of action is typically the DDoS (distributed denial of service) attack, used primarily to embarrass the victim.

In recent years, we’ve all heard a lot about state-sponsored attacks and cyber espionage. Unsurprisingly, state-sponsored attackers usually aren’t interested in our money. Instead, they want our data, and that means gaining sustained (“persistent”) access to our IT infrastructure. If an organization operates in a particularly sensitive market where proprietary data is meticulously safeguarded (e.g., critical infrastructure or electoral systems), then it is at greater risk of attracting the attention of a state-sponsored hacking group. Because so much is online, state-sponsored groups will often work on multiple attack vectors simultaneously. In this way, they can collect sensitive data over a long period rather than simply performing a “raid operation.”


Cyber is the battlespace where many interests collide. In the midst of the haze and dust of this collision is the human, who is the recipient of many external inputs, good and bad, that shape the way we react and behave. But even apart from these external inputs, each human’s cognitive and heuristic biases also play an incredibly important role – which we will discuss more in the second section.

Part II: The Human


In the first part of our paper focusing on the human side of cyber security, we discussed “the enemy” – the external environment that introduces threats to which humans must respond.

In this second section we will focus on the human beliefs and heuristic biases that shape our actions in the cyber landscape. Combining knowledge in behavioral economics, cyberpsychology, social engineering, and the decision sciences, we will answer a fundamental question in cyber security: why is the human the weakest link?

The Human

The factors that make up an organization’s cyber security standing are innumerable, from firewalls and encryption standards to incident-reporting protocols and overall security culture. And because most organizations are still playing “catch-up” when it comes to robust cyber security technology, their human employees are left largely vulnerable.

Humans are flawed both in retention of information (e.g. from security training) and use of information (e.g. decision-making while using technology), which makes us a particular risk to all modern organizations. This is reflected in the numbers: reports from this past year indicate that well over two-thirds of all cyber security incidents are either directly caused by human error or made possible by human exploitation and manipulation – such as with phishing and spear phishing attacks. Hackers are fully cognizant of the particular vulnerability of us humans, and they are more than happy to exploit it (as discussed in our first section).

Decision Heuristics and Cyberpsychology

Our brains are evolutionarily wired to decrease our decision-making time through decision heuristics – essentially, simple and efficient rules that guide our judgments. While these cognitive shortcuts are for many reasons extremely beneficial, they also leave us prone to heuristic biases, which lead us to make misjudgments and incorrect assumptions when evaluating a set of choices. These are fundamental to understanding how humans behave in the cyber domain.

Separate but related to this point is cyberpsychology, the up-and-coming field of understanding how humans interact with, and are shaped by, technology. Whereas decision science broadly studies human decision-making processes and their applications, cyberpsychology is specific to technology – asking which behaviors change when we’re “in” the cyber world, and which ones don’t. Perhaps unsurprisingly, it’s more often the case that human behavior significantly (arguably, sometimes, even radically) shifts as soon as we sit down in front of a screen. Field expert Dr. Mary Aiken broadly refers to this shift as the cyber effect, which encompasses everything from behavioral amplification to online disinhibition. We’ll come back to this shortly.

Passwords and Logins

Security experts and IT professionals are constantly recommending best password practices – yet the majority of people don’t follow them. This is because password creation and retention poses a significant challenge to most human beings.

First of all, most of us tend to select memorable passwords – be they the names of our family members, an important date in our lives, or the title of our favorite movie. We make these passwords memorable because we need so many of them; it’s nearly impossible for any of us to memorize the passwords to all of our accounts, from email, social media, and streaming services to online shopping, work databases, and mobile banking. This tendency toward memorable passwords existed under old NIST guidelines (when best password practices meant nonsensical combinations of letters, numbers, and symbols), and it still does today – even as NIST recommends the use of long passphrases. Simply, it’s easier to remember personally significant information than random symbols and words – so we’re more likely to select passwords like 125elmST (e.g., an old address) and johnlucyjacksarah (e.g., children’s names) than something like e7@4j8!9.

There are undoubtedly tradeoffs to be made between password strength, variance, and memorability; having many memorable passwords is a tradeoff against a single but very complex (and unmemorable) one. The problem is, though, memorable passwords are easy for a social engineer to crack. A few minutes perusing Kali Linux or another penetration testing toolkit will quickly reveal the plethora of robust software available for this task, from pre-assembled dictionaries of popular passwords to scripts that scramble information on a target (e.g. name, spouse, hometown, etc.) into custom password dictionaries. Our predictable behavior inherently leaves us – and by extension, our friends, families, employers, and services – vulnerable to being hacked.
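The “custom password dictionaries” built from a target’s personal details, mentioned above, can be sketched in a few lines. This is an illustrative toy rather than a real penetration-testing tool, and the target facts and years below are hypothetical.

```python
# Illustrative sketch: scramble known facts about a target into candidate
# passwords, the way real dictionary-builders in pentest kits do at scale.
from itertools import permutations

def custom_dictionary(facts, years, max_words=2):
    """Combine known facts about a target with capitalization, years, and suffixes."""
    candidates = set()
    words = [f.lower() for f in facts]
    for n in range(1, max_words + 1):
        for combo in permutations(words, n):
            base = "".join(combo)
            candidates.add(base)
            candidates.add(base.capitalize())
            for y in years:
                candidates.add(base + str(y))
            candidates.add(base + "!")
    return sorted(candidates)

# Hypothetical target facts, mirroring the paper's examples
guesses = custom_dictionary(["john", "lucy"], years=[1984, 2017])
print(len(guesses))  # a small personal fact set already yields dozens of guesses
```

Even this toy shows the asymmetry: two names and two dates produce a candidate list a machine exhausts in milliseconds, which is why personally significant passwords offer so little real protection.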

When we’re forced to change our passwords, whether because of an operating system requirement or a workplace policy, we tend to overlap new passwords with old ones. If my current password is strongP@ssw0rd!123, I’m likely to make my new password something like strongP@ssw0rd!456 or even strongerP@ssw0rd!123 so I don’t have to memorize a new phrase. Patterns and habits, particularly in cyberspace, are important – hence status quo bias, or our preference for defaults (in other words, leaving things as they are rather than exerting effort to deal with change). Again, though, this predictable behavior leaves humans vulnerable. If a hacker has been using an employee’s login under the radar and suddenly notices the password was changed (because they can no longer log in), they can leverage knowledge of this bias and simply guess passwords surrounding the old one – e.g., by altering a few numbers or letters (as the employee above just did). The password’s strength is suddenly meaningless.
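The guessing strategy just described can be sketched concretely. This assumes an attacker who already knows the old password; the function name and the digit-bumping heuristic are our own illustration of one common rotation pattern.

```python
# Minimal sketch of the bias exploit described above: given a compromised
# old password, enumerate near variants the user is likely to have rotated to.
import re

def rotation_guesses(old_password, limit=20):
    """Guess new passwords by incrementing the last run of digits in the old one."""
    guesses = []
    match = re.search(r"(\d+)(?!.*\d)", old_password)  # last run of digits
    if match:
        start, end = match.span()
        number, width = int(match.group(1)), len(match.group(1))
        for delta in range(1, limit + 1):
            bumped = str((number + delta) % (10 ** width)).zfill(width)
            guesses.append(old_password[:start] + bumped + old_password[end:])
    return guesses

print(rotation_guesses("strongP@ssw0rd!123", limit=3))
```

Twenty guesses against a rotation habit is a far smaller search space than brute-forcing the password itself, which is the sense in which the new password’s nominal strength becomes meaningless.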

Detecting Threats: Phishing, Social Engineering, and More

Heuristic biases make us humans incredibly vulnerable – and ineffective – when it comes to detecting cyber threats, which is only compounded by our lack of cyber lexicon and lack of “instincts” in the cyber domain. As previously referenced, this is exactly what attackers target.

Humans want to be trusting, and this is evidenced by the effectiveness of phishing attacks. We might be trained on what phishing emails look like; we might be told never to trust an email sent from an unknown source; we might even be told not to trust suspicious emails sent by our friends. In practice, however, this mode of education largely falls apart. While many phishing emails will pass through our inboxes each year (and data on workplace security breaches confirm this), the majority of emails are likely not phishing attacks. Each time we open a semi-suspicious email that turns out to be safe, we are falling prey (like it or not) to confirmation bias. The preconceived notion that we don’t need to scrutinize every email is reinforced each time we open a suspicious email without negative consequence. Thus, the attention paid to each suspicious email decreases over time – only increasing the likelihood that a phishing attack will succeed.

Representativeness also comes into play here. When we routinely perform a task, our brains naturally categorize small variants of that task together to reduce decision-making time. If a manager sends out a “weekly recap” email every Friday, for instance, employees won’t look twice at a “weekly recap” email, sent on a Friday, that is in fact not from the boss’s account. Our tendency to incorrectly group new circumstances with previous experiences can be deadly; this is why social engineers actively look for ways to insert malicious URLs, documents, and more alongside existing patterns of cyber behavior.

Adding online disinhibition into the picture illustrates why we’re even more vulnerable to social engineering attacks: pioneering research in cyberpsychology shows that we behave far more recklessly online than in our physical day-to-day interactions. Dr. Aiken even compares some of our daily cyber behavior to drunken intoxication, as we trust others easily and disclose our personal information more quickly. Combine this with hyperpersonal online interaction (reduced social barriers to intimacy and information-sharing) and stranger-on-the-train syndrome (the tendency to share sensitive information with those we feel we won’t see again), and we’re already very prone to sharing too much information online – made all the worse by malicious actors who additionally exploit our heuristic biases. The way we behave online, and the way we make decisions about what we click on and what we divulge, are critical reasons why humans are indeed the “weakest link” in a cyber security posture.

Linked to these ideas is something all of us face: optimism bias, or thinking we navigate the world “better” than others do. In the physical world, this arises all the time. We tend to exempt ourselves from rules, policies, and standards that we hold others to just because we think we’re above them; for instance, many people text and drive, despite hard numbers that show the extreme danger to themselves and others, because they think that they (unlike everyone else) can multitask on the road. (Of course, many studies show “multitasking” in its fullest sense is actually impossible.) This carries directly over to the cyber domain, where we all rank our performance above the “average” and thus let ourselves fall below standards of secure cyber behavior. If another employee causes a data breach because of a weak password, we’ll likely reprimand them for it; however, we likely allow ourselves to save passwords in a browser or circumvent multi-factor authentication without that same reprimanding. Similarly, an employee may be entirely aware of the danger posed by social engineering and yet put no care into preventing phishing attacks, simply because they feel they don’t have a need – and while we may scold them in our head, most of us probably do the same.

Humans are also prone to recency bias, which makes us more concerned with the information that has been presented most recently. If an employee was just trained for three hours on the dangers of malware, for example, they’re much more likely to worry about internet downloads than they are to scrutinize an email for signs of phishing. This is arguably an obvious point, but the underlying heuristic bias is critical to human cyber vulnerability – because the “latest” threats will implicitly receive greater priority. The ordering of security training, then, suddenly becomes important.

Frequency bias, or the favoring of reinforced information, similarly affects human takeaways from security training. It makes sense that the more an issue is discussed, the more its perceived importance increases – but in security, where there are simply too many topics to all be covered in-depth, this presents an especially complicated problem. If a company spends 5 hours training on a difficult topic like email encryption or password creation, and spends only 3 hours on phishing and social engineering attacks, employees are more likely to prioritize defending against the former (when in fact, social engineering poses a far greater risk). Balancing the time spent covering a cyber security threat with the rate of its appearance is obviously challenging (and will be covered in our next section), but we must recognize this bias in the first place.


Human behavior in cyberspace is incredibly flawed, from the vulnerability of decision heuristics to the bizarre behaviors we adopt simply because of technology’s unfamiliarity. By better understanding the decision science, behavioral economics, and cyberpsychology behind human cyber behavior, we can better attain practical human security. Thus, with “the enemy” and “the human” both analyzed, our third and final section will focus on “winning the battle” – that is, designing cyber security with the human in mind.

Part III: Winning the Battle


The first part in our paper, “The Enemy,” focused on the external environment and the threats it introduces to the cyber landscape, while our second section, “The Human,” discussed the heuristic biases and beliefs that shape human responses to these threats. In the final part of our paper, we are going to discuss how “not [to] fear the result of a hundred battles,” or in other words, how to design security policies around – and for – the human.

Theoretical Background

Several decades ago, child development psychologist Jean Piaget stated that “the principal goal of education is to create men who are capable of doing new things, not simply of repeating what other generations have done – men who are creative, inventive and discoverers.” Building on this, we can view learning as a process of acquiring and building knowledge with strong social and experiential components.

Educational research has identified that people learn more effectively and deeply through engagement, motivation, cooperation and collaboration, and participation in real experiences; thus, conventional teaching methods cannot meet the learning requirements of today. Building and sharing knowledge, which looks to inherent motivation, proves quite advantageous for effective teaching and curriculum development – a far cry from bureaucratic styles of education that value quantity over quality. Through techniques and methodologies such as open discussion forums and hands-on exercises, people in small groups may develop critical thinking, learn to mobilize toward common goals, and rely on a collective intelligence superior to the sum of its individuals.

In accordance with Bloom’s taxonomy, though, teaching activities should not just focus on transmitting information; they should also focus on application. Leveraging old information to solve a new and challenging problem is essential to fostering retention and developing new knowledge. Perhaps predictably, this quickly becomes cyclical – with old knowledge reinforced through application, and application yielding new knowledge to be applied, and so on and so forth. The byproduct of this process is often referred to as deeper learning. Gamification and simulation are just two examples of implementing this deep learning process.

Defaults and “Nudges”

Economist Richard Thaler and legal scholar Cass Sunstein outlined, in their book Nudge, the idea of libertarian paternalism – a decision-modeling framework in which nobody has their choices altered or limited (the libertarian element), but in which, by framing choices in a certain way, choice architects can help people pick the best option (the paternalism element). In essence, the idea is to nudge individuals in the right direction without restricting their freedoms.

There are many ways to achieve this “nudging,” including reordering options and increasing the amount of available background information, but we want to focus on one method in particular: changing the default.

Defaults are incredibly powerful when it comes to decision-making; the decision science and behavioral economics studies on this are plentiful. Because of status quo bias – essentially, our aversion to putting effort into change – most of us are likely to stick with the default option in any given decision scenario. Nudge shows this to be true with everything from college dining hall buffets to corporate 401(k) plans. For these reasons, security-by-default is one of the most effective ways to “win the battle” when it comes to practical human security. Making cyber safety the status quo will all but guarantee more secure behavior overall, because most tech users will simply stick with that default.

This idea of “defaults” has many implications for how organizations design, execute, and reinforce security training, but that will be addressed in the next section; for now, we’re going to focus on how organizations can institute security-by-default in technology itself.

Making security the default in technology can take many forms:

  • Implement the strongest possible encryption on all devices you buy for your organization, be they smartphones, laptops, or IoT sensors.
  • Install, and set as the default, encrypted communication software – from Signal with its perfect forward secrecy to PGP-secured email applications.
  • Restrict Internet access (e.g. to work-only websites) and ensure all accounts have, by default, only the minimum level of access required to perform basic tasks (e.g. prevent software installations).
  • Enable email filtering, mandate multifactor authentication, and set baseline password requirements.
  • Configure malware removal software, internal and external firewalls, and automatic account “lockdowns” after a certain period of inactivity.
  • Constantly monitor new industry guidelines and cutting-edge research to adapt these security defaults – for instance, what constitutes a strong password.

Overall, spare employees complicated and undesirable issues of distrust whenever possible; if they’re going to be annoyed when you ask them to double-check that their personal USB hasn’t been hacked, then don’t let them plug it in to begin with. When security is the default, employees and users will almost automatically become more secure.
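These defaults can be sketched as a simple configuration baseline that provisioning tooling checks devices against. This is a minimal sketch: every field name and threshold here is an illustrative assumption, not drawn from any specific standard or product.

```python
from dataclasses import dataclass


@dataclass
class DeviceDefaults:
    """Hypothetical secure-by-default baseline for newly provisioned devices."""
    disk_encryption: bool = True       # strongest available encryption, on by default
    mfa_required: bool = True          # multifactor authentication is mandatory
    email_filtering: bool = True
    min_password_length: int = 12      # revisit as industry guidance evolves
    idle_lock_minutes: int = 10        # automatic "lockdown" after inactivity
    usb_storage_allowed: bool = False  # don't let them plug it in to begin with


def violations(device: DeviceDefaults) -> list:
    """List every way a device's configuration falls short of the baseline."""
    baseline = DeviceDefaults()
    problems = []
    for flag in ("disk_encryption", "mfa_required", "email_filtering"):
        if getattr(baseline, flag) and not getattr(device, flag):
            problems.append(flag + " disabled")
    if device.min_password_length < baseline.min_password_length:
        problems.append("password policy weaker than baseline")
    if device.idle_lock_minutes > baseline.idle_lock_minutes:
        problems.append("idle lock slower than baseline")
    if device.usb_storage_allowed:
        problems.append("USB storage enabled")
    return problems
```

The design point is the default values themselves: a device that nobody touches passes with zero violations, so the secure configuration is the one that requires no effort.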

It’s also important to understand that just because some of these cyber security elements are “defaults” doesn’t mean they should be presented as options in the first place. Largely, they should not. Encryption and multifactor authentication are two perfect examples of human-side security factors that shouldn’t have an “opt-out.” Just as it shouldn’t be an option for employees to disable or weaken encryption, it shouldn’t be an option to use only single-factor authentication (i.e. just a username and password). When it comes to security, defaults without the libertarian element are often best.

Decision Heuristics, Feedback Loops, and Security Training

As we discussed in the first part of our paper, security training and education are currently inadequate for addressing environmental cyberspace threats, as well as the heuristic and cognitive biases that guide our cyber behavior. Even with security-by-default, as addressed in the last section, we still aren’t protecting against situations in which (a) organizations can’t make security the default, because control inherently lies with the human, and (b) humans change that default, becoming less secure in the process.

The classic (and currently pervasive) solution to this problem is to impose a clear corporate security policy – for instance, that users cannot use portable USB storage devices. On its face this seems effective, as the structure of the organization inherently incentivizes compliance with these policies…right? Wrong. From a human perspective, this is likely to fail for many reasons:

  • Users will disregard the policy because they don’t understand the risk involved.
  • They may not be aware of the policy, or they may even forget the policy.
  • Environmental situations may arise in which users have to use a removable storage device, so they will make functional exceptions (convenience over security).
  • Humans are prone to optimism bias – thinking we’re better at certain behaviors than others (e.g. cyber security) – and will thus exempt themselves from secure behavior even when functionality or convenience isn’t directly part of the equation.
  • If employees violate the policy once and there are no negative consequences, they will likely do so again.

The most reliable way to prevent the risk, then, is to take the user out of the equation. But this is tantamount to amputating an aching arm. We cannot divorce humans from technology, or technology from humans – so while this works in theory, it’s unacceptable in practice.

So the question remains: what about the underlying issues of risk perception and diffusion of responsibility (from which many other risks arise)? In these cases, it’s necessary to raise user awareness of security issues and actively engage them in the security process, without creating an environment of paranoia. In short: it’s about designing security training and security policies for the human.

Awareness and training programs are important mechanisms for disseminating security information across an organization. They aim to stimulate secure behaviors, motivate stakeholders to recognize security concerns, and educate them to respond accordingly. Since security awareness and training are driven not only by requirements internal to the organization but also by external mandates, they must align with regulatory and contractual compliance drivers as well. Current literature and guidelines, such as those from ENISA and NIST, additionally emphasize alignment with business needs, IT architecture, and workplace culture.

Target participants of awareness programs include senior management, technical personnel, employees, and third parties employed by the organization (e.g., contractors, vendors, etc.). Awareness programs are essential because organizations need to ensure that stakeholders understand and comply with security policies and procedures; they also need to follow specific rules for the systems and applications to which they have access. However, as we explained previously, stakeholders’ behavior is influenced by individual traits and biases which impact their compliance with security policies and procedures. Thus, security awareness must be designed to tackle beliefs, attitudes, and biases.

Designers of security systems should consider adopting the systems approach to training, which is regarded as an effective education practice in the field of human factors and ergonomics.

Central to this approach is identifying participants’ cultural biases, which can facilitate needs assessments and provide an alternative criterion for grouping program participants. Because individuals’ cultural biases influence their perception and decision-making calculus, they also affect an individual’s risk assessment. This goes unaddressed in most contemporary security training programs, which is immensely problematic for how employees individually frame their knowledge after the session concludes. Without a relevant cultural framing (and this culture can take many dimensions), employees will fail to fully understand why security is so important.

Thankfully, framing cyber security in light of cultural biases can be done without expending significant additional resources. For instance, while convenience is heavily prioritized in technology, there are many cases in which users find a system’s aesthetics to be far more important. It is therefore possible for employees to value security over convenience – it’s just about making them understand why they should in the first place. Understanding where groups of employees are coming from (e.g. does their job value convenience, collaboration, speed, etc.) will help frame security’s relevance in the correct light; we might, for example, find that a litigation team best understands security in the context of risk avoidance, whereas an accounting team best understands security in the context of the confidentiality, integrity, and authenticity of data. Thus, it’s essential to design security policies with cultural biases in mind. In addition to creating and selecting culturally-relevant training materials and simulation exercises, it’s important to back this up with a strong corporate security culture.

In a similar vein, organizations must build strong feedback loops during and after security training; with weak feedback loops – meaning pro-security choices don’t yield any visible rewards (other than the unspoken “congrats, you didn’t get hacked!”) – employees are not behaviorally conditioned or incentivized for safe and secure cyber behavior. During training, the best source of guidance is past “success stories” in which security controls prevented security incidents, smart behavior blocked social engineering attacks, and clear reporting procedures resulted in the quick trapping and containment of an active breach. Post-training, techniques such as randomly spotlighting employees for smart security practices will further solidify feedback loops that promote cyber-secure behavior. (This specific example of using intermittent rewards is also extremely effective in conditioning.)
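The intermittent-reward idea can be sketched as a variable-ratio schedule: rather than recognizing every secure action, each one is spotlighted with some probability, so employees can’t predict which act earns recognition. The spotlight rate and the example action names below are our own illustrative assumptions.

```python
import random


def spotlight(secure_actions, p=0.2, rng=random):
    """Select each secure action for public recognition with probability p.

    This implements an intermittent (variable-ratio) reward schedule:
    unpredictable rewards condition behavior more strongly than rewarding
    every single act. The rate p and the action names are assumptions.
    """
    return [action for action in secure_actions if rng.random() < p]


# Hypothetical secure behaviors logged over a week:
actions = ["reported phishing email", "declined unknown USB",
           "enabled multifactor authentication", "flagged suspicious login"]

# Reward roughly one in five secure actions, chosen at random:
rewarded = spotlight(actions, p=0.2)
```

Passing a seeded `random.Random` instance as `rng` makes a given spotlight round reproducible for auditing, while the default keeps selection unpredictable to employees.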

By pairing secure choices with increased reward – even if it’s in a “fake” environment – we can shape smarter behavior in the workplace. If employees experience the value of screening an email during a simulation (e.g. preventing a phishing attack from a foreign competitor), then they’re more likely to scrutinize suspicious messages in real life. This is because self-realization and application, as previously referenced, are incredibly important for knowledge retention and re-application.

Closely linked with strong feedback loops is positive association. Research on cognitive biases has identified that individual judgments are affected by exposure to positive or negative stimuli (e.g. a smiling or frowning face), which decision scientists refer to as affect bias – our quick emotional reaction to a given stimulus. Thus, associating security messages with positive images (e.g. happy customers mean more profit) is quite effective for ensuring users’ compliance with your security policies. Rewarding strong performance on security tests (whether scheduled or “spontaneous” – e.g. sending employees phishing emails) will also help achieve this end.

Anchoring bias, or our tendency to rely on the first piece of information presented on a topic, also heavily influences attitudes towards new security practices. If employees are told that strong passwords have at least six characters, for example, they’re likely to use just six characters and not opt for anything stronger; they won’t deviate from this anchoring information. This has implications from email scrutiny all the way to online browsing behavior. Similarly, we humans are prone to frequency bias, or prioritizing issues about which we have more information, and recency bias, or prioritizing issues about which we’ve been educated most recently. If an employee is trained for five hours on password creation but only three on phishing attacks, they will pay greater attention to the former (despite the latter being a greater and more complicated threat).

Since our brains rely heavily on the order and frequency with which information is presented, we need to design security policies for these tendencies. To design for anchoring bias, we should open a topic by presenting the strongest and most effective security practices (e.g. say passwords should be at least 12 characters instead of 6); to design for frequency bias, it’s imperative to balance the time spent on a topic with its importance (e.g. spending the most time on social engineering threats); and to design for recency bias, we should end security training (and security retraining) by covering the most prevalent threats.
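The anchoring advice can be illustrated with a minimal password check that leads with the strong rule, so the first message users see is the one they anchor on. The 12-character minimum and the other rules are assumptions standing in for whatever current guidance recommends, not a vetted policy.

```python
import string


def password_feedback(password, min_length=12):
    """Return shortcomings, leading with the strong length rule as the anchor."""
    issues = []
    if len(password) < min_length:
        # Present the strongest requirement first, so users anchor on it
        # rather than on a weaker six-character rule of thumb.
        issues.append("use at least %d characters" % min_length)
    if not any(ch.isdigit() for ch in password):
        issues.append("add a digit")
    if not any(ch in string.punctuation for ch in password):
        issues.append("add a symbol")
    if password == password.lower() or password == password.upper():
        issues.append("mix upper- and lower-case letters")
    return issues
```

Because `min_length` is a parameter rather than a constant scattered through the code, the anchor can be raised in one place as industry guidance evolves.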

Continuing with the order and timing of information: humans tend to attribute greater value to short-term costs and benefits than to long-term ones. In other words, security experts should emphasize not only the long-term and macro-level benefits of secure cyber behavior (e.g. better growth) but also the immediate, short-term benefits. We need only turn on the news to see a plethora of examples for this emphasis, from avoiding massive monetary loss to preventing a legal and PR nightmare. Immediate costs resonate effectively with us humans.

We already discussed positively reinforcing secure behavior, but it’s also (obviously) critical to punish violations of security policies. Having a corporate security policy that is not monitored or enforced is tantamount to having laws but no police. Organizations must monitor employee behavior – in addition to the behavior of those doing the monitoring – and act when rules are broken. This connects back to strong feedback loops and the idea of humans favoring the immediate effects of our actions: the best deterrent to breaking the rules is not the severity of consequences but the likelihood of being caught.

A final consideration to take into account is how to reduce the human cost of implementing security. This encompasses many of the ideas in our paper, from security-by-default on the technology side to effective designing of security training, framing of cyber security issues, and conditioning of secure cyber behavior on the human side.

Unfortunately, the aforementioned practices alone are not enough to totally win the battle; despite the title of our piece, presuming to be “victorious” in the truest sense of the word would be delusional. Security awareness must be a nationwide strategic goal. It requires a holistic approach, from governments, policymakers, and tech leaders to citizens, consumers, and students. Security awareness programs must be carefully designed to run through the backbone of our society and should become an integral part of our educational system. Curricula should not focus only on programming or technical literacy but also on cyber security literacy; we need to build a cyber lexicon and a common framework to understand cyber behavior. There’s still much to be done.


Peggy Ertmer argues that changing one’s attitude is a hard thing to do but can be achieved through practice, cultural support, and challenging beliefs through community. There’s a long path to follow until we reach a safer cyber environment, much like the path of Areti (Virtue) in the labors of Hercules: narrow and full of difficulties in the beginning, but wide like an avenue at the end. In the military they say that if you want peace, you have to prepare for war.

Considering our paper and its ideas in their entirety, this is exactly what we have to do. If we want to change the security culture of our society, we need, as Dr. Mary Aiken says, to stop, disconnect, and reflect. We need to remember the human.


Part I: The Enemy

Aiken, M. (2016). The Cyber Effect: A Pioneering Cyberpsychologist Explains How Human Behavior Changes Online. London, UK: John Murray.
West, R. (2008). The Psychology of Security. Communications of the ACM, 51(4), 34-40.
Goel, S., Williams, K., & Dincelli, E. (2017). Got Phished? Internet Security and Human Vulnerability. Journal of the Association for Information Systems, 18(1), 22-44.
Recorded Future (2016, Aug. 23). Proactive Defense: Understanding the 4 Main Threat Actor Types. Retrieved from https://www.recordedfuture.com/threat-actor-types/.

Part II: The Human

Boulton, C. (2017, Apr. 19). Humans Are (Still) the Weakest Cybersecurity Link. Retrieved from https://www.cio.com/article/3191088/security/humans-are-still-the-weakest-cybersecurity-link.html.
Aiken, M. (2016). The Cyber Effect: A Pioneering Cyberpsychologist Explains How Human Behavior Changes Online. London, UK: John Murray.
Thaler, R. & Sunstein, C. (2009). Nudge: Improving Decisions About Health, Wealth, and Happiness. NY, NY: Penguin Books.

Part III: Winning the Battle

Aiken, M. (2016). The Cyber Effect: A Pioneering Cyberpsychologist Explains How Human Behavior Changes Online. London, UK: John Murray.
García-Valcárcel, A., Basilotta, V., & López, C. (2014). ICT in Collaborative Learning in the Classrooms of Primary and Secondary Education. Media Education Research Journal, 42(21), 65-74.
West, R. (2008). The Psychology of Security. Communications of the ACM, 51(4), 34-40.
Tsohou, A., Karyda, M., & Kokolakis, S. (2014). Analyzing the Role of Cognitive and Cultural Biases in the Internalization of Information Security Policies: Recommendations for Information Security Awareness Programs. Computers & Security, 52, 128-141.
National Research Council. (2000). How People Learn: Brain, Mind, Experience, and School: Expanded Edition. Washington, D.C.: The National Academies Press.
Thaler, R. & Sunstein, C. (2009). Nudge: Improving Decisions About Health, Wealth, and Happiness. NY, NY: Penguin Books.


Anastasios Arampatzis is a retired Hellenic Air Force officer with over 20 years of experience in managing IT projects and evaluating cybersecurity. During his service in the Armed Forces, he was assigned to various key positions in national, NATO and EU headquarters. Anastasios has been honoured by numerous high-ranking officers for his expertise and professionalism, and he was nominated as a certified NATO evaluator for information security.

He holds certifications in information security, cybersecurity, teaching computing and GDPR from organizations like NATO and Open University. Anastasios is also a certified Informatics Instructor for lifelong training.

Anastasios’ interests include exploring the human side of cybersecurity – the psychology of security, public education, organizational training programs, and the effect of biases (cultural, heuristic and cognitive) in applying cybersecurity policies and integrating technology into learning.

He is intrigued by new challenges, open-minded and flexible. Currently, he works as an informatics instructor at AKMI Educational Institute.

