
Companies are well aware that cyberattacks are becoming more and more sophisticated, and so they are continuously investing not only in security infrastructure but also in strengthening their internal security culture. Measures include employee training and a proactive risk management strategy. Training courses provide detailed information on the latest methods and strategies used by hackers, with the aim of promoting technological understanding and raising awareness and vigilance regarding cybersecurity. Participants gain the knowledge they need to recognize and mitigate potential security risks effectively.

But why do new tricks keep emerging? And why do users fall for them again and again? How is it possible that cyberattacks are still effective, despite all the measures that are in place?

Why is there an increase in cybercrime?

One of the meanest but most effective hacker strategies is to play on the psychological limitations of system users. Unfortunately, cyberattacks are no longer just a matter of technical vulnerabilities; they increasingly rely on the strategic analysis and exploitation of human behavior and cognitive biases. For attackers, it is often far more effective to target human behavioral patterns and mental processes as the potentially weak link in the cybersecurity chain than to hunt for vulnerabilities in the technology itself.

Exploiting instinctive reactions: Psychological strategies in cybercrime

Really good hackers know: Instinctive responses to threatening situations are not based on logic, but on automatic or semi-automatic reactions. This is a legacy of our evolutionary past, in which quick reactions were often life-saving. Attackers exploit precisely these primitive reaction patterns, relying on psychological tricks and deceptive maneuvers to mislead users and bypass security systems. But how do they do it?

Why users behave negligently in cyberspace

When a security incident occurs and human error is identified as the cause, we need to dig deeper: Why did the person in question behave negligently? To answer this question adequately, we first need to understand human motivation. In particular, it is important to understand what motivates people to behave cautiously in cyberspace – or not.

Psychological control processes in response to cyberthreats

Basically, there are two different psychological processes that can occur in response to perceived threats: the danger control process and the fear control process. These processes influence how people react to cyberthreats.

The danger control process

The danger control process is activated when users consider a threat to be high – for example, phishing emails that steal passwords – and they are convinced that they know exactly what to do to avert it. In this case, the person’s reactions are usually adaptive: users are motivated to control and minimize the danger. In the phishing email scenario, they will be motivated to follow established security guidance for handling email.

The fear control process

The fear control process occurs when a person perceives a threat as being high, but believes that there are no effective measures to deal with it. Users’ reactions are then aimed more at managing their own fear or insecurity than at the actual threat itself. People react with avoidance, denial, or exaggerated concern instead of taking effective action. Measures are perceived as ineffective if compliance with cybersecurity rules is too complex, or if several, sometimes contradictory, rules have to be taken into account at the same time.

In the context of cybersecurity, this means that training courses need to focus on developing effective intervention strategies aimed at promoting adaptive reactions and reducing maladaptive reactions on the part of the users. But unfortunately, the same applies here: People are not machines and their behavior online (as in real life) is ambivalent, complex, and sometimes unpredictable.

The danger of overestimating oneself in cyberspace

Unfortunately, research findings also show that people who feel well-informed tend to underestimate potential threats. They become negligent because they feel invulnerable. In other words: The more informed someone feels, the less motivated they are to take preventive measures.

Sadly, this is how our brains work: Most human errors occur through mental automation. Whenever a person feels confident in an activity or behavior, the brain switches to automatic mode. This can even happen with complex tasks such as driving, where intensive conscious thinking is initially required but is replaced by automation after repeated practice. These mistakes are often not recognized as such because the individual is simply following a deeply ingrained pattern of behavior. One example is automatically replying to “typical-looking” emails without recognizing that they are in fact phishing emails.
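One way systems can design around this kind of automation is to interrupt the habit before it completes. The following sketch (in Python, using only the standard library) illustrates the idea: it warns when a sender’s domain merely resembles a trusted one, so that a “typical-looking” email gets a deliberate second look before an automatic reply goes out. The allowlist of trusted domains and the similarity threshold are illustrative assumptions, not a recommended configuration or a complete phishing defense.

```python
# Minimal sketch: flag sender domains that look similar to, but do not match,
# domains the user normally trusts. Trusted list and threshold are assumptions.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}  # assumed allowlist
SIMILARITY_THRESHOLD = 0.8  # assumed cut-off for "suspiciously similar"


def sender_domain(address: str) -> str:
    """Extract the domain part of an email address, lower-cased."""
    return address.rsplit("@", 1)[-1].lower()


def looks_like_spoof(address: str) -> bool:
    """Return True if the sender's domain resembles a trusted domain
    without being identical to it (e.g. 'examp1e.com' vs 'example.com')."""
    domain = sender_domain(address)
    if domain in TRUSTED_DOMAINS:
        return False  # exact match with a trusted domain: nothing to flag
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= SIMILARITY_THRESHOLD
        for trusted in TRUSTED_DOMAINS
    )


if __name__ == "__main__":
    for sender in ("hr@example.com", "hr@examp1e.com", "news@unrelated.org"):
        flag = "WARN: possible lookalike domain" if looks_like_spoof(sender) else "ok"
        print(f"{sender:<25} {flag}")
```

The point of such a guardrail is not detection accuracy but timing: a visible warning forces a moment of conscious attention exactly where mental automation would otherwise take over.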

Consequences for cybersecurity design and training

The consequence of this must be to recognize human errors and limitations when designing cybersecurity systems and to respond to them proactively, instead of expecting users to always make the right decisions in cyberspace. Traditional cybersecurity training must also take greater account of people’s psychological limitations.

Furthermore, training courses must make users aware that hackers are not only getting better at understanding system technologies, but also, in particular, at understanding human psychology. In plain language: Users need to be educated about their own cognitive patterns and about the limits and weaknesses of their own thinking and behavior, so that they can keep a watchful eye out for new tricks. Educated users understand psychological tactics and social engineering strategies and know how these can be used to exploit potential security vulnerabilities. In particular, they understand how to adapt their behavior and take control of their own actions to effectively reduce the risk posed by cyberattacks.