Essays and articles about cyber risk often refer to human error as a major driver of cyber incidents and the financial losses arising from them – up to 95% of all breaches1. But what is human error and how are human traits and behaviours exploited by malevolent actors?
By better understanding these vulnerabilities, more can be done to address them and build greater cyber resilience.
This paper explores the human factor in relation to cyber vulnerabilities, considering a range of scenarios in which end users and infrastructure are exploited by hackers and fraudsters. It considers the different types of human vulnerability and how they might be reduced through better awareness and training and a more robust IT infrastructure.
Against a backdrop of global pandemic, we also consider the additional exposures businesses face when their staff are working remotely. In times of crisis, employees may be forced to fall back on dated, unpatched devices and software and are potentially more vulnerable to scams that prey on their natural fears.
Rather than blaming employees for making mistakes, the report argues it is time to take a new approach to the human side of cyber risk by identifying and addressing the root cause: human behaviour. It is an approach that recognises the nuances of what is primarily a human-driven threat, offering recommendations and solutions without stigmatising the user as the culprit.
We replace the term ‘human error’ with ‘human factor’ in an attempt to move away from negative connotations while also offering more scope for organisations to identify opportunities in addition to threats.
Organisations are investing heavily in protecting their IT systems with security controls and other infrastructure with spending set to surpass $42 billion in 20202. But is the corporate world investing as much as it needs to in ensuring that its people are the first line of defence when it comes to protecting their data and systems?
The human factor has less to do with actual error and more to do with inadequate security cultures and the exploitation of human behaviour and goodwill. By better understanding the way in which people operate in the workplace, as well as how malicious actors set out to exploit classic human traits, it is possible to identify and address areas of human fallibility.
Senior and middle management are more likely to be exploited by cyber criminals in a targeted attack than lower ranking employees3. This is because they tend to hold valuable information and/or have a high degree of access to such data. C-level executives are nine times more likely to be the target of social engineering than they were in years gone by4.
Social engineering is the art of tricking, seducing or scaring/blackmailing an individual into giving away personal or corporate information or taking action, such as authorising a payment. Email remains the main attack vector for social engineering, ranging from malicious spam through to business email compromise (BEC), or imposter attacks, that can cost organisations millions of dollars. Such methods are popular because they require minimal hacking expertise and attacks have a high success rate.
Perpetrators of BEC often target individuals responsible for sending payments, including company CFOs. Through social engineering, they use psychological manipulation to encourage users into providing information or making a financial transaction. There are different approaches to social engineering, ranging from simple lures that are designed to spark curiosity, such as sending a fake invoice to an accounts payable team, through to more elaborate schemes.
Attackers may use stolen branding to create landing pages or domains that appear – at first glance – to be legitimate. These techniques are used in BEC attacks, where threat actors attempt to convince their victims that they are communicating with a trusted entity, pretending to be a CEO, colleague, business partner or even support staff.
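One simple defensive measure against the lookalike domains described above is to compare the sender's domain against a list of domains the organisation normally deals with and flag near-misses. The following is a minimal, illustrative sketch; the allow-list and threshold are assumptions, not taken from the report.

```python
import difflib

# Hypothetical allow-list of domains the organisation normally deals with.
TRUSTED_DOMAINS = {"paypal.com", "example-bank.com", "ourcompany.com"}

def is_lookalike(sender_domain, threshold=0.8):
    """Flag a domain suspiciously similar to - but not exactly -
    a trusted one, e.g. 'paypa1.com' mimicking 'paypal.com'."""
    sender_domain = sender_domain.lower()
    if sender_domain in TRUSTED_DOMAINS:
        return False  # exact match: not a spoof by this heuristic
    for trusted in TRUSTED_DOMAINS:
        ratio = difflib.SequenceMatcher(None, sender_domain, trusted).ratio()
        if ratio >= threshold:
            return True  # close but not identical: likely lookalike
    return False

print(is_lookalike("paypa1.com"))  # near-match to paypal.com -> True
print(is_lookalike("paypal.com"))  # exact trusted match -> False
```

A real deployment would combine such checks with DMARC/SPF/DKIM validation rather than string similarity alone.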
According to the 2019 AIG EMEA Cyber Claims Intelligence Report, BEC has overtaken ransomware and data breach by hackers as the dominant driver of cyber insurance claims. Nearly a quarter of incidents reported to AIG in 2018 were due to losses arising from BEC, with anecdotal evidence that even very senior members of organisations had fallen victim to these scams. Globally, businesses have lost an estimated $26 billion to BEC over the last three years5.
As Sebastian Hess, Cyber Risk Advisor for AIG Europe, explains, one reason for this is the use of social engineering, which creates emails that appear legitimate, even to the well-trained eye.
“Social engineering continues to be one of the top threats, resulting in security incidents caused by targeting employees and senior management,” he says. “New trends involve email scams utilising social engineering techniques to harvest key data or money. These attacks are becoming more sophisticated and targeted with attackers stepping up their game and finding ways to counter protections.”
Around the world, businesses are faced with the growing reality of home working amid quarantine and social distancing restrictions. As they enact emergency business continuity plans, they and their staff are highly exposed to cyber threats. Employees doing their best in a difficult situation are often forced to fall back on personal devices that are less secure at a time when increased stress levels potentially make them easier to exploit.
Increased home working inevitably heightens employees’ risk of exploitation by malicious actors. While presently these risks have come to the fore as governments impose quarantine restrictions, such exposures are also part of a wider shift as businesses embrace digitisation and new ways of working.
Among the business challenges associated with home working are concerns over data security and ensuring workers have access to all the technology they need in order to work productively. In normal times, businesses can carefully consider their technology requirements when introducing or expanding flexible working practices and setting ‘bring your own device’ (BYOD) policies. This is not as easily done during an emergency.
When staff are suddenly forced to work remotely, they may need to rely on home computers and other personal devices. As demand for PCs grows amid disruption in the supply chain, there may be a shortfall in availability of laptops and other devices for organisations to provide for their staff.
Personal devices are unlikely to have the same protections as those in the workplace, or the same capacity to monitor activity. Firms therefore need to maintain high standards of IT security and help their employees keep personal devices secure.
Additionally, businesses are introducing new capabilities to allow remote connectivity, e.g. Remote Desktop Protocol (RDP), to enable their staff to work from home. But were security considerations part of this service introduction? Is the access appropriately secured?
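As a rough illustration of the exposure question raised above (not a recommendation from the report), a security team might verify that RDP is not reachable on a given host. The hostnames and ports below are assumptions for demonstration; a quick TCP probe is no substitute for a proper external scan.

```python
import socket

def rdp_port_open(host, port=3389, timeout=2.0):
    """Return True if the host accepts TCP connections on the RDP port.
    A quick, illustrative exposure check only."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success, an error number otherwise
        return sock.connect_ex((host, port)) == 0

# Example: most workstations should return False unless RDP
# has been deliberately enabled and exposed.
print(rdp_port_open("127.0.0.1"))
```

Where remote desktop access is genuinely needed, placing it behind a VPN gateway with multi-factor authentication is the usual mitigation rather than exposing the port directly.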
Employers and their staff should be alert to these scams as well as all the usual pitfalls of day-to-day cyber security, ensuring staff continue to use two-factor authentication and strong passwords, for instance. Staff should be sent regular reminders of what to be alert to from a cyber security perspective, and what to do if they inadvertently click on a link in a phishing email or suspect they have been targeted by a cyber attack.
Mobile device users are more susceptible to phishing, social media attacks and spoofing – which attempts to mimic legitimate web pages – than desktop users, according to Verizon. It attributes this to the design of mobile devices and how users interact with them.
Relatively limited screen sizes restrict what can be accessed and viewed clearly and apps often restrict the availability of information, making it more difficult to check the veracity of emails and requests. Mobile devices are also often used when individuals are walking, talking and doing other activities that may distract them from being more vigilant.
The risks associated with mobile devices, including tablets and smartphones, are clearly more of an issue for organisations at times when large numbers of staff are working remotely.
Numerous softer factors, such as the natural desire to be helpful, come into play when it comes to end user risks. In many ways, cyber crime is an evolution of more traditional crimes, such as extortion, blackmail and fraud. Although the scale of the activity has grown, the same sort of techniques are used to defraud and manipulate.
From a psychological perspective, engaging in compassionate actions activates the parts of the brain associated with the reward system, with positive feelings then reinforcing altruistic behaviours; it is precisely this goodwill that attackers exploit. The problem is therefore not just people making mistakes, as is implied by the label ‘human error’. While large and sophisticated companies spend significant resources on developing technological barriers to cyber threats, are they doing enough from a human behavioural perspective?
Training and awareness are improving. Employees are less likely to fall for phishing attacks today than they were seven years ago, with click rates down to 3% from 24% according to Verizon research6. However, people are still clicking on links they should not, particularly as phishing emails become more convincing.
To a certain extent, human behaviours – such as clicking on links – become automated. If users are ‘rewarded’ by getting what they expect when they carry out such actions, such as gaining access to a webpage, these repeat behaviours become even harder to break because they are performed almost unconsciously. There are many behavioural reasons why users may not comply with cyber security best practice (see information box).
In psychology, procedural memory describes a type of implicit memory which stores information on how to perform certain functions, such as walking, talking and the acquisition of motor skills and habits. Through procedural memory, users can become habituated to clicking the ‘I accept’ button on, for instance, GDPR cookie warnings on websites, where it becomes second nature to click through.
Poor password hygiene is an issue that has not gone away, although best practice approaches have evolved. Simple passwords are typically the weakest link in otherwise secure networks.
Attackers often use password-cracking tools to circumvent an encrypted password and gain access to a user’s account. This is easier to do if the password is simple, such as ‘password’, ‘qwerty’ or ‘1234567’. These brute force methods were behind the 2014 iCloud breach where attackers gained access to a number of celebrities’ personal pictures.
Whereas the advice was once to change passwords regularly, today it is deemed more useful to have a strong password and to stick with it. The National Institute of Standards and Technology (NIST) recommends a minimum password length of eight characters while also encouraging much longer passwords made up of easy-to-remember words, or a passphrase. It advises against mandating excessive password complexity.
In 2017, when its guidelines were last revised, NIST also recommended removing mandatory resets that require users to change their passwords every few months. Password strength, it argued, should be about quality, not quantity.
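The NIST-style guidance described above can be sketched as a simple screening check: enforce a minimum length, allow long passphrases, and reject commonly breached values rather than imposing complexity rules. The word list here is a tiny illustrative subset; real deployments check against large breached-password corpora.

```python
# Illustrative subset only - production systems use breached-password lists
# with millions of entries.
COMMON_PASSWORDS = {"password", "qwerty", "1234567", "123456", "letmein"}

def is_acceptable(password):
    """Minimal sketch of NIST SP 800-63B-style password screening."""
    if len(password) < 8:
        return False   # below the recommended eight-character minimum
    if len(password) > 64:
        return False   # verifiers should accept at least 64 characters
    if password.lower() in COMMON_PASSWORDS:
        return False   # reject commonly breached values
    return True

print(is_acceptable("qwerty"))                        # too short, too common
print(is_acceptable("correct horse battery staple"))  # long passphrase: fine
```

Note what is absent: no forced mix of symbols and digits, and no expiry logic, in line with the guidance that length and novelty matter more than composition rules.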
(Source: UK Government Office for Science)
The idea behind social engineering is to exploit the potential victim’s natural tendencies and emotional reactions, such as an employee’s desire to help a colleague. A criminal might pose as a co-worker in order to gain trust while manufacturing pressurised situations – such as critical deadlines – so that individuals ‘act first and think later’.
Types of social engineering attack include baiting, phishing, email hacking, spamming and vishing (the same scam as phishing but carried out over the phone). Sextortion scams, where victims are told compromising footage or images will be released to friends and family unless they pay (or give access to information), are also rooted in social engineering.
Social engineering occurs both online and offline, with fraudsters making use of all manner of information – including data harvested through social media accounts – to make themselves appear genuine.
Antivirus software provider Norton7 publishes a set of tips to help users avoid falling victim to social engineering.
Understanding the different types of cyber attackers and their characteristics can help organisations prepare their workforce to better identify potential threats. From nation-state actors to socio-political ‘hacktivists’, different attackers are motivated in different ways. However, the fallout can be much broader than their actual target, as was the case in the NotPetya attack.
Cybercriminals are generally motivated by monetary gain and seek the easiest methods of extracting what they want. They tend to be part of structured organisations, are highly motivated, highly skilled and have considerable time and resources. They love playing mind games8. It is easier for them to pick up the phone and impersonate an official or technical support to get the information they need to breach corporate networks than it is to spend months trying to hack in using brute force.
The cyber kill chain is a series of steps typically taken by cybercriminals as they move from the early reconnaissance stages to the exfiltration of data. The human factor is a critical element in disrupting the kill chain as organisations seek to combat ransomware, security breaches and other attacks.
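The kill chain referenced above is most often described using the seven-stage Lockheed Martin model. As a sketch, the stages can be paired with human-factor controls; the control examples below are our own illustrations, not drawn from the report.

```python
# The seven stages of the widely cited Lockheed Martin Cyber Kill Chain,
# paired with illustrative human-factor controls (examples are assumptions,
# not the report's recommendations).
KILL_CHAIN = [
    ("Reconnaissance",        "limit personal data exposed on social media"),
    ("Weaponization",         "threat intelligence sharing"),
    ("Delivery",              "phishing awareness training and email filtering"),
    ("Exploitation",          "prompt patching and least-privilege accounts"),
    ("Installation",          "application allow-listing"),
    ("Command and Control",   "egress monitoring and anomaly alerts"),
    ("Actions on Objectives", "data-loss prevention and incident response drills"),
]

for stage, control in KILL_CHAIN:
    print(f"{stage}: {control}")
```

The point of mapping controls to stages is that breaking the chain at any single stage, often through an alert employee at the Delivery step, can defeat the whole attack.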
Addressing the cyber human factor is an essential part of an organisation’s overall approach to putting in place a robust cybersecurity framework. Success comes from taking small everyday actions, including those outlined below.
Employers need to consider the psychological levers that cybercriminals pull when they use human behaviours to trick employees into clicking on links and giving away passwords. Examples include goodwill, slips and lapses, being ‘in a rush’, being too trustful and the principle of reciprocity9. By understanding how these traits are exploited, it is possible to forewarn staff and implement security protocols – such as two-factor or multi-factor authentication – to override these tendencies.
In an ongoing battle against cyber threats, organisations need to let go of the term ‘human error’ and instead look at how they can empower their workforce to be more secure. As Usecure blogger Micke Ahola explains, ‘a lack of knowledge is almost never the fault of the user – but should be addressed by the organisation’10. The onus is therefore on employers to ensure end-users have the knowledge and skills they need to keep themselves and their businesses secure.
Employees will make better security choices if they receive regular training in the form of workshops, meetings, guest speakers, cross-functional teams, tests, internal resources and weekly updates.
Gamification is one strategy employers are using to better engage staff in cyber security training. This is where executives participate in ‘wargaming’ simulations as a way of improving awareness and developing good habits in a fun way that promotes learning.
Another tactic is to put staff through their paces by subjecting them to regular ethical hacking attacks. Just as fire drills ensure everyone remembers the correct evacuation procedure in the case of a fire, simulated phishing attacks can test staff’s ability to spot suspicious emails and keep them safe in the virtual environment. Allowing employees to experience a (simulated) phishing scam has been shown to increase risk perception, encouraging them to apply a higher degree of suspicion to future emails.
Behavioural science is increasingly being used to better understand the human factor in relation to cyber risk. The MINDSPACE framework has been successfully used in a number of settings, including financial decision-making, healthy eating and sustainability, to help design behaviour change interventions in public policy. Such an approach offers ways to subconsciously influence users to make the right choices, for instance by actively interrupting the user so that insecure sites are more noticeable.
Good security hygiene is also an essential part of the toolkit. Training users to play their role in securing critical information assets is one aspect of the approach organisations need to take. Another is protecting more vulnerable users. This can be achieved through a strong security program which emphasises firewalls, anti-virus software, patching, strong passwords and other basic security solutions.
Security policies and procedures are a big part of a resilient defence. Multi-factor authentication, ‘buddy systems’ and managing user privileges make it less likely that mistakes will be made, can minimise insider threats and limit overall fallout if and when accounts are compromised.
Multi-factor authentication best practice typically combines two or more independent credentials: something the user knows (a password), something they have (a token or phone) and something they are (a biometric).
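To illustrate how the ‘something you have’ factor works in practice, the time-based one-time password (TOTP) scheme standardised in RFC 6238 derives a short code from a shared secret and the current time, so a stolen password alone is not enough. This is a minimal stdlib sketch of the algorithm, verified against the RFC's published test vector.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((now if now is not None else time.time()) // timestep)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", time = 59s
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, now=59))  # -> 287082
```

Because the code changes every 30 seconds, a phished password expires almost immediately, which is one reason authenticator apps are preferred over static credentials.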
The UK’s National Cyber Security Centre warns against giving users unnecessary data access rights and recommends that granting of highly elevated system privileges should be carefully controlled and managed11.
“Highly privileged administrative accounts should not be used for high risk or day to day user activities, for example web browsing and email,” it says. “Monitor user activity, particularly access to sensitive information... respond where activities are outside of normal, expected bounds.”
There are roles for AI and machine learning, with nearly a quarter of organisations already using security technologies that “augment or replace human intervention in the identification and containment of cyber exploits or breaches”, according to research by Ponemon Institute. It found that high automation organisations are better able to prevent security incidents and disruption to IT and business processes.
In order to deliver a robust cyber security program, it is important for different departments to collaborate in order to protect staff from cyber threats. Cyber security may traditionally have been seen as a job for IT departments and CISOs, but as threats change it has become a strategic, company-wide challenge involving senior management, HR and risk and insurance professionals among others.
“We are seeing cybercriminals moving towards softer targets and carrying out attacks that take advantage of human behaviour and how the brain works,” says Mark Camillo, Head of Cyber, EMEA at AIG. “Organised criminal gangs are using tactics that allow them to make the most money easily and right now that is business email compromise. Clearly, a more holistic approach is needed to understand how fraudsters are stealing and diverting company funds in this way.”
Managing internal threats – such as training staff to identify phishing emails, improve password hygiene and protect the network against unsecured devices – takes up time and resources. By putting users at the heart of their cyber security strategy and gaining buy-in and participation from the various lines of business, cyber security departments can free up time to focus on managing external threats and take a more strategic approach to their organisation’s overall cyber security.
There is a recognition that the IT systems of even the most sophisticated organisations can be breached. Too often, it is not their technology that fails them, but the frameworks and systems (or lack thereof) that have been put in place to protect their people.
Humans naturally want to work at speed – to please their line manager and co-workers and to get things done. This leads to a tendency to overlook security processes, particularly those measures that appear to go against productivity, workplace satisfaction and convenience. However, a balance needs to be maintained if companies are to succeed in protecting their staff.
It is time to move beyond the simple catch-all phrase of ‘human error’ and for organisations to understand and address the critical vulnerabilities faced by end users. This means reducing the opportunity, improving company culture and empowering employees with better knowledge and training.
As the cyber threat landscape grows and evolves, the most resilient organisations will be those that tackle the threat on both a technological and behavioural level, working collaboratively across organisations, with buy-in at every level. The tone of any organisation’s cyber security culture should be set at the top with Boards taking an active role in how they are addressing the cyber human factor.
For more information on how Boards need to manage their cyber risk please see Cyber Risk Oversight 2020: Key Priorities and Practical Guidance for Corporate Boards from the ISA (Internet Security Alliance) and ecoDa (the European Confederation of Directors’ Associations, representing the main national institutes of directors in Europe).