In this Aug. 16, 2016 file photo, a worker is silhouetted against a computer display showing a live visualization of the online phishing and fraudulent phone calls across China during the 4th China Internet Security Conference (ISC) in Beijing. Chinese electronics maker Hangzhou Xiongmai Technology has issued a recall on Monday, Oct. 24, 2016, for millions of products sold in the U.S. following a devastating cyberattack, but has lashed out at critics who say its devices were at fault.
Ng Han Guan
The human element of cybersecurity
By Ben Frederick
If humans aren’t included in the design of new technologies, and analysts aren’t trained to work together, why do we keep blaming users for our cybersecurity woes?
If you’re inclined to think of cybersecurity as lending itself to clean, elegant, better-than-human, extremely secure solutions, you probably don’t work in the field.
But many in information security hold the bias that humans, not hackers, shoddy software, or poorly built devices, are the source of the vast majority of our digital vulnerabilities. Why expend the time and energy to hack into a heavily guarded system, security experts might ask, when you can simply trick a user into clicking a malware-laden link?
If businesses didn’t have to deal with the “end user” (that is, you and me), this reasoning goes, all our problems would be solved.
A quiet bias against users runs through nearly every conversation about cybersecurity. Unfortunately, that bias means humans have become an afterthought in the design of our technology. Unraveling this is part of what makes cybersecurity a “wicked problem”: one that resists resolution and can’t be solved without a multidisciplinary approach.
Just because a problem resists resolution doesn’t mean that it can’t be broken down into smaller parts in order to make progress on the whole.
That’s where the work of Nancy Cooke, a professor of human systems engineering at Arizona State University, comes in. By remembering people in the design of our technology and by training cybersecurity analysts to work together, we can put people back into our digital security tools and make the world safer.
The ghost in the machine
As we develop new technology, we often begin with a novel technological approach instead of a focus on who is supposed to be using our new tool.
“It’s a mess of technology that’s out there with good intentions, but doesn’t work well with humans,” says Dr. Cooke.
Cooke advocates that developers include users in the planning stages as they design and build new security technologies.
Her colleague, Jamie Winterton, director of strategy at ASU’s Global Strategy Initiative, concurs.
At first, engineers think “‘This would be a really great technological solution to this problem’ and then rush forward and build it. Then we say, ‘Now, how do we make it secure?’ But it’s a lot harder to secure something after you’ve already built it than if you start to think about security and the way that real people are going to use the technology in the design process,” says Ms. Winterton.
The human element that does seep into our current development process? Implicit biases. Machine-learning algorithms, for example, can reflect the biases of the engineers who write them or biases within the training data fed to them.
“We like to romanticize this idea of machines being pure and perfect, but we are in the machines because we made them,” says Winterton.
Lone wolves need packs
In defending against digital threats, too, thinking about how to make technology and humans work well together would do immense good. Many cyber analysts work alone and are not incentivized to work in teams, says Cooke. That is a weakness: according to her research, lone wolves fare far worse on complex problems than teams with varied backgrounds.
And even when there are teams, as in the capture-the-flag competitions that have become popular with public and private cybersecurity recruiters, Cooke says the work that students and competitors do is closer to individual work than to teamwork.
The distinction is that sitting close to coworkers does not guarantee communication among members, while a true team approaches a problem more deliberately, assigning tasks based on individual strengths and clearly defining roles in order to collaborate effectively.
“It’s a much tighter kind of collaboration,” says Cooke. “You can put people together and you will get a group, but it doesn’t necessarily make them a good team.”
Some of the “lone wolf” mentality in cybersecurity is cultural: the mythos of the individual who can hack through difficult challenges alone is the stuff of hacker lore. Some of it is self-selection: the kinds of people who enjoy long hours of coding tend to be less team-oriented, says Cooke.
Cybersecurity itself is a thorny, knotted issue, but both Cooke and Winterton say that including the end user in the design process from the beginning is one important way to catch or eliminate the flaws that surface once a device or piece of software reaches a user’s hands. Other remedies include incentivizing collaboration and better training for analysts.
“Broad groups of stakeholders should be engaged to make sure systems are secure, fair, and as useful as they can be,” says Winterton.
Incorporating the human element will reduce the risk inherent in cybersecurity systems and build a stronger framework for the development of new technology in the future.
source: The Christian Science Monitor