Dangers of a False Narrative

The information security field has long faced an image problem: the gulf between the reality of the field and the majority's perception of it keeps widening. This is a serious issue, because it fosters a false idea of what InfoSec is, what it does, and what it can and cannot do.

To begin with, popular culture strongly influences public perception and ideas. This much is obvious from the success of marketing campaigns, memes, and a host of other readily available data. When it comes to InfoSec, popular culture still presents the field as some variation of the 1995 film Hackers – flying fireballs of data and all. Other common tropes include furious keyboard-mashing as hacking, and password cracking as manually guessing inputs. Then there is the atrocious 2001 Swordfish, with its standard trope – "it can't be done!" / "you have 60 seconds" / "oh, OK then… hacked!" – which makes hackers into something of a cross between ninjas, geeks, and James Bond. While laughable, the problems engendered by this widespread misconception are less than funny. Following the 2007 Live Free or Die Hard, for example, major news outlets convened "expert" panels on whether such a cyber-attack on the US was possible. As a society, we do not generally hold televised expert panels to discuss the possibility of an Autobot attack, so there is something to be said about the state of general education surrounding InfoSec.

The false perception matters because it also shapes the views of lawmakers. When those tasked with making decisions that regulate the field and protect our country are dangerously misinformed about reality, bad things happen. Case in point: a majority of Americans agree with torture for information-gathering, and cite popular-culture sources like 24. More worrisome, the late Supreme Court Justice Antonin Scalia also cited 24 as possible justification for using torture. Once ideas make their way into the public consciousness, they are nearly impossible to dislodge. Thus, even in the wake of the 2014 Senate Intelligence Committee Report on Torture, which found torture not to be an effective or reliable way of extracting information (and found that the cases cited as effective collapsed under basic research), many elected officials remain convinced of torture's efficacy. If the government's own torture report fails to connect with lawmakers, on what grounds do we believe that an NSA report on information security will? We can begin to see some of the dangers we are headed for, given the divergence between public perception and the realities of information security.

Common misconceptions then get used as an apparently solid basis for further arguments. For example, Brian Orend's The Morality of War (2nd ed., 2013) dedicates half a chapter to cyber warfare. In that section, Orend confidently asserts:

Experts in the field talk repeatedly of "the attribution problem," noting how cyber-attacks – especially those suspected to be linked, in some way, with China – go out of their way to hide their tracks and conceal the ultimate source of the strike… I would want to point out, as mentioned above, that eventually – and rather quickly, actually – the cyber-community seems to have been able to come up with pretty reliable attributions thus far. Is cyber-strike attribution really so different from, and so much more difficult than, say, the investigations which went into determining who was responsible for the 9/11 attacks (i.e. al-Qaeda), and how the then-government of Afghanistan was complicit in them as well?[i]

Orend uses that premise to argue that cyber-attack reprisals against states are permitted (of the kind that cause serious damage to the state and its population),[ii] and that a case can also be made for physical military retaliation.[iii] Let's say that one more time: on the assumption that the attribution problem is not a real thing, crippling a state, or firing Tomahawk missiles and putting boots on the ground in a foreign nation, can be a legitimate response to a government getting hacked. Orend is not some quack pitching conspiracy theories on Reddit or 4chan. He is one of the most highly regarded academics in the field of war ethics; he has authored half a dozen books and countless articles, is invited to speak to influential audiences, and his work is likely to influence public policy. The public misconception is not a laughing matter, not when it risks starting actual, physical wars.

With the general misconception of what information security is comes the misconception of what information security does. Here, the general ignorance of the population can be (and often is) exploited to sell magic-boxes promising to solve all our problems – usually following a presentation designed to convince the audience that the cyber apocalypse is just around the corner. These magic-boxes are security services – whether access to security databases or some other "gadget" – whose mere possession supposedly makes us safe from the entire array of attacks. The snake-oil pitch is neither new nor unique, and generally does little more than prey on the gullibility and ignorance of the audience. This is not, in and of itself, a grave danger: it exposes some subsection of the population to an additional threat of exploitation, but does not cause any direct harm (in fact, it provides some small measure of protection, though nothing like the promised level of the magic-box). The real threat of magic-box thinking emerges once we consider the long-term implications of the approach. Here, we will consider only the two most obvious ones.

First, by believing in the magic-box, we come to rely on its purported protection; believing ourselves secure, we consider our security concerns handled. After all, that is what the magic-box is for. However, the magic-box does not actually offer the kind of protection it claims, leaving us vulnerable to a whole host of threats that we no longer have an incentive to seek protection from.

Second, magic-box reliance leads to a failure to invest in people: their education, development, employment, and pay. Besides the simple perception of the magic-box as superior – very few people will make the kinds of claims about themselves that are made of magic-boxes – there is also the problem of cost. Magic-box costs start at some $100,000. Decent security costs start at $100,000 – per team member. The financial preference is obvious. Given that only people can protect us, and that it takes the best and brightest to do so successfully, magic-box thinking disincentivizes human investment and increases risk exposure. In the long term, this leads to a situation where we are no longer capable of producing the next generation of InfoSec elites and must instead import them from abroad. That strategy is losing viability with the rapid rise of places like Russia, China, and India – the countries that long supplied those foreign employees, but which are now increasingly able to retain a larger portion of their expert populations.

To summarize, the false perception of the InfoSec field creates a false narrative that feeds through popular culture all the way up to lawmakers and company executives. This misconception lays the groundwork for very serious long-term problems. The three noted issues were: A) building arguments from misconceptions, up to and including war; B) failing to recognize and act against threats because of magic-box reliance; and C) failing to invest in InfoSec infrastructure and development through human investment, on the belief that the more affordable magic-box will save the day. Unless these dangers, and their gravity, are recognized and addressed, we run an increasing risk of serious problems down the road.


[i] Orend, Brian. The Morality of War. 2nd ed. New York: Broadview Press, 2013. p. 174.

[ii] Ibid., pp. 176–77.

[iii] Ibid.