Current public analysis of information operations focuses on “hack and leak” operations and “inauthentic” behavior on social media. This level of analysis ignores the more pernicious strategic information operations that run across longer time spans. Case in point: we came across one of Dan Nowak’s 2013 keynotes, in which he touched on IceFog, a Chinese APT.
Amongst other targets, IceFog leveraged software implants inside news organizations to shape narratives and influence markets. Today we would call these acts information operations, a term that wasn’t mainstream in 2013, even if it was well understood in intelligence circles. This older campaign was already more sophisticated than the straightforward cyber-facilitated “hack and leak” operations that the public, and therefore our policymakers, still focus on.
This does not bode well for our defensive, offensive or analytical capability moving forward. Moreover, just about any offensive team can conduct “hack and leak” operations, but the enduring art of IO reflects a subtler hand.
To quote Jacques Ellul, one of the great thinkers in the space:
“…for propaganda is not the touch of the magic wand. It is based on slow, constant impregnation.”
Hacking vs IO: Mature IO campaigns can effect change without targets realizing their perceptions are being re-sculpted into another image. In simpler terms, a technical hack is akin to waving a magic wand: a technical effect occurs. In the case of IceFog, the effects promulgated by the hack are the long-term manipulations of media narratives delivered via implant. These are the slow, constant impregnations disseminated over time. This is no longer the domain of the #CyberCyber; this is a much more challenging issue.
For something a little more tangible, let’s look at vehicular hacking. There’s the direct offensive cyber way of doing it, like Charlie Miller and Chris Valasek’s epic Jeep hack. While dynamic and exciting, this approach tends to leave forensic traces somewhere down the road.
Then there’s another approach to targeting autonomous vehicles: via their sensor systems. Here is an example of research that focuses on manipulating the environment in such a way as to influence a vehicle into doing something the attacker wants (PDF here). There are no direct commands, and there is no active presence on the target system.
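The mechanics can be sketched in miniature. The toy below is hypothetical and not taken from the linked research: a linear “sign detector” whose decision is flipped by a small, crafted perturbation to its sensor input, with no commands sent and no presence on the target system. All weights and pixel values are invented for illustration.

```python
# Toy sketch (hypothetical numbers): a tiny crafted change to sensor
# input flips a classifier's decision without touching the system itself.

def classify(pixels, weights, bias):
    """Linear 'sign detector': positive score means 'stop sign seen'."""
    score = sum(p * w for p, w in zip(pixels, weights)) + bias
    return score > 0

def adversarial_nudge(pixels, weights, eps):
    """FGSM-style step: shift each input slightly against its weight."""
    return [p - eps * (1 if w > 0 else -1) for p, w in zip(pixels, weights)]

weights = [0.9, -0.2, 0.7, 0.4]   # hypothetical trained weights
bias = -1.0
image = [1.0, 0.1, 0.8, 0.6]      # a clean 'stop sign' sensor reading

tampered = adversarial_nudge(image, weights, eps=0.35)

print(classify(image, weights, bias))     # True  — detector fires on the clean input
print(classify(tampered, weights, bias))  # False — small perturbation suppresses it
```

The point of the sketch: the attacker never touches the vehicle, only the input the vehicle perceives, which is exactly why the forensic trail is so thin.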
Does the reader recall Gibson’s Zero History and the “ugliest t-shirt in the world”? A shirt so ugly “that digital cameras forget they’ve seen it.” In this case, the ugliest t-shirt was effectively blacklisted by the image-recognition systems tied to a global surveillance apparatus. The end result was that the surveillance systems did work as intended, but were intentionally blinded by their training sets. These techniques fall into an ephemeral threat space, something akin to the long-term poisoning of machine learning data sets, possibly even infiltrating communities to reshape discourse.
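A minimal sketch of that kind of training-set poisoning, with invented data and labels (an illustration, not the mechanism from the novel or any real surveillance system): a handful of mislabeled samples seeded into the training pipeline teach a nearest-neighbour classifier to wave a specific pattern through as harmless.

```python
# Toy sketch (hypothetical data): poisoned training samples blind a
# simple classifier to one target pattern, while everything else works.

def predict(training_set, sample):
    """1-nearest-neighbour: return the label of the closest example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_set, key=lambda item: dist(item[0], sample))[1]

# Clean training data: feature vectors with honest labels.
clean = [([0.9, 0.8], "flag"), ([0.8, 0.9], "flag"),
         ([0.1, 0.2], "ignore"), ([0.2, 0.1], "ignore")]

# Poison: samples that look like the target pattern, mislabeled "ignore",
# slipped into the pipeline slowly over time.
poison = [([0.87, 0.85], "ignore"), ([0.9, 0.88], "ignore")]

ugly_shirt = [0.88, 0.86]
print(predict(clean, ugly_shirt))           # 'flag'   — the clean model spots it
print(predict(clean + poison, ugly_shirt))  # 'ignore' — the poisoned model is blind
```

Note the defining property: the deployed system is untouched and fully functional; only its learned notion of what matters has been bent, slowly and constantly.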
These types of attacks do not fit in with how we as an industry (or society) think about cyber, social engineering or electronic warfare. There is no implant, there is no phishing email, and there are no RF-based jammers or injectors. This is the manipulation of AI and machine learning backends, where the end result is the reshaping of the kinetic world.
We are entering a time when our 2nd Gen warfare cyber TTPs are going to fall disastrously short against our 5th Gen adversaries. For a brief diatribe on the topic, watch this section of the same 2013 briefing.
Articulating influence is a perpetually moving target. No amount of regulation and hand-wringing will fundamentally defend against these types of offensive operations. Reshaping our society to match our adversaries would be a true win for them, and a strategic loss for us. In IO, offense always wins, while deterrence and ‘losing elegantly’ are key for defense.
Written by: Roel & Dan