It’s been more than a decade since security researchers in Belarus first identified a virus that would come to be known as Stuxnet, a sophisticated cyber weapon used in a multi-campaign attack targeting a uranium enrichment facility in Natanz, Iran. Now, fresh infrastructure attacks in the volatile region are renewing the discussion about Stuxnet, its origins, its methods, and its contributions to the current compendium of ICS defenses.
Last month, Iranian authorities revealed a catastrophic explosion and power outage at the same Natanz facility, once again aimed at disrupting nuclear production processes there. Israeli media reported that Mossad, the Israeli spy agency, was involved in the most recent attack, part of that country’s ongoing shadow war with Iran. The latest salvo (while likely not cyber-related) drew immediate parallels to Stuxnet, with Iranian officials saying their nation had once again been subject to “an act of nuclear terrorism” and calling on the international community to confront the threat.
What did Stuxnet do?
First unleashed in 2009, the Stuxnet virus had multiple components, including aggressive malware tuned to find and corrupt processes run by Siemens STEP7-based PLCs. Its objective was to stealthily manipulate the speed of the sensitive enrichment centrifuges, causing attrition rather than blatant physical destruction. The Stuxnet worm reportedly infected more than 200,000 machines in 14 Iranian facilities and may have ruined up to 10% of the 9,000 centrifuges at Natanz.
A second Stuxnet variant, released several months after the first, contained multiple Windows zero-day exploits, used stolen certificates, and abused known simulation functionality in the Siemens PLCs. This more aggressive variant found its way into non-Iranian environments but, thankfully, did not result in much damage.
From a historical perspective, the Stuxnet worm signaled that well-equipped, nation-state-sponsored actors possessed advanced capabilities that would set the stage for more serious cyber-physical attacks such as those in Ukraine, Estonia, and Saudi Arabia.
In the real world, advanced nation-state attacks are rare compared to common, opportunistic disruptions caused by things like ransomware. But Stuxnet demonstrates the importance of a well-engineered environment complete with adequate ICS cybersecurity. Such an environment requires a thorough understanding of asset inventory and security posture, Windows system hardening, network segmentation and monitoring, isolated process monitoring, adequate process instrumentation, supply-chain and third-party risk management, properly trained operators, and decent operational security (OPSEC).
How Stuxnet works: The air gap myth
Back in 2010, Iran’s Natanz nuclear facility, like many others before and since, relied on the concept of non-connected and isolated networks as a form of cyber security. Proponents of this approach — dubbed an air gap because it implies physical space between the organization’s networked assets and the outside world — believe it provides sufficient protection for facilities that don’t require Internet access or ubiquitous IT/enterprise services.
They’re wrong.
Relying on air gaps as a single form of defense remains but one in a list of unfortunate fallacies used to justify a lackadaisical approach to ICS security. Others include oft-debunked beliefs like:
Attackers lack sufficient knowledge and incentive to target ICS and SCADA systems.
Cyber security is important mostly for IT and enterprise systems.
Proven security strategies don’t apply to the majority of operational technology systems because the risk of disruption is too high in OT.
Events such as those at Natanz demonstrate that once an ICS perimeter, even an air-gapped one, is breached (cue the Maginot Line), attackers enjoy nearly free rein within such soft environments.
While not much is publicly known about how Stuxnet and its variants made their way into the facilities at Natanz, it’s widely speculated that the malware entered on infected removable media such as a USB stick, on a laptop used by a contractor or outside vendor, or concealed in an infected file such as a corrupted .pdf version of a technical manual.
These well-understood attack vectors are a known risk to almost any facility and, in themselves, are not overly sophisticated. Transient assets such as technicians’ laptops, third parties coming onsite, infected installers, and auto-play exploits on removable media are hardly novel. The salient point in the Stuxnet case is that a determined actor managed to infiltrate a purportedly secure facility, delivering malware that ultimately found its designated target.
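One illustration of the kind of control that blunts these vectors: audit any removable drive before its contents go anywhere near an engineering workstation. The Python sketch below is minimal and makes several assumptions; the allowlist digests, the mount-point argument, and the choice to flag .lnk shortcuts and autorun.inf outright are placeholders, not a prescribed tool.

```python
# Minimal removable-media audit sketch (paths and allowlist are hypothetical).
# Hashes every file on a mounted drive, flags anything not on a pre-approved
# allowlist, and blocks auto-play/shortcut files outright.
import hashlib
import sys
from pathlib import Path

ALLOWLIST = {
    # SHA-256 digests of approved files, e.g. vendor installers (placeholder value)
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
SUSPECT_NAMES = {"autorun.inf"}
SUSPECT_SUFFIXES = {".lnk"}

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(mount_point: str) -> int:
    findings = 0
    for path in Path(mount_point).rglob("*"):
        if not path.is_file():
            continue
        if path.name.lower() in SUSPECT_NAMES or path.suffix.lower() in SUSPECT_SUFFIXES:
            print(f"BLOCK  {path}  (auto-play/shortcut file on removable media)")
            findings += 1
        elif sha256(path) not in ALLOWLIST:
            print(f"REVIEW {path}  (not on approved-file allowlist)")
            findings += 1
    return findings

if __name__ == "__main__":
    sys.exit(1 if audit(sys.argv[1]) else 0)
```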
How did Stuxnet spread?
Stuxnet came in two waves. Less is known about the first, a quieter, slow-burn operation that was correspondingly harder to discover. The second wave was the one that made international headlines with its more demonstrative and decidedly less surgical approach.
This second Stuxnet variant likely did not propagate from an initial infection on a susceptible PLC or controller; rather, it gained a foothold on a commodity Windows system through zero-day exploits. From that initial host, the malware moved laterally from one Windows box to another across the unsegmented network.
In particular, the second Stuxnet wave:
Noisily pivoted through the environment via Windows Remote Procedure Calls (RPC), Server Message Block (SMB), and MS SQL protocols.
Leveraged a Windows shortcut (.lnk) zero-day vulnerability that triggered when icons were rendered, working even where USB and removable-media auto-play had been disabled.
Used stolen code-signing certificates to make its malicious payloads appear to be legitimate drivers that were trusted by the operating system and ignored by anti-virus and policy enforcement controls.
Exploited vulnerabilities in the Windows print spooler service (usually enabled by default) to reach hosts that could not be infected via RPC/SMB/SQL or USB insertion, escalating privileges along the way.
Could leverage the OPC protocol to traverse network segments.
Infected Siemens STEP7 project files, replaced legitimate STEP7 DLLs with modified malicious ones (a defensive sketch for catching this swap follows this list), and used hard-coded credentials to log in to the Siemens WinCC SCADA database to identify specific targets.
Remained dormant if its targeting criteria were not met.
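The DLL swap in particular is detectable with plain host-integrity checking. The minimal Python sketch below compares critical STEP7 files against a known-good hash baseline; the baseline file name and format are assumptions, and s7otbxdx.dll is the library that public analyses report Stuxnet replaced.

```python
# Host-integrity sketch: compare critical STEP7 files against a known-good
# hash baseline. File names and the baseline path are illustrative; public
# analyses report Stuxnet swapped in a malicious s7otbxdx.dll.
import hashlib
import json
from pathlib import Path

# Hypothetical baseline: {"C:\\Program Files\\Siemens\\...\\s7otbxdx.dll": "<sha256>", ...}
BASELINE_FILE = Path("step7_baseline.json")

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_baseline() -> list[str]:
    baseline = json.loads(BASELINE_FILE.read_text())
    alerts = []
    for file_path, expected in baseline.items():
        p = Path(file_path)
        if not p.exists():
            alerts.append(f"MISSING  {p}")
        elif sha256(p) != expected:
            alerts.append(f"MODIFIED {p}  (hash differs from known-good baseline)")
    return alerts

if __name__ == "__main__":
    for alert in check_baseline():
        print(alert)
```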
Once inside the facility, the malware found its targets presumably by identifying matching project files, identifiers, strings, or some other criteria. It then inserted malicious STEP7 logic into the PLCs to quietly increase failure rates of the operating centrifuges. The malware also contained command-and-control capabilities designed to deliver updates and manage exfiltrated data, though using such noisy functionality almost certainly would have tipped off the victims to the attack in progress.
How to prevent Stuxnet
It’s unlikely Stuxnet could have been entirely averted given the skill and motivation of the parties responsible. Let’s face it: if you attract the ire of highly skilled and well-funded nation-state attackers, not much can be done to avoid compromise. Still, there are lessons to be learned from the Stuxnet scenario. ICS/OT defenders today can glean prescriptive insights on what didn’t work, including:
Reliance on an air gap as a stand-alone security measure was an absolute failure.
Traditional anti-virus would not have found this type of malware.
An updated host OS might have blunted some of the exploits, but even that is unlikely to have stopped attackers of this skill level.
So, what would have helped?
Application whitelisting and host integrity checking (along the lines of the hash-baseline sketch above) probably would have detected the replaced STEP7 DLLs and altered project files.
Strict removable media policies and enforcement (potentially even hot glue) could have prevented an initial infection or, at least, made it much harder.
Sufficient host hardening, including disabling unnecessary services like the Windows print spooler, would have made lateral movement more difficult (a service-audit sketch follows this list).
Sufficient network segmentation might have stopped the attackers from pivoting across the environment, while better monitoring might have alerted defenders to anomalous traffic (see the flow-record sketch after this list).
Diligent application of security policy could have isolated and contained the malware as it beaconed across network zones and layers where it did not belong.
Most importantly, better-trained personnel and appropriate out-of-band (OOB) monitoring for anomalies within the centrifuge halls could have contained the damage early in the attack (the final sketch below shows the idea).
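To make the hardening point concrete, below is a small service-audit sketch in Python. It assumes a Windows host with the third-party psutil package installed, and the set of services treated as unnecessary (the print spooler among them) is an illustrative starting point to be tuned per site, not an authoritative baseline.

```python
# Hardening-audit sketch (Windows-only; psutil is a third-party dependency).
# Lists services a typical ICS host rarely needs, such as the print spooler
# Stuxnet abused. The UNNEEDED set is an assumption; tune it per site.
import psutil

UNNEEDED = {"spooler", "remoteregistry", "browser"}  # lowercase service names

def audit_services() -> None:
    for svc in psutil.win_service_iter():
        if svc.name().lower() in UNNEEDED and svc.status() == "running":
            print(f"HARDEN: service '{svc.name()}' is running "
                  f"(start type: {svc.start_type()}); consider disabling")

if __name__ == "__main__":
    audit_services()
```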
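For segmentation and monitoring, even a first approximation helps: scan exported flow records for the lateral-movement protocols Stuxnet favored (RPC, SMB, MS SQL) crossing zone boundaries. The CSV column names and zone labels in this sketch are assumptions about how such records might be exported, not a standard format.

```python
# Segmentation-monitoring sketch over exported flow records (CSV layout and
# zone names are assumptions). Flags RPC/SMB/MS SQL flows crossing zone
# boundaries, exactly the kind of east-west traffic Stuxnet generated.
import csv
from collections import Counter

LATERAL_PORTS = {135: "RPC", 445: "SMB", 1433: "MS SQL"}

def flag_cross_zone(flow_csv: str) -> Counter:
    hits = Counter()
    with open(flow_csv, newline="") as f:
        # Assumed columns: src_ip, src_zone, dst_ip, dst_zone, dst_port
        for row in csv.DictReader(f):
            port = int(row["dst_port"])
            if port in LATERAL_PORTS and row["src_zone"] != row["dst_zone"]:
                proto = LATERAL_PORTS[port]
                print(f"ALERT {proto}: {row['src_ip']} ({row['src_zone']}) -> "
                      f"{row['dst_ip']} ({row['dst_zone']})")
                hits[(row["src_zone"], row["dst_zone"], proto)] += 1
    return hits

if __name__ == "__main__":
    flag_cross_zone("flows.csv")
```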
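And for out-of-band monitoring, the core idea is to cross-check what the control system reports against an independent measurement the PLC cannot see or tamper with. This sketch is a bare-bones illustration: the function name and the 2% tolerance are assumptions, while the example frequencies reflect public analyses reporting that Stuxnet drove centrifuges from a nominal ~1,064 Hz toward ~1,410 Hz while replaying normal-looking telemetry to operators.

```python
# Out-of-band monitoring sketch: compare the speed the control system reports
# against an independent sensor wired outside the PLC's view. The tolerance
# and frequencies are illustrative values, not site-specific settings.
def check_centrifuge(reported_hz: float, measured_hz: float,
                     tolerance: float = 0.02) -> bool:
    """Return True if the two readings agree within tolerance."""
    if measured_hz == 0:
        return reported_hz == 0
    deviation = abs(reported_hz - measured_hz) / measured_hz
    if deviation > tolerance:
        print(f"ANOMALY: control system reports {reported_hz:.1f} Hz, "
              f"independent sensor reads {measured_hz:.1f} Hz "
              f"({deviation:.1%} apart)")
        return False
    return True

# A replayed "all normal" value alongside a real over-speed trips the check:
check_centrifuge(reported_hz=1064.0, measured_hz=1410.0)
```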
The fear with the release of a capability like Stuxnet is that other nations will be emboldened to try similar ICS attacks. So far, there have been few real assaults directly targeting ICS; most damage has been collateral from compromised enterprise/IT systems and would have been blunted by adequate application of cybersecurity basics and well-thought-out engineering principles.
Time will tell if Stuxnet remains an outlier — an example of an exotic ICS attack limited to the rarified air of nation-states in conflict. In the meantime, the lessons Stuxnet continues to demonstrate about fundamental ICS security controls remain relevant to defenders of every size and stripe.