A blog about security topics in the small intersection between IT and OT.

In the annals of Cold War history, the events of September 26, 1983, play a significant part. Lieutenant Colonel Stanislav Petrov was on duty at the command center of the Soviet Union’s early-warning system for detecting ballistic missile launches. The system, known as Oko, reported the launch of one intercontinental ballistic missile from the United States, followed by four more.

The political climate at the time was tense. The Soviet Union had deployed fourteen SS-20/RSD-10 theatre nuclear missiles, leading to the NATO Double-Track Decision in December 1979. This decision involved the deployment of 108 Pershing II nuclear missiles in Western Europe, capable of hitting targets in eastern Ukraine, Belarus, or Lithuania within 10 minutes. The United States also began psychological operations designed to test Soviet radar vulnerability and demonstrate US nuclear capabilities.

These included clandestine naval operations in the Barents, Norwegian, Black, and Baltic Seas, as well as flights by American bombers directly toward Soviet airspace. These actions significantly strained relations between the United States and the Soviet Union.

On the night of the incident, Petrov was faced with a decision that could potentially start a full-scale nuclear war. The system was indicating the highest level of reliability for the alert. Despite this, Petrov had doubts. He was aware that every second of hesitation took away valuable time; the Soviet Union’s military and political leadership needed to be informed without delay. Yet he found himself unable to move, feeling like he was sitting on a hot frying pan.

Why the false positive is bad

When an incident occurs, a million and one questions pop up in your head, and one will most certainly be: is this for real, or is it a false positive? And if it is, why is that so bad? Consider the following:

  1. Resource Drain. If your security work is anything like mine, jumping on a false positive stretches your resources to the maximum. Responding to false positives is inefficient and increases your operational costs.
  2. Desensitization. If your system, like the boy in Aesop’s fable, cries wolf all the time, your team might stop responding, or a real threat will be missed because you are bogged down with false positives.
  3. Disruption. A false positive that triggers automated defensive actions can lead to unnecessary downtime of normal operations.
  4. Loss of Confidence. Repeated false positives can erode confidence in the security system, leading to reduced effectiveness and potential vulnerabilities.

If false positives are this detrimental to your security posture, there must be things you can do about them. There are plenty, and I list and discuss some of them below. But first I would like to stress one thing: setting up and optimizing your security systems and response capabilities takes time and resources, sometimes a lot of both, and the work never ends. There is no box you can buy that will keep you secure forever.

What to do?

With that said, let us look at a few things you can do to optimize your systems so they produce fewer false positives without starting to miss real events.
  1. Tuning Security Systems: Adjust the sensitivity of your intrusion detection systems to reduce the number of alerts, which helps decrease the number of false positives (see the first sketch after this list).
  2. Whitelisting: Identify safe entities such as IP addresses, URLs, or applications known to be secure and add them to a whitelist so they won’t trigger alerts (second sketch below).
  3. Correlation Analysis: Use analytics to correlate events and filter out noise; several weak signals that agree are much stronger evidence than one alert on its own (third sketch below).
  4. Threat Intelligence Feeds: Use threat intelligence feeds to stay informed about the latest threats. Known-bad indicators make it easier to separate actual attacks from noise.
  5. Regular Updates and Patches: Keep all systems, especially security systems, up to date with the latest patches. This reduces false positives triggered by outdated detection signatures.
  6. Staff Training: Train staff to understand the security systems and the nature of the threats, so they can make better decisions when alerts are triggered.
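
To make the tuning point concrete, here is a minimal sketch of a threshold filter in Python. It mimics the "raise only after N hits in a window" style of tuning that most IDS products offer. The alert format (dicts with src_ip, signature, and ts fields) and the threshold values are assumptions for illustration, not any particular product’s API.

```python
from collections import defaultdict, deque
from datetime import timedelta

# Illustrative thresholds: a signature must fire COUNT times from the same
# source inside WINDOW before we raise it to an analyst.
COUNT = 5
WINDOW = timedelta(minutes=10)

def threshold_filter(alerts):
    """alerts: iterable of dicts with 'src_ip', 'signature' and 'ts'
    (a datetime). Yields only alerts that cross the per-source,
    per-signature threshold; everything below it is treated as noise."""
    recent = defaultdict(deque)  # (src_ip, signature) -> recent timestamps
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        times = recent[(alert["src_ip"], alert["signature"])]
        times.append(alert["ts"])
        # Forget timestamps that have fallen out of the window.
        while times and alert["ts"] - times[0] > WINDOW:
            times.popleft()
        if len(times) >= COUNT:
            yield alert
```

Raising COUNT or shrinking WINDOW makes the filter stricter, but it also raises the bar for real events, so tune against recorded incident data rather than gut feeling.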
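
For whitelisting, a sketch in the same style, assuming the same alert format. The networks in ALLOWLIST are made-up examples; in practice the list would come from your asset inventory and be reviewed regularly, because a stale whitelist is a blind spot of its own.

```python
import ipaddress

# Made-up examples of sources we have verified as benign.
ALLOWLIST = [
    ipaddress.ip_network("10.20.0.0/16"),     # engineering workstation VLAN
    ipaddress.ip_network("192.168.50.7/32"),  # known-good historian server
]

def is_allowlisted(src_ip: str) -> bool:
    """True if the alert source falls inside a vetted network."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWLIST)

def filter_alerts(alerts):
    """Drop alerts from allowlisted sources; pass everything else through."""
    return [a for a in alerts if not is_allowlisted(a["src_ip"])]
```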
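
Finally, correlation analysis. One common rule is to escalate a source only when it triggers several different signatures within a short window: independent detections that agree are much stronger evidence than one noisy sensor repeating itself. Again, the fields and thresholds are illustrative assumptions.

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=5)
MIN_DISTINCT = 2  # distinct signatures required before we escalate

def correlate(alerts):
    """Return the source IPs that trigger at least MIN_DISTINCT different
    signatures within WINDOW; alerts use the same dict format as above."""
    by_src = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        by_src[alert["src_ip"]].append(alert)

    escalate = []
    for src, items in by_src.items():
        for i, first in enumerate(items):
            in_window = [a for a in items[i:] if a["ts"] - first["ts"] <= WINDOW]
            if len({a["signature"] for a in in_window}) >= MIN_DISTINCT:
                escalate.append(src)
                break
    return escalate
```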

Obviously, you can go into a lot more detail on each of these points, and you can probably find many more, but the aim here is to get you thinking about your own systems and posture.

The main point is that even though systems and AI contribute and help, the human is not replaced yet, and maybe never will be; you cannot rely only on what the system tells you.

This is your starting point for creating a system that gives you true positives you should act on. But what happened back in September 1983? If it had led to nuclear war, we would have heard about it.

The world did not end in 1983

Despite the system indicating the highest level of reliability for the alert, Petrov chose to trust his instincts over the machines. He made the decision to regard the alerts as false alarms, a decision that went against his instructions and could have had severe consequences.

Petrov’s decision to wait for additional confirmation of the attack, which ultimately never came, was a significant deviation from protocol. This decision, which only came to light years later, may have averted a full-scale nuclear war.

This incident serves as a stark reminder of the razor-thin margins on which the fate of the world hung during the height of the Cold War. It underscores the critical role of human judgment, even in an era increasingly dominated by machines.
So strong was Petrov’s impact that when The Economist published his obituary, they named him “the man who saved the world”. Even though our day-to-day decisions probably do not have that world-saving impact, they can still be important, and since they increasingly rely on system support, it is vital that we actively work with our systems, trimming them to be the best possible, before there is a real event.
