Alerts appear in your news feed.
Upgrade Windows urgently...
and iPhone iOS...
and Google Chrome...
But what's the back story of these alerts?
And why are they always so urgent?
Even with the best testing in the world, there will be faults and security vulnerabilities. That's why we have responsible disclosure: a controlled process that gives vendors time to fix and respond to issues, and rewards security researchers for their effort in finding the vulnerability.
In recent years, platforms like HackerOne have championed responsible disclosure in response to the endless bickering of...
"We contacted the vendor... We were ignored... So here's how to exploit their software..."
"Why didn't you tell us, so we could fix it? It's irresponsible to..."
Thankfully most security problems are now solved via responsible disclosure.
In essence, the outcome of the process is twofold:
- a fix that every customer can apply
- an open-source proof of concept of how to exploit out-of-date software.
But what happens to these proof-of-concept exploits?
They get included in open-source penetration-testing frameworks like Metasploit.
The pipeline of:
find vulnerability > responsible disclosure > public disclosure > open-source/Metasploit exploit
has been criticised for making it too easy for hackers to find weaknesses and exploit unsuspecting businesses.
A claim quickly refuted with counterarguments of:
- A skilled attacker can do this regardless of what we disclose
- We are merely making the vendor and the public aware of the issue
- The vendor has time to implement a fix
- The public has time to update the software
- The Metasploit proof-of-concept exploit is available to test for compliance and uncover previously unknown gaps
These arguments are undoubtedly coherent and reasoned. But the act of publishing the proof-of-concept exploit changes the game.
The attacker no longer needs to be as skilled, because the attack has become commoditised: almost anyone can run it, even without understanding how it works.
Disclosing a proof-of-concept exploit transforms the game: from elite state-sponsored hackers going after state-level targets, to anyone with an interest in cybercrime going after any target.
That said - there's one mitigating factor.
If you aren't running an on-premises Microsoft Exchange server... you don't need to worry about a Microsoft Exchange exploit.
But what if it is something more fundamental than an application? What if it's a protocol - a way for every system on the internet to communicate?
With greater impact comes greater responsibility.
In 2008, security researcher Dan Kaminsky discovered a critical vulnerability in the DNS (Domain Name System) protocol. The flaw made DNS cache poisoning practical at scale, allowing an attacker to redirect internet traffic and potentially launch widespread attacks.
Recognising the severity of the issue, Kaminsky coordinated a global, multi-vendor response. This unprecedented effort included vendors like Microsoft, Cisco, and Sun Microsystems, who worked collectively to release patches on the same day - a rare event known as a "coordinated disclosure."
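To get a feel for why the flaw was so dangerous, and why the coordinated patch (which added UDP source-port randomisation) helped, here's a rough back-of-the-envelope sketch in Python. The numbers are purely illustrative, not a model of a real attack: the Kaminsky technique relied on racing the legitimate DNS answer with forged replies that guess the resolver's secret 16-bit transaction ID.

```python
# Illustrative sketch only: an off-path attacker must forge a DNS reply whose
# secret fields match what the resolver expects, before the real answer arrives.
# The 2008 coordinated patch added UDP source-port randomisation, multiplying
# the space the attacker has to guess.

TXID_SPACE = 2 ** 16   # 16-bit DNS transaction ID
PORT_SPACE = 2 ** 16   # roughly 16 bits of randomised source port (approximate)

def spoof_success_probability(entropy_space: int, forged_replies: int) -> float:
    """Chance that at least one forged reply matches a secret value drawn
    uniformly at random from `entropy_space` possibilities."""
    return 1 - (1 - 1 / entropy_space) ** forged_replies

# Before the patch: guess the transaction ID alone.
pre_patch = spoof_success_probability(TXID_SPACE, forged_replies=100)

# After the patch: guess transaction ID and source port together (~2^32 values).
post_patch = spoof_success_probability(TXID_SPACE * PORT_SPACE, forged_replies=100)

print(f"per-burst success, pre-patch:  {pre_patch:.4%}")   # ~0.15%
print(f"per-burst success, post-patch: {post_patch:.8%}")
```

With only the 16-bit transaction ID in play, an attacker who can trigger queries repeatedly wins the race within hours; with source-port randomisation on top, the same brute-force strategy becomes millions of times slower, which is why that mitigation was what vendors shipped on patch day.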
Responsible disclosure works - if everyone patches their applications and operating systems. Typically 30 days or more are left between a patch being provided to vulnerable customers and the proof-of-concept exploit being disclosed.
In theory, the risk is fully managed, and vulnerabilities can't be exploited. Unfortunately - in practice - all too often, security patches are delayed or not applied, leaving applications vulnerable to known exploits.
This leads to the frustrating position that the majority of ransomware attacks in 2022 exploited old bugs with known fixes: breaches that were entirely avoidable and that incur unnecessary recovery costs.