
Microsoft Exchange hack - when the patch came, it was already too late

In a high-profile wave of cyberattacks in March 2021, tens of thousands of email servers worldwide fell victim to vulnerabilities in Microsoft Exchange Server. Using so-called zero-day exploits, a previously unknown Chinese espionage group called "Hafnium" targeted these vulnerabilities. Microsoft released patches to fix the vulnerabilities shortly afterwards, and national authorities warned thousands of companies to quickly close the gap on their own Exchange servers.

In this article, we reconstruct the timeline: What happened? Could the attacks have been prevented?

Even those who patched were not secure

We investigated the attack using a DriveLock test system. The chronological sequence we observed will almost certainly have played out the same way in many companies:

  • On the night of 03/03/2021 - one day before Microsoft released the patch - the test server was attacked via a zero-day exploit. A "Webshell" was left behind in the process. This is a very simple script that locally executes whatever is passed to it as parameters.
  • On the same day, the patch was published by Microsoft.
  • By 05/03, the attacked system was patched. One might assume it was now safe from future attacks. However, at that time it had not yet been published how an attack could actually be recognised, and the patch does not remove a Webshell that was previously injected.
  • A little later, on 07/03, the Webshell was exploited to take an inventory of the system. Although many publications describe which dangerous files companies need to look for, the contents of those files were not explained. It is clear now: files indicative of an attack would have contained system inventory information.
    This system inventory is stored on the attacked system. Although the system was already patched at that point, this did not prevent unauthorised access, because the Webshell was still there.
  • Also on 07/03, a second backdoor - in addition to the Webshell - was left on the server, true to the motto "more is better." It is not clear whether the same hacker group was responsible for this.
  • It was not until 09/03 that the two legacies, the Webshell and the second backdoor, were included in updated virus scanner signatures and could thus be found and removed.

The patch did not protect against further access, as it did not remove the backdoors that had already been installed.

Removing them would have been easy, since the Webshell is only a single file located in a directory of the Exchange server.
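To make this concrete, here is a minimal sketch of how such a stray file could be found by comparing file hashes against a list of files that legitimately belong there. The directory path and the hash value are placeholders we chose for illustration, not values from the actual attack:

```python
import hashlib
from pathlib import Path

# Hypothetical Exchange web directory where webshells were dropped;
# adjust for the actual installation.
EXCHANGE_DIR = Path(r"C:\inetpub\wwwroot\aspnet_client")

# SHA-256 hashes of the files that legitimately belong there
# (placeholder; fill in the hashes of the vendor-provided files).
KNOWN_GOOD = {
    "0" * 64,
}

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

for script in EXCHANGE_DIR.rglob("*.aspx"):
    digest = sha256(script)
    if digest not in KNOWN_GOOD:
        print(f"Suspicious file: {script} (sha256={digest})")
```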

No accusations towards the server operators and server administrators

In the daily press, one could read that the administrators of the affected systems only needed to apply the available patches in time to prevent the attack. That may be true for other attacks, but not in this situation:
The vulnerability was exploited before a patch was available from the vendor. Even those who applied the patch immediately after its release had already fallen victim to the attack. Moreover, since all IP addresses on the internet were systematically scanned, every Exchange server was potentially attacked.

But even hackers only have a “limited toolset” at their disposal.

If you look at the course of action from the hackers' point of view, the whole attack was cleverly organised in terms of timing: the criminals scanned the entire internet once (about 4.3 billion IP addresses) and left the aforementioned Webshell on each vulnerable server. At first, this sounds like a lot of effort, but if you rent, say, 255 virtual machines from one of the big cloud providers, that leaves only about 17 million IP addresses to scan per machine. If you want to "work through" the entire internet in three days, each machine has to manage around 66 IP addresses per second. That is easy to do even with the cheapest virtual machines, costing only around 400 €.

 

IP addresses across the globe: 4,335,000,000
Number of virtual machines: 255
IP addresses to be scanned per machine: 17,000,000
IP addresses to be scanned per second (in 3 days): 66
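The arithmetic is easy to verify; a few lines of Python (ours, for illustration) reproduce the figures above:

```python
SECONDS_PER_DAY = 24 * 60 * 60

total_ips = 4_335_000_000  # roughly the whole IPv4 address space
machines = 255             # rented virtual machines
days = 3                   # target duration for the full scan

ips_per_machine = total_ips / machines                       # 17,000,000
ips_per_second = ips_per_machine / (days * SECONDS_PER_DAY)  # ~65.6

print(f"{ips_per_machine:,.0f} IP addresses per machine")
print(f"{ips_per_second:.1f} IP addresses per second per machine")
```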

 

In any case, the scan had to be completed before the wave of attacks attracted proper attention, because then panic breaks out among administrators and the "holes" (backdoors) are quickly plugged. That is why the hackers drilled another "hole" in addition to the actual zero-day exploit and the Webshell. So there were three "holes" in total:

1. Zero-Day-Exploit
2. Webshell (first backdoor)
3. Second backdoor

While distributing the Webshell, the hacker group built a database of the IP addresses under which Exchange servers could be reached.
This made the next step much easier. Not every system holds interesting data: a server of the European Parliament, for example, would be relevant, while most others would not.
To distinguish interesting from irrelevant systems, even more information was needed. Because there were far too many servers, this had to be collected automatically: the criminals generated a system inventory using Windows on-board tools (via the left-behind Webshell) and collected this information. As the attack wave had been discovered by now, the attackers had to finish the inventory immediately after the scan was completed, while half the administrator world was distributing patches in a panic. They then had to identify the interesting systems from the collected inventory and attack them "properly" before the second hole (the Webshell) was discovered. The actual attacks could have involved, for example, fetching memory dumps of the authentication database (LSASS), which contains passwords.
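As a rough idea of what such an automated inventory could look like, the sketch below runs standard Windows on-board commands and collects their output. The article does not name the exact tools the attackers used, so the command choice here is purely an illustrative assumption:

```python
import subprocess

# Commands chosen for illustration; the exact on-board tools used
# in the attack are not documented here.
INVENTORY_COMMANDS = [
    ["systeminfo"],        # OS version, installed hotfixes, domain
    ["whoami", "/all"],    # current account, groups, privileges
    ["ipconfig", "/all"],  # network configuration
]

def collect_inventory() -> str:
    chunks = []
    for cmd in INVENTORY_COMMANDS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        chunks.append(f"### {' '.join(cmd)}\n{result.stdout}")
    return "\n".join(chunks)

if __name__ == "__main__":
    print(collect_inventory())
```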

And now to the topic of the "limited toolset":
The timing of the attack was well executed, but the Webshell itself could have been handled much more skilfully. Because it is always exactly the same file, it is easy for a virus scanner to detect (a technique known as blacklisting). Had the Webshells been "personalized," i.e. made unique for each target, detecting them automatically through blacklisting would have been very difficult.
The attackers could also have used the Webshell to later delete the data they automatically generated and downloaded, which would have made it very difficult to even detect that an attack had occurred. In short: the attack could have been carried out far better, with virtually no detection.
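The blacklisting point is easy to demonstrate: changing a single byte in a file produces a completely different hash signature, so a scanner that only knows the original signature sees nothing. A tiny sketch:

```python
import hashlib

# One changed byte is enough to evade a purely hash-based blacklist:
# the "personalized" copy produces a completely different signature.
original = b"contents of the dropped script"
personalized = original + b" "  # one extra byte per target

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(personalized).hexdigest())
# The two digests share nothing, so a signature for the first file
# will never match the second one.
```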

 

Here, application whitelisting would once again have been the best solution.

As with many attacks in the past: if the servers had been running a whitelisting product (such as DriveLock Application Control), the Exchange server would only have executed scripts provided by Microsoft. The vulnerability in the Exchange server would still have existed, but the Webshell file could only have been deposited, not executed, since it was not on the whitelist. Any attempted execution would have been blocked, preventing any kind of access.

If you look further at what the attackers did with the interesting systems afterwards - for example, creating memory dumps of the operating system's user database (with a Microsoft tool) or building an archive of data (with the help of a compression program) - that too could have been very easily prevented with Application Control. Because these tools were first copied to the computers and then executed, they would not have been on the whitelist.
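Conceptually, the enforcement is simple. The following sketch illustrates the general idea of hash-based whitelisting; it is not DriveLock's actual implementation:

```python
import hashlib
from pathlib import Path

# Placeholder whitelist: in practice this would hold the hashes of
# all vendor-provided executables and scripts on the server.
WHITELIST = {
    "0" * 64,
}

def may_execute(path: Path) -> bool:
    """Allow execution only for files whose hash is whitelisted."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in WHITELIST

# A dropped Webshell or a freshly copied dump tool is not on the
# list, so its execution is denied even though the file sits on disk.
```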

 

Looking into the future

The next security gap will surely emerge. Let us assume that next time the attackers apply a little more effort: they personalize the Webshell so that antivirus software cannot detect it, buying themselves more time. They then delete all evidence (files) after downloading the data they were after. In addition, they delete the log files, which are the only remaining indicators of an attack. As a result, it would no longer be possible to find out what exactly was downloaded and, above all, what additional backdoors might have been created. And perhaps those backdoors will not be exploited for another year. The affected company would then have to reinstall its entire IT infrastructure after the attack to be sure it had not missed any security gaps, leading to excessive costs and effort.

Application Control with Application Whitelisting could have prevented all of this. Still, many people avoid Application Whitelisting because they assume it involves a high administrative effort. Predictive Whitelisting addresses exactly this: the whitelist continuously learns on its own, which reduces the manual maintenance effort.

The fundamental question after this attack is: is the effort for Application Whitelisting really higher than the effort needed to find out what happened in connection with a hack disaster like this one?

 
