EnemyBot Malware Targets Web Servers, CMS Tools and Android OS

 

Description

A rapidly evolving IoT malware dubbed “EnemyBot” is targeting content management systems (CMS), web servers and Android devices. The threat actor group “Keksec” is believed to be behind the distribution of the malware, according to researchers.

“Services such as VMware Workspace ONE, Adobe ColdFusion, WordPress, PHP Scriptcase and more are being targeted, as well as IoT and Android devices,” AT&T Alien Labs reported in a recent post. “The malware is rapidly adopting one-day vulnerabilities as part of its exploitation capabilities,” the researchers added.

How EnemyBot Works

The Alien Labs research team identified four main sections of the malware.

The first section is a Python script, ‘cc7.py’, used to download all dependencies and compile the malware for different OS architectures (x86, ARM, macOS, OpenBSD, PowerPC, MIPS). After compilation, a shell script “update.sh” is created and used to spread the malware to vulnerable targets.
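Though the full contents of cc7.py were not published, a compile-per-architecture loop of the kind described can be sketched in Python. The toolchain names, file names, and compiler flags below are illustrative assumptions, not details recovered from the actual script:

```python
import subprocess

# Hypothetical cross-compiler toolchains for illustration; the real cc7.py
# targets x86, ARM, macOS, OpenBSD, PowerPC and MIPS, and its exact
# compiler invocations were not published.
TOOLCHAINS = {
    "x86":  "gcc",
    "arm":  "arm-linux-gnueabi-gcc",
    "mips": "mips-linux-gnu-gcc",
    "ppc":  "powerpc-linux-gnu-gcc",
}

def build_compile_cmd(arch, source="bot.c"):
    """Return the compiler command line for one target architecture."""
    # -static keeps the binary self-contained, with no loader
    # dependencies on the target host.
    return [TOOLCHAINS[arch], "-static", "-O2", "-o", "bot." + arch, source]

def compile_all(source="bot.c"):
    """Compile the same source once per supported architecture."""
    for arch in TOOLCHAINS:
        subprocess.run(build_compile_cmd(arch, source), check=True)
```

Splitting command construction from execution makes the per-architecture logic easy to inspect without invoking any compiler.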

The second section is the main botnet source code, which contains the malware’s core functionality and incorporates source code from various other botnets that can be combined to perform attacks.

The third module is the obfuscation segment, “hide.c”, which is compiled and executed manually to encode/decode the malware’s strings. A simple swap table is used to hide strings, with “each char … replaced with a corresponding char in the table,” according to researchers.
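A self-inverse swap table of the kind described takes only a few lines of Python. The pair table below is invented for illustration and is not the table recovered from hide.c:

```python
# Illustrative swap table (not the one from hide.c): every character maps
# to a partner, and the mapping is its own inverse, so a single function
# both encodes and decodes.
PAIRS = [("a", "m"), ("b", "n"), ("c", "o"), ("e", "q"), ("s", "z")]
TABLE = str.maketrans(
    "".join(p + q for p, q in PAIRS),
    "".join(q + p for p, q in PAIRS),
)

def swap(text):
    """Replace each char with its partner from the swap table."""
    return text.translate(TABLE)
```

Because the table is symmetric, applying `swap` twice returns the original string, which is why one routine suffices for both encoding and decoding.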

The last segment is a command-and-control (C2) component used to receive attack commands and additional payloads from the attackers.

Further analysis by AT&T researchers revealed a new scanner function that hunts for vulnerable IP addresses, as well as an “adb_infect” function used to attack Android devices.

ADB, or Android Debug Bridge, is a command-line tool that allows communication with an Android device.

“In case an Android device is connected through USB, or an Android emulator is running on the machine, EnemyBot will try to infect it by executing [a] shell command,” the researchers said.
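The ADB interaction the researchers describe amounts to enumerating attached devices and running a shell command on each. A harmless sketch of that pattern, running an arbitrary benign command rather than a payload:

```python
import subprocess

def parse_devices(adb_output):
    """Extract serial numbers of ready devices from 'adb devices' output."""
    lines = adb_output.strip().splitlines()[1:]  # skip the header line
    return [line.split("\t")[0] for line in lines
            if line.strip().endswith("device")]

def run_on_each_device(command):
    """Run one shell command on every device the local adb server sees."""
    out = subprocess.run(["adb", "devices"],
                         capture_output=True, text=True).stdout
    results = {}
    for serial in parse_devices(out):
        results[serial] = subprocess.run(
            ["adb", "-s", serial, "shell", command],
            capture_output=True, text=True).stdout
    return results
```

`parse_devices` is split out so the output parsing can be exercised without a device attached; devices in `unauthorized` or `offline` state are skipped.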

“Keksec’s EnemyBot appears to be just starting to spread, however due to the authors’ rapid updates, this botnet has the potential to become a major threat for IoT devices and web servers,” the researchers added.

The Linux-based EnemyBot botnet was first discovered by Securonix in March 2022; Fortinet later published an in-depth analysis.

Vulnerabilities Currently Exploited by EnemyBot

The AT&T researchers released a list of vulnerabilities currently exploited by EnemyBot, some of which have not yet been assigned CVEs.

The list includes the Log4Shell vulnerabilities (CVE-2021-44228, CVE-2021-45046) and a flaw in F5 BIG-IP devices (CVE-2022-1388), among others. Some of the exploited vulnerabilities, such as those in PHP Scriptcase and Adobe ColdFusion 11, have not yet been assigned CVEs.

“This indicates that the Keksec group is well resourced and that the group has developed the malware to take advantage of vulnerabilities before they are patched, thus increasing the speed and scale at which it can spread,” the researcher explained.

Recommended Actions

The Alien Labs researchers suggest several methods of protection from exploitation. Users are advised to use a properly configured firewall and to reduce the exposure of Linux servers and IoT devices to the internet.

Other recommended actions are to monitor network traffic, scan outbound ports, and look for suspicious bandwidth usage. Software should be updated automatically and patched with the latest security updates.

EnemyBot Linux Botnet Now Exploits Web Server, Android and CMS Vulnerabilities

 

Description

A nascent Linux-based botnet named Enemybot has expanded its capabilities to include recently disclosed security vulnerabilities in its arsenal to target web servers, Android devices, and content management systems (CMS).

“The malware is rapidly adopting one-day vulnerabilities as part of its exploitation capabilities,” AT&T Alien Labs said in a technical write-up published last week. “Services such as VMware Workspace ONE, Adobe ColdFusion, WordPress, PHP Scriptcase and more are being targeted as well as IoT and Android devices.”

First disclosed by Securonix in March and later by Fortinet, Enemybot has been linked to a threat actor tracked as Keksec (aka Kek Security, Necro, and FreakOut), with early attacks targeting routers from Seowon Intech, D-Link, and iRZ.

Enemybot, which is capable of carrying out DDoS attacks, draws its origins from several other botnets like Mirai, Qbot, Zbot, Gafgyt, and LolFMe. An analysis of the latest variant reveals that it’s made up of four different components -

  • A Python module to download dependencies and compile the malware for different OS architectures
  • The core botnet section
  • An obfuscation segment designed to encode and decode the malware’s strings, and
  • A command-and-control functionality to receive attack commands and fetch additional payloads

“In case an Android device is connected through USB, or Android emulator running on the machine, EnemyBot will try to infect it by executing [a] shell command,” the researchers said, pointing to a new “adb_infect” function. ADB refers to Android Debug Bridge, a command-line utility used to communicate with an Android device.

Also incorporated is a new scanner function that’s engineered to search random IP addresses associated with public-facing assets for potential vulnerabilities, while also taking into account new bugs within days of them being publicly disclosed.

Besides the Log4Shell vulnerabilities that came to light in December 2021, this includes recently patched flaws in Razer Sila routers (no CVE), VMware Workspace ONE Access (CVE-2022-22954), and F5 BIG-IP (CVE-2022-1388) as well as weaknesses in WordPress plugins like Video Synchro PDF.

Other weaponized security shortcomings are below -

  • CVE-2022-22947 (CVSS score: 10.0) - A code injection vulnerability in Spring Cloud Gateway
  • CVE-2021-4039 (CVSS score: 9.8) - A command injection vulnerability in the web interface of the Zyxel NWA-1100-NH firmware
  • CVE-2022-25075 (CVSS score: 9.8) - A command injection vulnerability in TOTOLink A3000RU wireless router
  • CVE-2021-36356 (CVSS score: 9.8) - A remote code execution vulnerability in KRAMER VIAware
  • CVE-2021-35064 (CVSS score: 9.8) - A privilege escalation and command execution vulnerability in Kramer VIAWare
  • CVE-2020-7961 (CVSS score: 9.8) - A remote code execution vulnerability in Liferay Portal

What’s more, the botnet’s source code has been shared on GitHub, making it widely available to other threat actors. “I assume no responsibility for any damages caused by this program,” the project’s README file reads. “This is posted under Apache license and is also considered art.”

“Keksec’s Enemybot appears to be just starting to spread, however due to the authors’ rapid updates, this botnet has the potential to become a major threat for IoT devices and web servers,” the researchers said.

“This indicates that the Keksec group is well resourced and that the group has developed the malware to take advantage of vulnerabilities before they are patched, thus increasing the speed and scale at which it can spread.”

How we use Dependabot to secure GitHub

 

Description

At GitHub, we draw on our own experience using GitHub to build GitHub. As an example of this, we use a number of GitHub Advanced Security features internally. This post covers how we rolled out Dependabot internally to keep GitHub’s dependencies up to date. The rollout was managed by our Product Security Engineering Team who work with engineers throughout the entire development lifecycle to ensure that they’re confident in shipping new features and products. We’ll cover some background about how GitHub’s internal security teams think about security tooling in general, how we approached the Dependabot rollout, and how we integrated Dependabot into our existing engineering and security processes.

Keeping our dependencies up to date is one of the easiest ways to keep GitHub’s systems secure. The issue of supply chain security has become increasingly obvious in recent years, from the malicious flatmap-stream package to the more recent Log4Shell vulnerabilities. Dependabot will alert developers when a repository is using a software dependency with a known vulnerability. By rolling out Dependabot internally to all of our repositories, we can measure, and significantly reduce, our usage of software dependencies with known vulnerabilities.

How we approach new tools and processes

Within Product Security Engineering, we spend a lot of time thinking about how new security tools and processes may impact the day-to-day work of our engineers. We use a number of guiding principles when evaluating tools and designing a rollout plan. For example, Does the security benefit of this new process outweigh the impact on engineering teams? How do we roll this out incrementally and gather feedback? What are our expectations for engineers, and how do we clearly communicate these expectations?

For Dependabot in particular, some of these questions were easy to answer. Dependabot is a native feature of GitHub, meaning that it integrates with our engineers’ current workflows on GitHub.com. By better tracking the security of our software supply chain, we will keep GitHub and our users secure, which outweighs any potential impact on engineering teams.

We used a three-stage process to roll out Dependabot at GitHub: measuring the current state of Dependabot alerts; enabling Dependabot incrementally across the organization in a staged rollout; and finally, focusing on remediating repositories with open Dependabot alerts.

Measurement

Our first aim was to accurately measure the current state of dependencies internally. We were not yet concerned with the state of any particular repository, but wanted to understand the general risk across the company. We did this by building internal tools to gather statistics about Dependabot alerts across the whole organization via the public GraphQL API. Getting this tooling in place early allowed us to gather metrics continuously and understand the general trends within GitHub before, during, and after the rollout.
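Tooling of that kind can be sketched against the public GraphQL API. A minimal version, assuming a personal access token and eliding pagination and error handling (this is a sketch, not GitHub's internal tool):

```python
import json
import urllib.request

# GraphQL query over an organization's repositories; vulnerabilityAlerts
# is the public field backing Dependabot alerts. Pagination and alert-state
# filtering are elided for brevity.
QUERY = """
query($org: String!) {
  organization(login: $org) {
    repositories(first: 100) {
      nodes {
        name
        vulnerabilityAlerts(first: 1) { totalCount }
      }
    }
  }
}
"""

def fetch_alert_data(org, token):
    """POST the query to the GitHub GraphQL API, return the parsed JSON."""
    req = urllib.request.Request(
        "https://api.github.com/graphql",
        data=json.dumps({"query": QUERY, "variables": {"org": org}}).encode(),
        headers={"Authorization": "bearer " + token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize(data):
    """Map repository name -> Dependabot alert count."""
    repos = data["data"]["organization"]["repositories"]["nodes"]
    return {r["name"]: r["vulnerabilityAlerts"]["totalCount"] for r in repos}
```

Keeping `summarize` separate from the network call makes it easy to run the aggregation against recorded responses when tracking trends over time.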

Dependabot, like other GitHub Advanced Security features, can be enabled for all repositories within an organization from the organization’s administration page. However, GitHub has several thousand repositories internally, and we were aware that enabling Dependabot organization-wide could have a large impact on teams. We mitigated this impact in two ways: a staged rollout and frequent company-wide communications.

Rollout

A staged rollout allowed us to gather feedback from an initial set of repository owners before proceeding with the organization-wide rollout. We use this approach internally for security tools within GitHub, as we believe that unshipping a new tool or process can cause even more confusion across the company. For Dependabot, we decided on enabling the feature initially on a subset of our most active repositories to ensure that we could gather useful feedback. We then expanded it to a larger subset, before finally enabling the feature organization wide.

As a heavily-distributed company working asynchronously across multiple timezones, we used a mixture of GitHub Issues and GitHub Discussions to share new tools and processes with engineers. We aimed to answer, clearly and succinctly, the most important questions in our communications: What are we doing? Why are we doing this? When are we doing this? Lastly, what do I need to do? The last question was key. We made it clear that we were rolling out Dependabot organization-wide to understand our current risk and that, while we encourage repository owners to upgrade dependencies, we were not expecting every Dependabot alert to be fixed right away.

We also used these discussions as a touchpoint for other related tasks, such as encouraging teams to archive repositories if they are no longer in use. In organizations, there are always early adopters of new tools and features. Although we clearly laid out our incremental rollout plan, we also encouraged teams to enable Dependabot right away if it made sense for their repositories.

All in all, we made the initial ship to 200 repositories, followed up in 30 days with another 1,000 repositories, and enabled it organization-wide at the 45-day point from our initial ship. After enabling it organization-wide, we also used the "Automatically enabled for new repositories" feature to ensure that new repositories are following best practices by default.

Remediation

Once we had Dependabot enabled for all GitHub repositories, our tooling showed that the overall trend of Dependabot alerts across the company was broadly flat. We then switched our focus from measuring the current state to working with repository owners to upgrade our dependencies.

We managed this process through GitHub’s internal Service Catalog tool. This is the single source of truth for services running inside GitHub and defines where a service is deployed, who owns the service, and how to contact them. The service concept is an abstraction over repositories. The Service Catalog only tracks repositories that are currently deployed inside GitHub, which is a small subset of repositories. By leveraging the Service Catalog, we could ensure that we focused our remediation efforts on repositories that are running in production, where a vulnerable dependency could present a risk to GitHub and our users.

Each service can have domain-specific metrics associated with it, and we built tooling to continuously pull Dependabot data via the GitHub REST API and upload it to the Service Catalog.

The Service Catalog allows us to assign service level objectives (SLOs) to individual metrics. We acknowledge that not all Dependabot alerts can be actioned immediately. Instead, we assign a realistic grace period for service owners to remediate Dependabot alerts before marking a metric as failing.
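The grace-period check reduces to a small pure function. The 30-day window below is an invented example, not GitHub's actual SLO, which was not published and would typically vary with alert severity:

```python
from datetime import datetime, timedelta

# Illustrative grace period; real SLO windows vary by alert severity.
GRACE_PERIOD = timedelta(days=30)

def slo_status(alert_created, now):
    """An alert inside the grace period is 'ok'; after that, 'failing'."""
    return "ok" if now - alert_created <= GRACE_PERIOD else "failing"
```

A pure function like this is trivial to evaluate continuously over every open alert each time metrics are uploaded to the Service Catalog.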

At this point, the Service Catalog metrics showed that around one-third of services had Dependabot alerts that needed remediation. We then needed a process for prioritizing and managing the work of upgrading dependencies across the company. We decided to integrate with GitHub’s internal Engineering Fundamentals program. This takes a company-wide view of the various metrics in the Service Catalog that we consider the baseline for well-maintained, available, and secure services.

The program is all about prioritization: given the current set of services not meeting baseline expectations, what is the priority for service owners right now? By integrating Dependabot alerts into the program, it allows us to clearly communicate the priority of dependency upgrades against other foundational work. This also drove conversations around deprecation. Like all companies, we had a number of internal services that were currently, or soon to be, deprecated. By making these metrics clearly visible, it allowed us to quantify the risk of keeping these deprecated services running in production and led to service owners reprioritizing the work to fully shut down those services.

The cornerstone of GitHub’s Engineering Fundamentals program is a monthly synchronous meeting with engineering leadership and service owners. Every month, we define realistic goals for service owners to achieve in the next four weeks, then review the progress against those goals. This allowed us to break down the nebulous task of fixing all open Dependabot alerts into a clear set of tasks over a series of months. After integrating the Dependabot metrics with the program, we then made it a focus for engineering teams for a whole quarter of the year, which allowed us to build momentum on upgrading dependencies for services.

Outcomes

Our focus on Dependabot alerts was a success. By leveraging the Engineering Fundamentals program, we increased the percentage of services with zero Dependabot alerts from 68% up to 81%. This represents roughly 50 core GitHub services remediated in just three months, including several services performing large Rails upgrades to ensure they are using the most recent version. As the Engineering Fundamentals program runs continuously, this was not a one-off piece of work. Rather, the program allows us to follow the Dependabot alert metrics over time and intervene if we see them trending in the wrong direction.

After trialing this approach with Dependabot, we have since incorporated other GitHub Advanced Security tools and features, such as CodeQL, into our Engineering Fundamentals program. By integrating more sources of security alerts, GitHub now has a more complete picture of the state of services across the company, which allows us to clearly prioritize work.

As an internal security team, GitHub’s Product Security Engineering Team faces many of the same challenges as our GitHub Enterprise users, and we use our experience to inform the design of GitHub features. Our emphasis on organization-wide metrics was a key part of measuring progress on this piece of work. That feedback has informed how we designed the Security Overview feature, which allows GitHub Enterprise users to easily see the current state of GitHub Advanced Security alerts across their organization.

Closing the Gap Between Application Security and Observability

 

Description

Infosec Insiders columnist Daniel Kaar, global director of application security engineering at Dynatrace.

When all is said and done, application security pros may come to look upon the Log4Shell vulnerability as a gift.

Potentially one of the most devastating software flaws ever found, Log4Shell has justified scrutiny of modern security methods. It also turns out too many people continue to think about security strictly in terms of fortifying network perimeters.

But in the still burgeoning age of cloud computing, Log4Shell also exposed the significant gap that exists between application security and observability. It’s still not widely known that observability makes systems safer.

Nearly six months after the emergence of Log4Shell, the large number of companies still suffering its effects is proof. It comes down to this: insufficient vulnerability management and a lack of visibility have hobbled efforts to identify and patch third-party software and development environments.

As a result, millions of apps remain at risk. Analysts predict that Log4Shell fallout will linger for years.

Protection means securing complex, distributed and high-velocity cloud architectures. Achieving this requires companies to adopt a modern development stack, one that arms security managers with greater observability and superior vulnerability management.

Traditional Application Security Tools Leave Too Many Questions

Analysts and journalists have described Log4Shell — the software vulnerability in Apache Log4j 2 discovered in November 2021 — as potentially one of the most devastating vulnerabilities ever found. Some security experts said the software flaw “bordered on the apocalyptic.”

Lest we forget: The security industry isn’t in trouble because of any single vulnerability. That became clear in March, with the emergence of Spring4Shell, a critical vulnerability targeting Java’s popular Spring open-source framework.

Companies struggle to identify vulnerabilities because traditional detection methods are too plodding, inefficient, and leave too many unanswered questions. In the past, security teams performed a static analysis known as software composition analysis (SCA) on code libraries to determine whether a vulnerability had affected their systems.

An SCA relies on scanning tools and manual procedures. Though often effective, these methods are designed to identify vulnerabilities early in the development lifecycle, not to uncover vulnerabilities in code already in production.

In addition, SCA tools are known to produce numerous false positives, and they don’t provide vital details, such as the potential impact of vulnerability occurrences or whether the affected repository is in production or in a pre-production environment.

They also don’t provide much insight into which areas are most at risk or should be prioritized.

Application Security-enabled Observability in Hours, Not Months

The good news is that when Log4Shell struck, some security managers were prepared. Some, including Jeroen Veldhorst, chief technology officer at Avisi, a Netherlands-based software development and cloud service company, had adopted a modern cloud observability platform.

According to Veldhorst, the application security and observability solution deployed by Avisi automatically identified and provided an overview of Log4Shell-vulnerable systems in Avisi’s production environment. The tool also performed another important automated task: providing the Avisi team with a list of systems to remediate first.

In the past, following the discovery of a new vulnerability, Veldhorst’s team would spend precious time patching low-priority occurrences. They were essentially guessing. Occasionally, the affected library his team labored on wasn’t even in production.

“Since [the tool] scans our platform continuously, it could tell us if there was a vulnerability [in production],” Veldhorst said.

Avisi’s observability and application security tool enabled the security team to accelerate the response to Log4Shell. Instead of spending days, weeks, or even months trying to resolve the issue via traditional methods, Avisi managed to resolve Log4Shell instances on all its systems within hours.

Combining observability and application security capabilities enables companies to spend less time resolving the last attack and more time preparing to thwart the next one.

Companies wishing to obtain an effective and mature observability platform must ensure that any upgrade comes with three integral components:

Vulnerability Detection and Mitigation

  • Application security platforms should automatically provide a prioritized list of potentially affected systems and the degree of exposure, and give teams the ability to perform direct remediation. Also, an application security remediation tracking screen for each vulnerability helps security teams spot and highlight whether each affected process still has a vulnerability loaded. Once each instance is resolved, observability-enabled application security tools automatically close the vulnerability report and then reopen it if a new instance of the problem is detected.

Incident Detection and Response

  • Application security and observability capabilities can be used to set up Log4Shell-specific attack monitoring and incident detection. This quickly identifies Log4Shell log patterns, and with the help of platform log analytics and alerting capabilities, teams can configure alerting mechanisms for attacks on their environments. Metrics and alerting systems also enable visibility into underlying code to quickly set up a dedicated alerting mechanism for any potential successful attacks on this critical vulnerability.
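As one concrete example of such a detection rule, basic Log4Shell probes leave a recognizable `${jndi:...}` lookup string in request logs. A minimal, deliberately simplified matcher (production rule sets must also handle nested-lookup obfuscation such as `${${lower:j}ndi:...}`):

```python
import re

# Matches the basic JNDI lookup used in Log4Shell probes. Attackers also
# obfuscate with nested lookups, which this simple pattern does not cover.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def suspicious_lines(log_lines):
    """Return the log lines containing a basic Log4Shell lookup string."""
    return [line for line in log_lines if JNDI_PATTERN.search(line)]
```

Wired into a log-analytics pipeline, a rule like this is what would drive the alerting mechanisms described above.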

Coordination and Communication

  • The chief information security officer, the security team, engineering teams, and customer support teams can use application security and observability platforms to set up multiple daily status updates until all systems have been patched against a major vulnerability. This enables swift coordination against and mitigation of potential risks to environments and clear communication to customers.

Although only a small number of vendors can supply the entire list, these tools and the added security they provide mean they’re worth finding.

Through the continuous surveillance of an organization’s production environments, the appropriate AppSec tools can enable security teams to detect vulnerabilities such as Log4Shell and Spring4Shell in real-time and implement immediate remediation at scale.

380K Kubernetes API Servers Exposed to Public Internet

 

Description

More than 380,000 Kubernetes API servers allow some kind of access to the public internet, making the popular open-source container-orchestration engine for managing cloud deployments an easy target and broad attack surface for threat actors, researchers have found.

The Shadowserver Foundation discovered the access when it scanned the internet for Kubernetes API servers, of which there are more than 450,000, according to a blog post published this week.

In all, Shadowserver found 454,729 Kubernetes API servers. Of those, 381,645 responded with “200 OK,” researchers said. These “open” API instances thus constitute nearly 84 percent of all instances that Shadowserver scanned.
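The exposure figure follows directly from the scan counts. A quick check, using the numbers from the Shadowserver post (the classification rule is a simplification of how such probe results are typically read):

```python
# Figures from the Shadowserver scan described above.
TOTAL_SCANNED = 454_729
OPEN_RESPONSES = 381_645

def exposure_rate(open_count, total):
    """Fraction of scanned API servers answering an unauthenticated probe."""
    return open_count / total

def classify(status_code):
    """A 200 to an anonymous request means the endpoint is reachable;
    401/403 indicates authentication or authorization is enforced."""
    return "open" if status_code == 200 else "restricted"
```

Evaluating `exposure_rate(OPEN_RESPONSES, TOTAL_SCANNED)` gives roughly 0.84, matching the nearly-84-percent figure in the post.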

Moreover, most of the accessible Kubernetes servers (201,348, or nearly 53 percent) were found in the United States, according to the post.

While this response to the scan does not mean these servers are fully open or vulnerable to attacks, it does create a scenario in which the servers have an “unnecessarily exposed attack surface,” according to the post.

“This level of access was likely not intended,” researchers observed. The exposure also allows for information leakage on version and builds, they added.

Cloud Under Attack

The findings are troubling given that attackers already increasingly have been targeting Kubernetes cloud clusters as well as using them to launch other attacks against cloud services. Indeed, the cloud historically has suffered from rampant misconfiguration that continues to plague deployments, with Kubernetes being no exception.

In fact, Erfan Shadabi, cybersecurity expert with data-security firm comforte AG, said in an email to Threatpost that he was not surprised that the Shadowserver scan turned up so many Kubernetes servers exposed to the public internet.

“While [Kubernetes] provides massive benefits to enterprises for agile app delivery, there are a few characteristics that make it an ideal attack target for exploitation,” he said. “For instance, as a result of having many containers, Kubernetes has a large attack surface that could be exploited if not pre-emptively secured.”

Open-Source Security Exposed

The findings also raise the perennial issue of how to build security into open-source systems that become ubiquitous as part of modern internet and cloud-based infrastructure, making an attack on them an attack on the myriad systems to which they are connected.

This issue was highlighted all-too-unfortunately in the case of the Log4Shell vulnerability in the ubiquitous Java logging library Apache Log4j that was discovered last December.

The flaw, which is easily exploitable and can allow unauthenticated remote code execution (RCE) and complete server takeover, continues to be targeted by attackers. In fact, a recent report found that millions of Java applications are still vulnerable, despite a patch being available for Log4Shell.

A particular Achilles heel of Kubernetes is that the data-security capabilities built into the platform are only a “bare minimum”: protecting data at rest and data in motion, Shadabi said. In a cloud environment, this is a dangerous prospect.

“There’s no persistent protection of data itself, for example using industry accepted techniques like field-level tokenization,” he observed. “So if an ecosystem is compromised, it’s only a matter of time before the sensitive data being processed by it succumbs to a more insidious attack.”

Shadabi advises organizations that use containers and Kubernetes in their production environments to take securing Kubernetes as seriously as they do all other aspects of their IT infrastructure.

For its part, Shadowserver recommended that administrators who find a Kubernetes instance in their environment accessible from the internet consider implementing authorization for access, or blocking it at the firewall level, to reduce the exposed attack surface.

Researchers Expose Inner Workings of Billion-Dollar Wizard Spider Cybercrime Gang

 

Description

The inner workings of a cybercriminal group known as the Wizard Spider have been exposed, shedding light on its organizational structure and motivations.

“Most of Wizard Spider’s efforts go into hacking European and U.S. businesses, with a special cracking tool used by some of their attackers to breach high-value targets,” Swiss cybersecurity company PRODAFT said in a new report shared with The Hacker News. “Some of the money they get is put back into the project to develop new tools and talent.”

Wizard Spider, also known as Gold Blackburn, is believed to operate out of Russia and refers to a financially motivated threat actor that’s been linked to the TrickBot botnet, a modular malware that was officially discontinued earlier this year in favor of improved malware such as BazarBackdoor.

That’s not all. The TrickBot operators have also extensively cooperated with Conti, another Russia-linked cybercrime group notorious for offering ransomware-as-a-service packages to its affiliates.

Gold Ulrick (aka Grim Spider), as the group in charge of the development and distribution of the Conti (previously Ryuk) ransomware is called, has historically leveraged initial access provided by TrickBot to deploy the ransomware against targeted networks.

“Gold Ulrick is comprised of some or all of the same operators as Gold Blackburn, the threat group responsible for the distribution of malware such as TrickBot, BazarLoader, and Beur Loader,” cybersecurity firm Secureworks notes in a profile of the cybercriminal syndicate.

Stating that the group is “capable of monetizing multiple aspects of its operations,” PRODAFT emphasized the adversary’s ability to expand its criminal enterprise, which it said is made possible by the gang’s “extraordinary profitability.”

Typical attack chains involving the group commence with spam campaigns that distribute malware such as Qakbot (aka QBot) and SystemBC, using them as launchpads to drop additional tools, including Cobalt Strike for lateral movement, before executing the locker software.

In addition to leveraging a wealth of utilities for credential theft and reconnaissance, Wizard Spider is known to use an exploitation toolkit that takes advantage of known security vulnerabilities such as Log4Shell to gain an initial foothold into victim networks.

Also put to use is a cracking station that hosts cracked hashes associated with domain credentials, Kerberos tickets, and KeePass files, among others.

What’s more, the group has invested in a custom VoIP setup wherein hired telephone operators cold-call non-responsive victims in a bid to put additional pressure and compel them into paying up after a ransomware attack.

This is not the first time the group has resorted to such a tactic. Last year, Microsoft detailed a BazarLoader campaign dubbed BazaCall that employed phony call centers to lure unsuspecting victims into installing ransomware on their systems.

“The group has huge numbers of compromised devices at its command and employs a highly distributed professional workflow to maintain security and a high operational tempo,” the researchers said.

“It is responsible for an enormous quantity of spam on hundreds of millions of devices, as well as concentrated data breaches and ransomware attacks on high-value targets.”

Iranian Hackers Leveraging BitLocker and DiskCryptor in Ransomware Attacks

 

Description

A ransomware group with an Iranian operational connection has been linked to a string of file-encrypting malware attacks targeting organizations in Israel, the U.S., Europe, and Australia.

Cybersecurity firm Secureworks attributed the intrusions to a threat actor it tracks under the moniker Cobalt Mirage, which it said is linked to an Iranian hacking crew dubbed Cobalt Illusion (aka APT35, Charming Kitten, Newscaster, or Phosphorus).

“Elements of Cobalt Mirage activity have been reported as Phosphorus and TunnelVision,” Secureworks Counter Threat Unit (CTU) said in a report shared with The Hacker News.

The threat actor is said to have conducted two different sets of intrusions, one of which relates to opportunistic ransomware attacks involving the use of legitimate tools like BitLocker and DiskCryptor for financial gain.

The second set of attacks is more targeted, carried out with the primary goal of securing access and gathering intelligence, while also deploying ransomware in select cases.

Initial access is gained by scanning for internet-facing servers vulnerable to highly publicized flaws in Fortinet appliances and Microsoft Exchange servers, dropping web shells, and using those shells as a conduit to move laterally and activate the ransomware.

“The threat actors completed the attack with an unusual tactic of sending a ransom note to a local printer,” the researchers said. “The note includes a contact email address and Telegram account to discuss decryption and recovery.”

However, the exact means by which the full volume encryption feature is triggered remains unknown, Secureworks said, detailing a January 2022 attack against an unnamed U.S. philanthropic organization.

Another intrusion aimed at a U.S. local government network in mid-March 2022 is believed to have leveraged Log4Shell flaws in the target’s VMware Horizon infrastructure to conduct reconnaissance and network scanning operations.

“The January and March incidents typify the different styles of attacks conducted by Cobalt Mirage,” the researchers concluded.

“While the threat actors appear to have had a reasonable level of success gaining initial access to a wide range of targets, their ability to capitalize on that access for financial gain or intelligence collection appears limited.”