How we use Dependabot to secure GitHub

 

Description

At GitHub, we draw on our own experience using GitHub to build GitHub. As an example of this, we use a number of GitHub Advanced Security features internally. This post covers how we rolled out Dependabot internally to keep GitHub’s dependencies up to date. The rollout was managed by our Product Security Engineering Team who work with engineers throughout the entire development lifecycle to ensure that they’re confident in shipping new features and products. We’ll cover some background about how GitHub’s internal security teams think about security tooling in general, how we approached the Dependabot rollout, and how we integrated Dependabot into our existing engineering and security processes.

Keeping our dependencies up to date is one of the easiest ways to keep GitHub’s systems secure. Supply chain security has become an increasingly visible issue in recent years, from the malicious flatmap-stream package to the more recent Log4Shell vulnerabilities. Dependabot alerts developers when a repository is using a software dependency with a known vulnerability. By rolling out Dependabot internally to all of our repositories, we can measure, and significantly reduce, our usage of software dependencies with known vulnerabilities.

How we approach new tools and processes

Within Product Security Engineering, we spend a lot of time thinking about how new security tools and processes may impact the day-to-day work of our engineers. We use a number of guiding principles when evaluating tools and designing a rollout plan. For example: does the security benefit of this new process outweigh the impact on engineering teams? How do we roll it out incrementally and gather feedback? What are our expectations for engineers, and how do we clearly communicate those expectations?

For Dependabot in particular, some of these questions were easy to answer. Dependabot is a native feature of GitHub, meaning it integrates directly with our engineers’ existing workflows on GitHub.com. By better tracking the security of our software supply chain, we keep GitHub and our users secure, which outweighs any potential impact on engineering teams.

We used a three-stage process to roll out Dependabot at GitHub: measuring the current state of Dependabot alerts, enabling Dependabot incrementally across the organization in a staged rollout, and finally focusing on remediating repositories with open Dependabot alerts.

Measurement

Our first aim was to accurately measure the current state of dependencies internally. We were not yet concerned with the state of any particular repository, but wanted to understand the general risk across the company. We did this by building internal tools to gather statistics about Dependabot alerts across the whole organization via the public GraphQL API. Getting this tooling in place early allowed us to gather metrics continuously and understand the general trends within GitHub before, during, and after the rollout.
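As a rough illustration of that measurement tooling (our internal implementation is not public, so the organization name, token handling, and structure below are assumptions), a script along these lines can count Dependabot alerts per repository across an organization via the GraphQL API:

```python
# Hedged sketch: count Dependabot alerts per repository across an
# organization using the public GraphQL API. The org name is a placeholder,
# and pagination of alerts within a single repository is omitted for brevity.
import os
import requests

GITHUB_GRAPHQL = "https://api.github.com/graphql"
TOKEN = os.environ["GITHUB_TOKEN"]  # token with access to the org's repositories

QUERY = """
query($org: String!, $cursor: String) {
  organization(login: $org) {
    repositories(first: 50, after: $cursor) {
      pageInfo { hasNextPage endCursor }
      nodes {
        name
        vulnerabilityAlerts(first: 100) { totalCount }
      }
    }
  }
}
"""

def alert_counts(org: str) -> dict:
    """Return a {repository name: Dependabot alert count} mapping for an org."""
    counts, cursor = {}, None
    while True:
        resp = requests.post(
            GITHUB_GRAPHQL,
            json={"query": QUERY, "variables": {"org": org, "cursor": cursor}},
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        repos = resp.json()["data"]["organization"]["repositories"]
        for repo in repos["nodes"]:
            counts[repo["name"]] = repo["vulnerabilityAlerts"]["totalCount"]
        if not repos["pageInfo"]["hasNextPage"]:
            return counts
        cursor = repos["pageInfo"]["endCursor"]

if __name__ == "__main__":
    totals = alert_counts("my-org")  # hypothetical organization
    print(f"{sum(totals.values())} alerts across {len(totals)} repositories")
```

Running a collector like this on a schedule is one simple way to track the organization-wide trend over time rather than a single point-in-time snapshot.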

Dependabot, like other GitHub Advanced Security features, can be enabled for all repositories within an organization from the organization’s administration page. However, GitHub has several thousand repositories internally, and we were aware that enabling Dependabot organization-wide could have a large impact on teams. We mitigated this impact in two ways: a staged rollout and frequent company-wide communications.

Rollout

A staged rollout allowed us to gather feedback from an initial set of repository owners before proceeding with the organization-wide rollout. We use this approach internally for security tools, as we believe that having to unship a new tool or process causes even more confusion across the company. For Dependabot, we decided to enable the feature initially on a subset of our most active repositories to ensure that we could gather useful feedback. We then expanded it to a larger subset before finally enabling the feature organization-wide.
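To make a staged enablement wave concrete, here is a minimal, hypothetical sketch (not our internal tooling) that turns on Dependabot alerts for a hand-picked list of repositories using the public REST endpoint PUT /repos/{owner}/{repo}/vulnerability-alerts; the organization and repository names are placeholders:

```python
# Hypothetical rollout helper: enable Dependabot (vulnerability) alerts on a
# chosen subset of repositories via the public REST API.
import os
import requests

TOKEN = os.environ["GITHUB_TOKEN"]
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}

def enable_dependabot_alerts(org: str, repos: list) -> None:
    for repo in repos:
        url = f"https://api.github.com/repos/{org}/{repo}/vulnerability-alerts"
        resp = requests.put(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()  # 204 No Content means the feature is enabled
        print(f"enabled Dependabot alerts on {org}/{repo}")

# First wave of a staged rollout: a hand-picked set of active repositories.
enable_dependabot_alerts("my-org", ["service-a", "service-b"])
```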

As a heavily distributed company working asynchronously across multiple time zones, we used a mixture of GitHub Issues and GitHub Discussions to share new tools and processes with engineers. We aimed to answer, clearly and succinctly, the most important questions in our communications: What are we doing? Why are we doing this? When are we doing this? Lastly, what do I need to do? The last question was key. We made it clear that we were rolling out Dependabot organization-wide to understand our current risk and that, while we encouraged repository owners to upgrade dependencies, we were not expecting every Dependabot alert to be fixed right away.

We also used these discussions as a touchpoint for other related tasks, such as encouraging teams to archive repositories if they are no longer in use. In organizations, there are always early adopters of new tools and features. Although we clearly laid out our incremental rollout plan, we also encouraged teams to enable Dependabot right away if it made sense for their repositories.

All in all, we made the initial ship to 200 repositories, followed up in 30 days with another 1,000 repositories, and enabled it organization-wide at the 45-day point from our initial ship. After enabling it organization-wide, we also used the "Automatically enabled for new repositories" feature to ensure that new repositories are following best practices by default.

Remediation

Once we had Dependabot enabled for all GitHub repositories, we could measure the overall trend of Dependabot alerts across the company. Our tooling showed that this trend was broadly flat. We then switched our focus from measuring the current state to working with repository owners to upgrade our dependencies.

We managed this process through GitHub’s internal Service Catalog tool. This is the single source of truth within GitHub for services running inside GitHub, and it defines where a service is deployed, who owns the service, and how to contact them. The service concept is an abstraction over repositories: the Service Catalog only tracks repositories that are currently deployed inside GitHub, which is a small subset of all repositories. By leveraging the Service Catalog, we could ensure that we focused our remediation efforts on repositories running in production, where a vulnerable dependency could present a risk to GitHub and our users.

Each service can have domain-specific metrics associated with it, and we built tooling to continuously pull Dependabot data via the GitHub REST API and upload it to the Service Catalog.
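The sketch below shows the general shape of that glue code, assuming the public REST endpoint for Dependabot alerts; upload_metric is a stand-in for GitHub’s internal Service Catalog API, which is not public, and the organization, repository, and service names are placeholders:

```python
# Illustrative sketch: pull open Dependabot alerts for a service's repository
# and push a single metric to a catalog. `upload_metric` is a placeholder for
# an internal metrics API.
import os
import requests

TOKEN = os.environ["GITHUB_TOKEN"]
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}

def open_alert_count(owner: str, repo: str) -> int:
    """Count open Dependabot alerts for one repository, page by page."""
    count, page = 0, 1
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/dependabot/alerts",
            params={"state": "open", "per_page": 100, "page": page},
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        alerts = resp.json()
        count += len(alerts)
        if len(alerts) < 100:  # last page reached
            return count
        page += 1

def upload_metric(service: str, name: str, value: int) -> None:
    # Placeholder for the internal Service Catalog metrics API.
    print(f"{service}: {name}={value}")

upload_metric("example-service", "open_dependabot_alerts",
              open_alert_count("my-org", "example-service"))
```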

The Service Catalog allows us to assign service level objectives (SLOs) to individual metrics. We acknowledge that not all Dependabot alerts can be actioned immediately. Instead, we assign a realistic grace period for service owners to remediate Dependabot alerts before marking a metric as failing.
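As a rough illustration of the grace-period idea (the window lengths below are invented for the example and are not GitHub’s actual SLOs), an open alert only counts against a service’s metric once it has been open longer than the remediation window for its severity:

```python
# Sketch: an alert violates the SLO only after its severity-specific grace
# period has elapsed. Field names follow the REST Dependabot alert payload.
from datetime import datetime, timedelta, timezone

GRACE = {
    "critical": timedelta(days=7),
    "high": timedelta(days=30),
    "medium": timedelta(days=60),
    "low": timedelta(days=90),
}

def violates_slo(alert: dict, now: datetime = None) -> bool:
    now = now or datetime.now(timezone.utc)
    opened = datetime.fromisoformat(alert["created_at"].replace("Z", "+00:00"))
    severity = alert["security_advisory"]["severity"]
    return now - opened > GRACE.get(severity, timedelta(days=30))
```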

At this point, the Service Catalog metrics showed that around one-third of services had Dependabot alerts that needed remediation. We then needed a process for prioritizing and managing the work of upgrading dependencies across the company, so we decided to integrate with GitHub’s internal Engineering Fundamentals program. This program takes a company-wide view of the various metrics in the Service Catalog that we consider the baseline for well-maintained, available, and secure services.

The program is all about prioritization: given the current set of services not meeting baseline expectations, what is the priority for service owners right now? Integrating Dependabot alerts into the program allows us to clearly communicate the priority of dependency upgrades relative to other foundational work. It also drove conversations around deprecation. Like all companies, we had a number of internal services that were currently, or soon to be, deprecated. Making these metrics clearly visible allowed us to quantify the risk of keeping those deprecated services running in production and led service owners to reprioritize the work to fully shut them down.

The cornerstone of GitHub’s Engineering Fundamentals program is a monthly synchronous meeting with engineering leadership and service owners. Every month, we define realistic goals for service owners to achieve in the next four weeks and then review progress against those goals. This allowed us to break down a nebulous task, fixing all open Dependabot alerts, into a clear set of tasks over a series of months. After integrating the Dependabot metrics with the program, we made it a focus for engineering teams for a whole quarter of the year, which allowed us to build momentum on upgrading dependencies for services.

Outcomes

Our focus on Dependabot alerts was a success. By leveraging the Engineering Fundamentals program, we increased the percentage of services with zero Dependabot alerts from 68% up to 81%. This represents roughly 50 core GitHub services remediated in just three months, including several services performing large Rails upgrades to ensure they are using the most recent version. As the Engineering Fundamentals program runs continuously, this was not a one-off piece of work. Rather, the program allows us to follow the Dependabot alert metrics over time and intervene if we see them trending in the wrong direction.

After trialing this approach with Dependabot, we have since incorporated other GitHub Advanced Security tools and features, such as CodeQL, into our Engineering Fundamentals program. By integrating more sources of security alerts, GitHub now has a more complete picture of the state of services across the company, which allows us to clearly prioritize work.

As an internal security team, GitHub’s Product Security Engineering Team faces many of the same challenges as our GitHub Enterprise users, and we use our experience to inform the design of GitHub features. Our emphasis on organization-wide metrics was a key part of measuring progress on this piece of work. That feedback has informed how we designed the Security Overview feature, which allows GitHub Enterprise users to easily see the current state of GitHub Advanced Security alerts across their organization.


Closing the Gap Between Application Security and Observability

 

Description

Daniel Kaar, global director of application security engineering at Dynatrace

When it’s all said and done, application security pros may come to look upon the Log4Shell vulnerability as a gift.

Potentially one of the most devastating software flaws ever found, Log4Shell has justified scrutiny of modern security methods. It has also shown that too many people still think about security strictly in terms of fortifying network perimeters.

But in the still-burgeoning age of cloud computing, Log4Shell also exposed the significant gap between application security and observability. It is still not widely understood that observability makes systems safer.

Nearly six months after the emergence of Log4Shell, the large number of companies still suffering its effects is proof. It comes down to this: insufficient vulnerability management and a lack of visibility have hobbled efforts to identify and patch third-party software and development environments.

As a result, millions of apps remain at risk. Analysts predict that Log4Shell fallout will linger for years.

Protection means securing complex, distributed and high-velocity cloud architectures. Achieving this requires companies to adopt a modern development stack, one that arms security managers with greater observability and superior vulnerability management.

Traditional Application Security Tools Leave Too Many Questions

Analysts and journalists have described Log4Shell — the software vulnerability in Apache Log4j 2 discovered in November 2021 — as potentially one of the most devastating vulnerabilities ever found. Some security experts said the software flaw “bordered on the apocalyptic.”

Lest we forget: the security industry isn’t in trouble because of any single vulnerability. That became clear in March, with the emergence of Spring4Shell, a critical vulnerability in Java’s popular open-source Spring framework.

Companies struggle to identify vulnerabilities because traditional detection methods are plodding and inefficient, and they leave too many unanswered questions. In the past, security teams performed a static analysis known as software composition analysis (SCA) on code libraries to determine whether a vulnerability had affected their systems.

An SCA relies on scanning tools and manual procedures. Though they’re often effective, these methods are designed to identify vulnerabilities early in the development lifecycle, not to uncover vulnerabilities in code that is already in production.

SCA tools are also known to produce numerous false positives, and they don’t provide vital detail, such as the potential impact of each vulnerability occurrence or whether the affected repository is in production or a pre-production environment.

They also don’t provide much insight into which areas are most at risk or should be prioritized.

Application Security-enabled Observability in Hours, Not Months

The good news is that when Log4Shell struck, some security managers were prepared. Among them was Jeroen Veldhorst, chief technology officer at Avisi, a Netherlands-based software development and cloud service company, who had adopted a modern cloud observability platform.

According to Veldhorst, the application security and observability solution deployed by Avisi automatically identified Log4Shell-vulnerable systems in Avisi’s production environment and provided an overview of them. The tool performed another important automated task: providing the Avisi team with a list of systems to remediate first.

In the past, following the discovery of a new vulnerability, Veldhorst’s team would spend precious time patching low-priority occurrences. They were essentially guessing. Occasionally, the affected library his team labored on wasn’t even in production.

“Since [the tool] scans our platform continuously, it could tell us if there was a vulnerability [in production],” Veldhorst said.

Avisi’s observability and application security tool enabled the security team to accelerate the response to Log4Shell. Instead of spending days, weeks, or even months trying to resolve the issue via traditional methods, Avisi managed to resolve Log4Shell instances on all its systems within hours.

Combining observability and application security capabilities enables companies to spend less time resolving the last attack and more time preparing to thwart the next one.

Companies wishing to obtain an effective and mature observability platform must ensure that any upgrade comes with three integral components:

Vulnerability Detection and Mitigation

  • Application security platforms should automatically provide a prioritized list of potentially affected systems, show the degree of exposure, and give teams the ability to perform direct remediation. An application security remediation tracking screen for each vulnerability also helps security teams spot and highlight whether each affected process still has the vulnerability loaded. Once each instance is resolved, observability-enabled application security tools automatically close the vulnerability report and reopen it if a new instance of the problem is detected.

Incident Detection and Response

  • Application security and observability capabilities can be used to set up Log4Shell-specific attack monitoring and incident detection. This quickly identifies Log4Shell log patterns, and with the help of the platform’s log analytics and alerting capabilities, teams can configure alerts for attacks on their environments. Metrics and alerting also provide visibility into the underlying code, making it possible to quickly set up a dedicated alert for any potentially successful attack on this critical vulnerability.

Coordination and Communication

  • The chief information security officer, the security team, engineering teams, and customer support teams can use application security and observability platforms to set up multiple daily status updates until all systems have been patched against a major vulnerability. This enables swift coordination, mitigation of potential risks to environments, and clear communication with customers.

Although only a small number of vendors can supply all three components, these tools and the added security they provide make them worth finding.

Through the continuous surveillance of an organization’s production environments, the appropriate AppSec tools can enable security teams to detect vulnerabilities such as Log4Shell and Spring4Shell in real-time and implement immediate remediation at scale.

380K Kubernetes API Servers Exposed to Public Internet

 

Description

More than 380,000 Kubernetes API servers allow some kind of access to the public internet, making the popular open-source container-orchestration engine for managing cloud deployments an easy target and broad attack surface for threat actors, researchers have found.

The Shadowserver Foundation discovered the access when it scanned the internet for Kubernetes API servers, of which there are more than 450,000, according to a blog post published this week.

In all, Shadowserver found 454,729 Kubernetes API servers. Of those, 381,645 responded with “200 OK,” researchers said; these “open” API instances thus constitute nearly 84 percent of all instances that Shadowserver scanned.

Moreover, most of the accessible Kubernetes servers (201,348, or nearly 53 percent) were found in the United States, according to the post.

While this response to the scan does not mean these servers are fully open or vulnerable to attacks, it does create a scenario in which the servers have an “unnecessarily exposed attack surface,” according to the post.

“This level of access was likely not intended,” researchers observed. The exposure also allows for leakage of version and build information, they added.
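To illustrate the kind of exposure described here, the hedged sketch below probes a Kubernetes API server’s unauthenticated /version endpoint, which is typically what leaks version and build details when anonymous access is allowed; the hostname is a placeholder, and this should only be run against servers you are authorized to test:

```python
# Hedged illustration: an unauthenticated GET to the API server's /version
# endpoint. A 200 response with version JSON is the kind of "some access"
# exposure the Shadowserver scan measured.
import json
import requests

def probe_apiserver(host: str, port: int = 6443) -> None:
    url = f"https://{host}:{port}/version"
    # verify=False because exposed API servers often present self-signed certs
    resp = requests.get(url, timeout=5, verify=False)
    print(resp.status_code)  # 200 OK indicates the endpoint is reachable
    if resp.ok:
        # Typical fields: gitVersion, buildDate, goVersion, platform
        print(json.dumps(resp.json(), indent=2))

probe_apiserver("k8s.example.com")  # placeholder hostname
```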

Cloud Under Attack

The findings are troubling given that attackers already increasingly have been targeting Kubernetes cloud clusters as well as using them to launch other attacks against cloud services. Indeed, the cloud historically has suffered from rampant misconfiguration that continues to plague deployments, with Kubernetes being no exception.

In fact, Erfan Shadabi, cybersecurity expert with data-security firm comforte AG, said in an email to Threatpost that he was not surprised that the Shadowserver scan turned up so many Kubernetes servers exposed to the public internet.

“While [Kubernetes] provides massive benefits to enterprises for agile app delivery, there are a few characteristics that make it an ideal attack target for exploitation,” he said. “For instance, as a result of having many containers, Kubernetes has a large attack surface that could be exploited if not pre-emptively secured.”

Open-Source Security Exposed

The findings also raise the perennial issue of how to build security into open-source systems that become ubiquitous as part of modern internet and cloud-based infrastructure, making an attack on them an attack on the myriad systems to which they are connected.

This issue was highlighted all-too-unfortunately in the case of the Log4Shell vulnerability in the ubiquitous Java logging library Apache Log4j that was discovered last December.

The flaw, which is easily exploitable and can allow unauthenticated remote code execution (RCE) and complete server takeover, continues to be targeted by attackers. In fact, a recent report found that millions of Java applications remain vulnerable even though a patch is available for Log4Shell.

A particular Achilles heel of Kubernetes is that the data-security capabilities built into the platform are only a “bare minimum,” protecting data at rest and data in motion, Shadabi said. In a cloud environment, this is a dangerous prospect.

“There’s no persistent protection of data itself, for example using industry accepted techniques like field-level tokenization,” he observed. “So if an ecosystem is compromised, it’s only a matter of time before the sensitive data being processed by it succumbs to a more insidious attack.”

Shadabi advised organizations that use containers and Kubernetes in their production environments to take securing Kubernetes as seriously as they take every other aspect of their IT infrastructure.

For its part, Shadowserver recommended that if administrators find that a Kubernetes instance in their environment is accessible from the internet, they should consider implementing authorization for access or blocking access at the firewall level to reduce the exposed attack surface.

Researchers Expose Inner Workings of Billion-Dollar Wizard Spider Cybercrime Gang

 

Description

The inner workings of a cybercriminal group known as the Wizard Spider have been exposed, shedding light on its organizational structure and motivations.

“Most of Wizard Spider’s efforts go into hacking European and U.S. businesses, with a special cracking tool used by some of their attackers to breach high-value targets,” Swiss cybersecurity company PRODAFT said in a new report shared with The Hacker News. “Some of the money they get is put back into the project to develop new tools and talent.”

Wizard Spider, also known as Gold Blackburn, is believed to operate out of Russia. The financially motivated threat actor has been linked to the TrickBot botnet, modular malware that was officially discontinued earlier this year in favor of improved malware such as BazarBackdoor.

That’s not all. The TrickBot operators have also extensively cooperated with Conti, another Russia-linked cybercrime group notorious for offering ransomware-as-a-service packages to its affiliates.

Gold Ulrick (aka Grim Spider), the group in charge of developing and distributing the Conti (previously Ryuk) ransomware, has historically leveraged initial access provided by TrickBot to deploy the ransomware against targeted networks.

“Gold Ulrick is comprised of some or all of the same operators as Gold Blackburn, the threat group responsible for the distribution of malware such as TrickBot, BazarLoader, and Beur Loader,” cybersecurity firm Secureworks notes in a profile of the cybercriminal syndicate.

Stating that the group is “capable of monetizing multiple aspects of its operations,” PRODAFT emphasized the adversary’s ability to expand its criminal enterprise, which it said is made possible by the gang’s “extraordinary profitability.”

Typical attack chains involving the group commence with spam campaigns that distribute malware such as Qakbot (aka QBot) and SystemBC, using them as launchpads to drop additional tools, including Cobalt Strike for lateral movement, before executing the locker software.

In addition to leveraging a wealth of utilities for credential theft and reconnaissance, Wizard Spider is known to use an exploitation toolkit that takes advantage of known security vulnerabilities such as Log4Shell to gain an initial foothold into victim networks.

Also put to use is a cracking station that hosts cracked hashes associated with domain credentials, Kerberos tickets, and KeePass files, among others.

What’s more, the group has invested in a custom VoIP setup wherein hired telephone operators cold-call non-responsive victims in a bid to put additional pressure on them and compel them to pay up after a ransomware attack.

This is not the first time the group has resorted to such a tactic. Last year, Microsoft detailed a BazarLoader campaign dubbed BazaCall that employed phony call centers to lure unsuspecting victims into installing ransomware on their systems.

“The group has huge numbers of compromised devices at its command and employs a highly distributed professional workflow to maintain security and a high operational tempo,” the researchers said.

“It is responsible for an enormous quantity of spam on hundreds of millions of devices, as well as concentrated data breaches and ransomware attacks on high-value targets.”

Iranian Hackers Leveraging BitLocker and DiskCryptor in Ransomware Attacks

 

Description

A ransomware group with an Iranian operational connection has been linked to a string of file-encrypting malware attacks targeting organizations in Israel, the U.S., Europe, and Australia.

Cybersecurity firm Secureworks attributed the intrusions to a threat actor it tracks under the moniker Cobalt Mirage, which it said is linked to an Iranian hacking crew dubbed Cobalt Illusion (aka APT35, Charming Kitten, Newscaster, or Phosphorus).

“Elements of Cobalt Mirage activity have been reported as Phosphorus and TunnelVision,” Secureworks Counter Threat Unit (CTU) said in a report shared with The Hacker News.

The threat actor is said to have conducted two different sets of intrusions, one of which relates to opportunistic ransomware attacks involving the use of legitimate tools like BitLocker and DiskCryptor for financial gain.

The second set of attacks is more targeted, carried out with the primary goal of securing access and gathering intelligence, while also deploying ransomware in select cases.

Initial access is gained by scanning internet-facing servers for highly publicized flaws in Fortinet appliances and Microsoft Exchange servers, dropping web shells, and then using those shells as a conduit to move laterally and activate the ransomware.

“The threat actors completed the attack with an unusual tactic of sending a ransom note to a local printer,” the researchers said. “The note includes a contact email address and Telegram account to discuss decryption and recovery.”

However, the exact means by which the full volume encryption feature is triggered remains unknown, Secureworks said, detailing a January 2022 attack against an unnamed U.S. philanthropic organization.

Another intrusion aimed at a U.S. local government network in mid-March 2022 is believed to have leveraged Log4Shell flaws in the target’s VMware Horizon infrastructure to conduct reconnaissance and network scanning operations.

“The January and March incidents typify the different styles of attacks conducted by Cobalt Mirage,” the researchers concluded.

“While the threat actors appear to have had a reasonable level of success gaining initial access to a wide range of targets, their ability to capitalize on that access for financial gain or intelligence collection appears limited.”

What's Changed for Cybersecurity in Banking and Finance: New Study

 

Description


Cybersecurity in financial services is a complex picture. Not only has a range of new tech hit the industry in the last 5 years, but compliance requirements introduce another layer of difficulty to the lives of infosec teams in this sector. To add to this picture, the overall cybersecurity landscape has rapidly transformed, with ransomware attacks picking up speed and high-profile vulnerabilities hitting the headlines at an alarming pace.

VMware recently released the 5th annual installment of their Modern Bank Heists report, and the results show a changing landscape for cybersecurity in banking and finance. Here’s a closer look at what CISOs and security leaders in finance said about the security challenges they’re facing — and what they’re doing to solve them.

Destructive threats and ransomware attacks on banks are increasing

The stakes for cybersecurity are higher than ever at financial institutions, as threat actors are increasingly using more vicious tactics. Banks have seen an uptick in destructive cyberattacks — those that delete data, damage hard drives, disrupt network connections, or otherwise leave a trail of digital wreckage in their wake.

63% of financial institutions surveyed in the VMware report said they’ve seen an increase in these destructive attacks targeting their organization — that’s 17% more than said the same in last year’s version of the report.

At the same time, finance hasn’t been spared from the rise in ransomware attacks, which have also become increasingly disruptive. Nearly 3 out of 4 respondents to the survey said they’d been hit by at least one ransomware attack. What’s more, 63% of those ended up paying the ransom.

Supply chain security: No fun in the sun

Like ransomware, island hopping is also on the rise — and while that might sound like something to do on a beach vacation, that’s likely the last thing the phrase brings to mind for security pros at today’s financial institutions.

IT Pro describes island hopping attacks as “the process of undermining a company’s cyber defenses by going after its vulnerable partner network, rather than launching a direct attack.” The source points to the high-profile data breach that rocked big-box retailer Target in 2013. Hackers found an entry point to the company’s data not through its own servers, but through those of Fazio Mechanical Services, a third-party vendor.

In the years since the Target breach, supply chain cybersecurity has become an even greater area of focus for security pros across industries, thanks to incidents like the SolarWinds breach and large-scale vulnerabilities like Log4Shell that reveal just how many interdependencies are out there. Now, threats in the software supply chain are becoming more apparent by the day.

VMware’s study found that 60% of security leaders in finance have seen an increase in island hopping attacks — 58% more than said the same last year. The uptick in threats originating from partners’ systems is clearly keeping security officers up at night: 87% said they’re concerned about the security posture of the service providers they rely on.

The proliferation of mobile and web applications associated with the rise of financial technology (fintech) may be exacerbating the problem. VMware notes API attacks are one of the primary methods of island hopping — and they found a whopping 94% of financial-industry security leaders have experienced an API attack through a fintech application, while 58% said they’ve seen an increase in application security incidents overall.

How financial institutions are improving cybersecurity

With attacks growing more dangerous and more frequent, security leaders in finance are doubling down on their efforts to protect their organizations. The majority of companies surveyed in VMware’s study said they planned a 20% to 30% boost to their cybersecurity budget in 2022. But what types of solutions are they investing in with that added cash?

The number 1 security investment for CISOs this year is extended detection and response (XDR), with 24% listing this as their top priority. Closely following were workload security at 22%, mobile security at 21%, threat intelligence at 15%, and managed detection and response (MDR) at 11%. In addition, 51% said they’re investing in threat hunting to help them stay ahead of the attackers.

Today’s threat landscape has grown difficult to navigate, especially when financial institutions are competing for candidates in a tight cybersecurity talent market. Meanwhile, the financial industry has only grown more competitive, and the pace of innovation is at an all-time high. Powerful, flexible tools that streamline and automate security processes are essential for keeping up with that change and for giving banks and finance organizations the visibility they need to innovate while keeping their systems protected.


Conti Ransomware Attack Spurs State of Emergency in Costa Rica

 

Description

Costa Rican President Rodrigo Chaves declared a state of national cybersecurity emergency over the weekend following a financially motivated Conti ransomware attack against his administration that has hamstrung the government and economy of the Latin American nation.

The attack, attributed to the prolific Conti ransomware group, occurred three weeks ago, not long after Chaves took office; in fact, the state of emergency was one of his first decrees as president. The first government agency attacked was the Ministry of Finance, which has been without digital services since April 18, according to a published report.

Conti, a top-tier Russian-speaking ransomware group, is known as one of the most ruthless gangs in the game, with a take-no-prisoners approach specializing in double extortion, a method in which attackers threaten to expose stolen data or use it for future attacks if victims don’t pay by a deadline.

Conti operates on a ransomware-as-a-service (RaaS) model, with a vast network of affiliates and access brokers at its disposal to do its dirty work. The group is also known for targeting organizations where attacks could have life-threatening consequences, such as hospitals, emergency number dispatch carriers, emergency medical services and law-enforcement agencies.

The attack on Costa Rica could be a sign of more Conti activity to come, as the group posted a message on its news site telling the Costa Rican government that the attack is merely a “demo version.” The group also said the attack was motivated solely by financial gain, while also expressing general political disgust, another signal of more government-directed attacks.

Next-Level Incident

The incident demonstrates how a cyber-attack can potentially be as serious as a military action or a natural disaster, especially when it affects a developing nation like Costa Rica, a security professional observed.

“Costa Rica’s state-of-emergency following an attack from Conti is an important rallying call to the rest of the world,” Silas Cutler, principal reverse engineer for security firm Stairwell, wrote in an e-mail to Threatpost. “While the emergency status may have a limited direct impact … it puts the severity of this breach into the same category as a natural disaster or military incident.”

The double-extortion method used not only by Conti but also by a number of other ransomware groups can embolden further ransomware attacks, because most targeted organizations will pay rather than risk the leak of sensitive data, providing more incentive to threat actors, another security professional noted.

“It is a large reason why most victims are paying today,” observed Roger Grimes, data-driven defense evangelist for security firm KnowBe4, in an email to Threatpost.

Conti likely has the personal login credentials of every employee who visited any Costa Rican government site while the ransomware was active on the system, before it locked files, he said. If Conti has indeed leaked that information, it poses a big problem for citizens using government services online.

“If Costa Rica was hosting customer-facing websites in the compromised domains, like they likely were, their customers’ credentials–which are often reused on other sites and services the customers visit–are likely compromised, too,” Grimes said. “Not paying the ransom puts not only Costa Rica’s own services at risk, but those of their employees and customers.”

Indeed, last year the city of Tulsa, OK, put its citizens on alert for potential cyber fraud after Conti leaked some 18,000 city files, mostly police citations, on the dark web following a ransomware attack on the city’s government.

U.S. Offering Aid

To help prevent future attacks like the one on Costa Rica, the U.S. government said last week that it’s offering a hefty reward of up to $10 million for information leading to the identification and/or location of any of the Conti group’s leaders. The U.S. also will offer up to $5 million for information that can lead to the arrest or conviction of anyone conspiring in a Conti ransomware attack.

To date, Conti has been responsible for hundreds of ransomware incidents over the past two years, with more than 1,000 victims paying more than $150 million to the group, according to the FBI. This gives Conti the dubious honor of being the costliest ransomware strain ever documented, according to the feds.

While authorities pursue Conti, governments can take a number of steps to prevent ransomware attacks, security professionals noted. One is to adopt a cultural change when it comes to cybersecurity, observed Chris Clements, vice president of solutions architecture at security firm Cerberus Sentinel.

Governments should shift their focus from the historic mentality of cyber-security as an “IT cost center” toward one that views it as “a culturally ingrained approach that identifies cybersecurity investment, both in tools and people, as a critical strategic defensive shield,” he said in an email to Threatpost.

“Until this changes, the problem of cyber-attack is going to get worse before it gets any better,” Clements said.

Governments also can take proactive steps such as conducting perimeter reviews as a means of mitigating some of the methods Conti-affiliated access brokers use to infiltrate systems, Cutler suggested. This can better secure their perimeters and allow them to react faster to attacks.

However, even this “will not fully prevent these types of attacks” given the network of affiliates and access brokers that RaaS groups like Conti have at their disposal to breach systems, he said.