GUEST ESSAY: Here’s why EDR and XDR systems failed to curtail the ransomware wave of 2021

By Eddy Bobritsky

Looking back, 2021 was a breakout year for ransomware around the globe, with ransoms spiking to unprecedented multi-million dollar amounts.

Related: Colonial Pipeline attack ups ransomware ante

All this happened while Endpoint Detection and Response (EDR) system installations were at an all-time high. EDR systems are supposed to protect IT endpoints against this very malware, ransomware and other types of malicious code.

Despite investing in some of the best detection and response technologies, companies with EDRs are still experiencing ransomware attacks. Surprisingly, during the same timeframe in which EDRs became more popular, malware and ransomware attacks have not only become more frequent; it now also takes an average of 287 days to detect and contain a data breach, according to IBM’s 2021 Cost of a Data Breach Report.

Infection required

So, why is this happening if so many companies are adopting EDR and XDR solutions, which are supposed to neutralize these threats?

In short, it’s just about the way EDRs and XDRs work. EDRs, by design, aren’t really equipped to prevent 100 percent of malware and ransomware attacks.

When most EDRs detect malicious behavior, they develop a response in order to stop the attack from causing more damage.

SHARED INTEL: Data breaches across the globe slowed significantly in Q4 2021 versus Q1-Q3

By Vytautas Kaziukonis

After a gloomy start, with three breach-intensive quarters, 2021 finally ended on a positive note.

Related: Cybersecurity experts reflect on 2021

This conclusion is derived from an analysis of data taken from our data breach detection tool, Surfshark Alert, which comprises publicly available breached data sets to inform our users of potential threats.

Our analysis looked into data breaches that occurred from October to December 2021 (Q4) and compared them with the numbers from July through September 2021 (Q3). Breached accounts were analyzed by country of origin and by the time the breach was recorded.

All information stolen or taken from a system without the authorization of the platform’s owner (in other words, proactively hacked or scraped) is considered a data breach. Associations between data and specific breach instances are only presumed. Full study data is available here.
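The grouping described above — tallying breached accounts by country of origin and by the quarter in which each breach was recorded — can be sketched in a few lines of Python. The record shape below is a hypothetical stand-in; the essay does not publish its actual schema.

```python
from collections import Counter
from datetime import date

# Hypothetical record shape, for illustration only; Surfshark's
# actual dataset schema is not described in the essay.
breaches = [
    {"country": "US", "recorded": date(2021, 8, 3)},
    {"country": "US", "recorded": date(2021, 11, 17)},
    {"country": "FR", "recorded": date(2021, 12, 1)},
]

def quarter(d: date) -> str:
    """Label a date with its calendar quarter, e.g. '2021-Q4'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

# Tally breached accounts by (country of origin, quarter recorded),
# which is enough to compare Q4 volumes against Q3.
by_country_quarter = Counter(
    (b["country"], quarter(b["recorded"])) for b in breaches
)
```

With the counts keyed this way, the Q3-versus-Q4 comparison reduces to looking up the two quarter labels for each country.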

GUEST ESSAY: The case for network defenders to focus on leading — not lagging — indicators

By Rohit Sethi

A key CEO responsibility is reporting results that deliver on a company’s mission to shareholders. This reporting often requires a host of metrics that define success, like Annual Recurring Revenue and sales for software as a service (SaaS) companies. These are lagging indicators where the results follow behind the work required to achieve them.

Related: Automating SecOps

Lagging indicators are distinct from leading indicators, which could include marketing leads, pipeline generation and demos. When it comes to sales targets, there is a correlation between increased sales and shareholder value creation, but closing sales in B2B transactions can be time consuming. Ideally, companies should know that their work will lead to the appropriate future lagging indicators, not the other way around.

Leading indicators provide a shorter feedback loop. They enable employees to drive improvement, and they are more motivating because employees know what they have to do to succeed. In cybersecurity, unfortunately, we often face a bias toward lagging indicators.

Cybersecurity nuances

One could argue that the true lagging indicator in cybersecurity is a breach, and that anything that helps prevent a breach, like adopting a “shift left” philosophy as part of a DevSecOps initiative, is a leading indicator.

However, “vulnerabilities” are lagging indicators because you don’t know how many vulnerabilities you have until you test for them. If targets such as defect density or compliance to scanner policy (i.e. having only a certain number of “allowable” vulnerabilities before releasing software) are the only targets, there are few ways of predicting success.

GUEST ESSAY: JPMorgan’s $200 million in fines stems from all-too-common compliance failures

By Dima Gutzeit

Last month’s $125 million Securities and Exchange Commission (SEC) fine, combined with the $75 million U.S. Commodity Futures Trading Commission (CFTC) fine against JPMorgan, sent shockwaves through financial and other regulated customer-facing industries.

Related: Why third-party risks are on the rise

According to an SEC release, the hefty fines brought against JPMorgan and its subsidiaries were based on “widespread and longstanding failures by the firm and its employees to maintain and preserve written communications.” These views were echoed in a CFTC release as well.

While the price tag of these violations was shocking, the compliance failure was not. The ever-changing landscape of rapid communication via instant messaging apps, such as WhatsApp, Signal, WeChat, Telegram, and others, has left regulated industries to find a balance between compliance and efficient client communication.

Insecure platforms

Approved forms of communication such as phone calls, emails, and fax are viewed by some consumers as obsolete. So, as teams work to remain relevant, team leaders and employees carry the burden of ensuring a better and more intuitive customer experience.

Many of these instant messaging platforms are secure, even offering end-to-end encryption, so the lack of security is not necessarily in the apps themselves. Without a responsible business communication platform for these conversations to flow through, customer requests and discussions live only on employees’ personal devices.

GUEST ESSAY: 5 tips for ‘de-risking’ work scenarios that require accessing personal data

By Alexey Kessenikh

Working with personal data in today’s cyber threat landscape is inherently risky.

Related: The dangers of normalizing encryption for government use

It’s possible to de-risk work scenarios involving personal data by carrying out a classic risk assessment of an organization’s internal and external infrastructure. This can include:

Security contours. Setting up security contours for certain types of personal data can be useful for:

• Nullifying threats and risks applicable to general infrastructural components and their environment.

• Planning required processes and security components when initially building your architecture.

• Helping ensure data privacy.

Unique IDs. It is also possible to obfuscate personal data by replacing it with unique identifiers (UIDs). This de-risks personal data that does not fit in a separate security contour.

Implementing a UID system can reduce risk when accessing personal data for use in analytical reports, statistical analysis, or for client support.
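The UID substitution described above can be sketched as a small pseudonymization vault. This is a minimal in-memory illustration, not the author's implementation: the class and method names are hypothetical, and in practice the mapping between UIDs and real values would live in its own secured datastore (its own security contour).

```python
import uuid

class UidVault:
    """Illustrative pseudonymization store: maps personal-data values
    to opaque UIDs and back. In production, this mapping would be kept
    in a separately secured datastore, not in process memory."""

    def __init__(self):
        self._to_uid = {}
        self._to_value = {}

    def tokenize(self, value: str) -> str:
        """Replace a personal-data value with a stable random UID."""
        if value not in self._to_uid:
            uid = uuid.uuid4().hex
            self._to_uid[value] = uid
            self._to_value[uid] = value
        return self._to_uid[value]

    def resolve(self, uid: str) -> str:
        """Recover the original value; only for authorized contexts
        such as client support."""
        return self._to_value[uid]

# Analysts working on reports see only UIDs; authorized support
# staff can resolve a UID back to the original value when needed.
vault = UidVault()
records = [
    {"email": "alice@example.com", "plan": "pro"},
    {"email": "bob@example.com", "plan": "free"},
]
deidentified = [{**r, "email": vault.tokenize(r["email"])} for r in records]
```

Because the UID is stable per value, de-identified datasets remain joinable for statistical analysis without exposing the underlying personal data.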

GUEST ESSAY: Going beyond watermarks to protect sensitive documents from illegal access

By Julia Demyanchuk

Cyber threats continue to gain momentum, and there are still not enough ways to counter them.

Related: Why the ‘Golden Age’ of cyber espionage is upon us.

The global threat intelligence market size was estimated at $10.9 billion in 2020 and is projected to grow to $16.1 billion by 2025. Yet, according to a study by the Ponemon Institute, the number of insider leaks increased by 47 percent in 2020 compared to 2018. As a result, the majority of businesses (55 percent) use some sort of tool to monitor for insider threats, including data leak prevention (DLP) software (54 percent), user behavior analytics (UBA) software (50 percent), and employee monitoring and surveillance (47 percent).

These tools also enrich documents with metadata and place them in crypto-containers, to which access is granted only by permission. However, all of these solutions are powerless when someone photographs a document with a smartphone or compromises printed copies; they simply cannot cope with such leaks.

MY TAKE: Why companies had better start taking the security pitfalls of API proliferation seriously

By Byron V. Acohido

APIs are putting business networks at an acute, unprecedented level of risk – a dynamic that has yet to be fully acknowledged by businesses.

Related: ‘SASE’ framework extends security to the network edge

That said, APIs are certain to get a lot more attention from security teams, and from board members concerned about cyber risk mitigation, in 2022. This is because a confluence of developments in 2021 put API security in the spotlight, where it needs to be.

APIs have emerged as a go-to tool used by threat actors in the early phases of sophisticated, multi-stage network attacks. Upon gaining a toehold on a targeted device or server, attackers now quickly turn their attention to locating and manipulating available APIs.

“Threat actors have become aware that APIs represent a ton of exposed opportunity,” says Mike Spanbauer, security evangelist at Juniper Networks, a Sunnyvale, Calif.-based supplier of networking technology.

Over the past year, I’ve had several deep conversations parsing how APIs have emerged as a two-edged sword: APIs accelerate digital transformation, but they also vastly expand the attack surface of modern business networks.