
As the threat of ransomware continues unabated, it’s becoming clear that effective prevention requires a holistic tool that ensures resiliency no matter where data resides. What’s needed is an end-to-end data management strategy that begins with visibility and control and extends across the entire data landscape, including on-premises, in the cloud, and across clouds.

In that sense, protecting against ransomware is no different from other cyber threats, for which a “defense-in-depth” strategy has long been the mantra. One way to look at it is as a progression: at the top level, tools such as firewalls filter out the most obvious threats, while further down you do finer filtering, such as with role-based access control and directories integrated with zero-trust strategies, says Christopher Winter, Technical Marketing Engineer with Veritas.

With respect to ransomware, the ultimate goal is to protect the data that perpetrators are targeting, which makes your backup infrastructure the last line of defense. An effective backup and recovery strategy is therefore a strong insurance policy.

“Why pay the ransom if you can just recover your data?” Winter asks.

Elements of effective ransomware protection

The factors that contribute to delivering reliability and recovery at scale include multi-factor authentication, role-based access control, integrated protection and detection, and restricted remote access. The key is reliably delivering all of these across your entire landscape, including on-premises and cloud-based resources.

Another important component is the ability to defend against unauthorized data exfiltration, which has become a secondary objective of ransomware perpetrators in addition to encrypting your data. Perpetrators figure that if they can steal your data and threaten to make it public, they have more leverage to get you to pay a ransom.

Protecting against that kind of threat requires anomaly detection, which increasingly involves the use of machine learning (ML) technology. By establishing a baseline of what constitutes normal data activity, an ML engine can detect actions that are out of the norm, such as a subtle increase in duplication rate or data extraction.

“It doesn’t mean something horrible happened, but it should trigger an investigative workflow,” Winter says. “Maybe conduct a malware scan or ask the server owner questions about whether something significant happened with their data.”
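The baseline idea behind that kind of anomaly detection can be illustrated with a minimal sketch. This is not Veritas’s detection engine, and the function name, window size, and data are hypothetical; it simply shows how a trailing baseline can flag a day whose activity deviates sharply from the norm, triggering an investigative workflow rather than an automatic alarm.

```python
from statistics import mean, stdev

def flag_anomalies(daily_rates, window=14, threshold=3.0):
    """Flag days whose value deviates sharply from the trailing baseline.

    daily_rates: a list of floats, e.g. daily data-change rates (%).
    A flagged day should prompt investigation, not be treated as proof
    that something horrible happened.
    """
    flagged = []
    for i in range(window, len(daily_rates)):
        baseline = daily_rates[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:  # perfectly flat baseline; nothing to compare against
            continue
        z_score = (daily_rates[i] - mu) / sigma
        if abs(z_score) > threshold:
            flagged.append(i)
    return flagged

# A stable ~2% daily change rate, then a sudden spike on the last day:
rates = [2.0, 2.1, 1.9, 2.0, 2.2, 2.0, 1.9, 2.1, 2.0, 2.0,
         2.1, 1.9, 2.0, 2.1, 45.0]
print(flag_anomalies(rates))  # [14] — the spike's index is flagged
```

A production ML engine would model many signals at once (deduplication rates, extraction volume, access patterns), but the principle is the same: learn normal, then flag deviations for a human to investigate.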

Effective data protection architectures

Another key to ransomware prevention is having effective data protection designs. Veritas recommends a “3-2-1-1” architecture that calls for having three copies of all data stored in at least two locations, one off-site and one immutable.
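The 3-2-1-1 rule is simple enough to express as a checklist. The sketch below is illustrative only (the field names and the example plan are hypothetical, not a Veritas API); it shows how the four conditions of the rule can be validated against an inventory of backup copies.

```python
def satisfies_3_2_1_1(copies):
    """Check a list of backup copies against the 3-2-1-1 rule:
    3+ copies, 2+ locations, at least 1 off-site, at least 1 immutable.

    copies: list of dicts, e.g.
      {"location": "dc-east", "offsite": False, "immutable": False}
    """
    return (
        len(copies) >= 3
        and len({c["location"] for c in copies}) >= 2
        and any(c["offsite"] for c in copies)
        and any(c["immutable"] for c in copies)
    )

plan = [
    {"location": "dc-east", "offsite": False, "immutable": False},  # primary
    {"location": "dc-west", "offsite": True,  "immutable": False},  # replica
    {"location": "cloud",   "offsite": True,  "immutable": True},   # WORM-style copy
]
print(satisfies_3_2_1_1(plan))  # True
```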

The Veritas NetBackup Auto Image Replication (AIR) feature makes off-site replication fast and easy, whether to a single backup recovery site or several of them – including cloud-based sites.

NetBackup also supports immutable backups, which are fixed, unchangeable, and undeletable—even by administrators. Think of them as similar to the old write once, read many (WORM) tape format, which was an immutable form of storage.

Finding an appropriate level of risk

One problem with WORM tape, of course, was that it got expensive over time. And cost issues haven’t gone away as a factor in determining your best ransomware protection strategy.

Fortunately, not every bit of data requires immutable storage. The goal is to determine an appropriate level of redundancy and data protection for your various data stores, without over- or under-spending.

“We’re often asked, ‘Put yourself in our shoes. Given our limiting factors, what would you do?’” Winter says.

He encourages customers to design with their limitations in mind while also planning for the future and what’s likely to change. The 3-2-1-1 approach can apply to any organization; it’s just a matter of how much data goes into each bucket, particularly those offering the most protection, such as immutable storage.

Making those sorts of determinations means calculating how much downtime or data loss your organization can afford for data related to various applications, then planning accordingly. Just do it now, before a bad actor forces you to do it the hard way.
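That calculation can be as simple as pricing out a single incident. The numbers and function below are hypothetical, purely to illustrate the arithmetic: cost an hour of downtime, then multiply by the hours you would be down (recovery time) plus the hours of data you would lose (time since the last usable backup).

```python
def incident_exposure(hourly_cost, downtime_hours, data_loss_hours):
    """Rough cost of one ransomware incident for a given application.

    hourly_cost: what an hour of this application being down, or an
    hour of its data being gone, costs the business (a flat rate for
    illustration; real models are rarely this linear).
    """
    return hourly_cost * (downtime_hours + data_loss_hours)

# An application costing $10,000/hour, recovered in 8 hours from a
# backup taken 4 hours before the attack:
print(incident_exposure(10_000, downtime_hours=8, data_loss_hours=4))  # 120000
```

Running that exercise per application tells you which data belongs in the most protected (and most expensive) buckets, and where cheaper tiers are acceptable.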

Learn more at https://www.veritas.com/ransomware