This week it’s Petya. Last month, headlines about WannaCry ransomware briefly grabbed the world’s attention. The sad truth is that not a day passes without some organization being held hostage by encryption-based ransomware. Here are a few sobering facts:
- Nearly 40% of all businesses have been affected to some extent by ransomware.[1]
- More than 4,000 ransomware attacks have occurred every day since the beginning of 2016.[2]
- The first quarter of 2017 saw a sizable spike in ransomware activity.[3]
- A consumer is hit by ransomware every 10 seconds, up from every 20 seconds in Q1 2016.[4]
- A company is hit by ransomware every 40 seconds, up from every 2 minutes in Q1 2016.[5]
- Paying the ransom is no guarantee that your data will ever be unlocked![6]
So, what can you do to protect your business? There are several proactive measures that you can take:
- Installing, configuring, and maintaining an endpoint security solution, with protection not just for file-based threats but also for downloads, browsers, firewalls, and the like.
- Educating your users about the proper handling of unknown or suspicious emails with attachments (critical, since 59% of ransomware infections are delivered this way).[7]
- Employing content scanning and filtering on your mail servers, to scan for known threats and block any attachment types that could pose a threat.
- Regularly updating vulnerable software to help prevent infection, making sure that your operating systems and applications have the latest patches to protect against known vulnerabilities.
- Installing and configuring Intrusion Detection Systems (IDS) or Intrusion Prevention Systems (IPS) to detect and block the communication attempts malware may use to create the encryption keys required to encrypt your data.
- Blocking your end users from being able to execute malware, using solutions that prevent users from running downloaded applications, or prevent downloaded threats from being launched.
It is important to remember, however, that even if you do everything right, you can still be hit by a zero-day vulnerability threat which wasn’t caught by your defenses. But if that does happen, there is still hope — all may not be lost.
Stratus’s everRun software uses enhanced virtualization technology to provide either a fault-tolerant or highly available environment for your critical systems. Among its capabilities, everRun lets users take snapshots of virtual machines (VMs) on a schedule, with the snapshots saved to a safe storage repository. By using these scheduled snapshots as regular backups, you can significantly limit your exposure to a ransomware attack: in the event of an infection, you restore your systems from a snapshot to a known good state taken before the infection, just as you would recover a VM whose data had been corrupted. You might lose the small amount of data written between the last snapshot and the moment the ransomware encrypted your systems, but you can recover your systems as they were when the last snapshot was taken and continue business operations, minimizing the ransomware impact.
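To make the recovery logic concrete, here is a minimal Python sketch of picking the latest clean restore point from a snapshot schedule. The schedule, timestamps, and helper function are illustrative assumptions, not everRun's actual interface:

```python
from datetime import datetime, timedelta

def latest_clean_snapshot(snapshots, infection_time):
    """Return the most recent snapshot taken before the infection,
    or None if no clean snapshot exists."""
    clean = [s for s in snapshots if s < infection_time]
    return max(clean) if clean else None

# Hypothetical schedule: snapshots every 6 hours starting at midnight.
start = datetime(2017, 6, 27, 0, 0)
snapshots = [start + timedelta(hours=6 * i) for i in range(4)]

# A ransomware infection detected mid-afternoon.
infection = datetime(2017, 6, 27, 14, 30)
restore_point = latest_clean_snapshot(snapshots, infection)

# The data-loss window is bounded by the snapshot interval.
exposure = infection - restore_point
```

The shorter the snapshot interval, the smaller `exposure` can ever be, which is the trade-off to weigh against snapshot storage costs.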
By planning ahead and having the right protections and systems in place beforehand, you can leave the headlines to others, leave ransomware attackers empty-handed, keep your systems up and running, and ensure your business continues operating. Learn more about how everRun can protect you from a ransomware attack, as well as how you can easily create a continuously available environment.
[1] Understanding the Depth of the Global Ransomware Problem, Osterman Research, August 2016
[2] How to Protect Your Networks from Ransomware, U.S. Justice Department Computer Crime and Intellectual Property Section, 2017
[3] Kaspersky Lab Report Confirms Ransomware Spiked in Q1 2017, Sean Michael Kerner, May 23, 2017
[4] Story of the Year: The Ransomware Revolution, Kaspersky Security Bulletin 2016, Dec. 2016
[5] Story of the Year: The Ransomware Revolution, Kaspersky Security Bulletin 2016, Dec. 2016
[6] Ransomware Victims Urged to Report Infections to Federal Law Enforcement, FBI Public Service Announcement, Sept. 15, 2016
[7] Understanding the Depth of the Global Ransomware Problem, Osterman Research, August 2016
Moviegoers know the danger of a tiny interruption in a building security system. In Ocean's Eleven, as in other fictional heists, just a brief flicker on the security center's video monitor tells the audience that thieves have infiltrated the system to execute their nefarious plan.
In reality, today’s building security is more sophisticated. Large central security systems collect data from devices located across facilities: video, access control, temperature alerts, and building management data. Software analyzes metadata to recognize faces, detect noise or temperature anomalies, and warn of unusual data patterns. The security system can even control elevators, fire doors, and building egress in real time.
So system uptime is more important than ever. Imagine if White House security systems had been down when one of the recent fence-jumpers breached the grounds. A mere minute or two would have been enough to create havoc, perhaps tragedy. For similar reasons, utilities, communications, and other infrastructure operations are now regulated by the Department of Homeland Security or other government agencies.
In a recent webinar for Security Today and Security Magazine, with over 100 registrants (mostly security vendors), we asked about their need for uptime. Respondents reported that 80-100% of their customer projects require high availability for their security systems.
There are several ways to achieve high availability. The majority rely on redundancy. If one server fails, a second takes its place. In older redundant architectures, this happens by activating a physical or virtual standby server, then recreating the previous environment and resuming operations. Unfortunately, this method relies on a failure to trigger recovery; hence uptime is always lost. And because of the complexity of the setup, IT staff must regularly audit and update the failover process to ensure that it will continue to work when needed.
Newer, simpler solutions build redundancy “under the hood.” These systems are designed so that the system, storage, and application act like a single machine. But inside, everything is replicated seamlessly: CPU, memory, network interfaces, and so on. Now if any element fails, other components are already functioning live to continue operation without pause or panic.
At Stratus, we offer "always-on" systems built with software and virtual servers, achieving the same 99.999% uptime as earlier hardware-based methods. The software approach has two great virtues. First, the design retains its simplicity: Windows, applications, the network, and devices all treat the system as a single standard server, which reduces IT intervention during system updates. Second, because the virtual servers can reside on any Intel-based commodity server, you can build and expand using cost-efficient components.
For example, as explained in a case study, McCarran International Airport in Las Vegas manages its physical security system on a Stratus everRun solution. With a virtualized server environment, IT staff split the data center between two locations nearly a mile apart for instantaneous redundancy. In this heavily regulated industry, the airport did not have a single unplanned downtime incident in its first seven years of operation, avoiding TSA fines and adverse publicity.
For building security system integrators, this level of uptime brings a tangible advantage to customers. Panic calls due to component failure are eliminated. A new tier of availability can be offered. Service contracts are fulfilled more efficiently for customer and provider alike. And ultimately customers are more satisfied with the high reliability of the system.
In real life, as in the movies, continuous uptime of building security is essential. Stratus has extensive real-world experience providing always-on servers and expertise to both building security system providers and customers.
Availability Demands of Our Always-on World
The digitalization of our world and the globalization of our economy have truly transformed the business environment in which we all operate. To compete, your business needs to operate 24 hours a day, 7 days a week, 365 days a year. That means your IT systems must run 24/7/365 to support your always-on business.
Always-on has become a global requirement that touches every part of our lives. It applies to your critical business applications, and your business can't run without them. In manufacturing environments, it's about maintaining productivity and reducing waste. Retailers need to ensure transaction-processing systems are up and running to maintain sales targets. In building security, premises and individuals need to be protected from internal and external threats. In public safety, lives are on the line. In financial services, the impact of system downtime is huge when you're managing thousands of transactions per second. And in healthcare, access to patient records and regulatory compliance are crucial. You get the idea: none of these organizations can afford for their applications to be down. And as companies' dependence on IT systems continues to grow, the cost of downtime continues to rise.
It’s About More Than Just Protecting Against System Failures
Availability protection, however, isn’t limited to threats against servers, storage systems, virtual machines or applications. Unplanned downtime can result from localized power failures, building-wide problems or even the complete loss of a site or a facility. Such disasters, whether natural or caused by human error, can result in the total loss of a physical data center, potentially leaving your business unable to function for days or even weeks. In regulated industries, a site-wide problem can lead to data loss that risks compliance, adding significantly to your downtime costs. That’s why businesses in regulated industries like pharmaceuticals, manufacturing and financial services need protection solutions that ensure that all their data is safely replicated and remains available at all times.
Traditional Approaches to Site Protection
When you protect against localized failures through geographic separation, the goal is that if disaster strikes one location, your applications and data are immediately available, up to date, and fully operational at another. Disaster Recovery (DR) solutions enable a business or operation to switch over to a remote location to continue vital technology infrastructure and systems following a natural or human-induced disaster. There are a few things to be aware of regarding DR solutions:
- Failover may not be automatic and may require human intervention.
- DR implementations have Recovery Time Objectives (RTOs), the maximum time a system or application can be down after a failure or disaster, and Recovery Point Objectives (RPOs), the maximum period for which data might be lost.
- Data is typically backed up not continuously but asynchronously, on a schedule. This means that when the DR site is brought up, operations resume from the point of the last backup. If, for example, you back up every 6 hours, the maximum period of data loss could be 6 hours.
While traditional DR solutions can provide long-distance geographic separation for protection, that protection comes with a period of downtime and some level of data loss.
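The data-loss exposure of scheduled, asynchronous backups can be sketched with a small helper. The function name and parameters are illustrative, not part of any product:

```python
def max_data_loss_hours(backup_interval_hours, hours_since_last_backup=None):
    """Worst-case (and, if known, actual) data-loss window for scheduled,
    asynchronous backups: everything since the last backup is lost."""
    if hours_since_last_backup is None:
        # Worst case: failure strikes just before the next backup runs.
        return backup_interval_hours
    return min(hours_since_last_backup, backup_interval_hours)

# The 6-hour schedule from the text puts up to 6 hours of data at risk.
worst_case = max_data_loss_hours(6)

# A failure 2.5 hours after the last backup loses 2.5 hours of data.
actual = max_data_loss_hours(6, 2.5)
```

This is the arithmetic behind RPO: for scheduled backups, the RPO can never be better than the backup interval itself.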
Metro-Wide Availability Protection Prevents Downtime and Data Loss
The needs of our increasingly always-on world are driving the race to zero for RTO and RPO. This demands something more than traditional DR can offer.
An alternative to traditional DR solutions is synchronous replication between geographically separated sites. The network requirements of synchronous replication mean these solutions are best suited to geographic separation within a metropolitan area. Such Metro-wide Availability Protection solutions can defend your critical business applications against localized power failures, building-wide problems, or physical machine failures without downtime or data loss.
Unlike DR solutions, which rely on asynchronous replication and must therefore focus on recovering from downtime, Metro-wide Availability Protection with synchronous replication can deliver zero downtime for your applications during outages. In the event of a physical machine or site failure, a Metro-wide Availability Protection solution automatically detects the failure and keeps virtual machines running without interruption. Preventing downtime, rather than merely helping you recover from it, has a big impact on an organization's revenues, costs, customer satisfaction, and efficiency.
Metro-wide Availability Protection, with synchronous replication, provides geographic separation protection within a metropolitan area, without downtime or data loss in the event of a localized failure or disaster.
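The difference between the two replication modes comes down to when a write is acknowledged. Here is a toy Python sketch of the two write paths; the `Replica` class and record names are hypothetical, purely to illustrate the concept:

```python
class Replica:
    """Toy storage node; a real implementation would persist to disk."""
    def __init__(self):
        self.log = []

    def persist(self, record):
        self.log.append(record)
        return True  # acknowledge once the record is durable

def synchronous_write(record, primary, secondary):
    # The write is acknowledged only after BOTH sites persist it, so a
    # site failure never loses a committed record (RPO = 0).
    return primary.persist(record) and secondary.persist(record)

def asynchronous_write(record, primary, pending):
    # Asynchronous DR acknowledges after the local write; the record
    # waits in a queue for later shipment. A disaster before shipment
    # loses it -- this is the RPO exposure of traditional DR.
    primary.persist(record)
    pending.append(record)
    return True
```

The cost of the synchronous path is that every write waits on the inter-site round trip, which is why this approach is practical within a metro area but not across continents.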
A Powerful Addition to Your Availability Toolkit
Metro-Wide Availability Protection is a powerful addition to your toolkit for always-on systems and applications. Unlike typical disaster recovery solutions, which are reactive and rely on backup and restore, it uses synchronous data replication between locations in a metro area to allow continuous operation through a site failure, truly safeguarding your business from major downtime caused by potentially catastrophic events such as flooding and power outages.
For organizations across the industry spectrum, heightened awareness of both physical and cyber threats is driving increased investment in automation systems for building security. They are deploying more access control, more cameras, more alarms, more backup power systems, more logs and databases.
Yet these and other building automation and security systems are only effective as long as the servers that support them are up and running.
Approaches to building automation and security availability generally fall into three categories:
- Data backups and restores
- High availability (HA)
- Continuous availability (CA)
Which of these three general approaches is needed for your building security applications will depend on a range of factors.
First, however, it’s important to determine the state of your current security automation infrastructure. While your system architecture may be billed as “high availability,” this term is often used to describe a wide range of failover strategies—some more fault tolerant than others. In the event of a server failure, will there be a lapse in security? Can critical data be lost? Is failover automatic, or does it require intervention?
Assessing the potential vulnerabilities of your infrastructure can help you avoid a false sense of security that could come back to haunt you. This insight will also help you define your needs, guiding you toward the most appropriate availability strategies for your security environment.
So how much availability do you need? Obviously, deploying the highest level of fault tolerance for all of your security applications across the enterprise would be ideal. But the cost of such a strategy could be prohibitive. Moreover, not all security applications require the highest level of uptime.
For example, some applications may be deployed in a multi-tiered approach. In this arrangement, a "master server" in a centralized location controls a network of site servers, which regularly cache data back to it. In this scenario, you might configure the master server for fault tolerance (FT) but decide that high availability (HA) is adequate for the site servers, given their workloads. It all depends on the criticality of each server's function within the security automation architecture.
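A tier-assignment plan like this can be expressed as a simple rule keyed on each server's role. The server names, roles, and tier labels below are hypothetical, just to show the shape of such a plan:

```python
# Hypothetical inventory for a multi-tiered security deployment.
servers = [
    {"name": "master-hq", "role": "master"},
    {"name": "site-01", "role": "site"},
    {"name": "site-02", "role": "site"},
]

def assign_tier(server):
    # The central master controls the whole network, so it gets fault
    # tolerance; site servers that only cache data back to it get HA.
    return "FT" if server["role"] == "master" else "HA"

plan = {s["name"]: assign_tier(s) for s in servers}
```

In practice the rule would weigh more factors than role alone (workload, regulatory exposure, budget), but the principle of matching tier to criticality is the same.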
Carefully assessing your requirements for each security application and planning your infrastructure to provide the appropriate level of availability is the key to balancing your real-world needs with the realities of your budget.
Are your building security and automation systems ready for a disaster? Check out this Infographic containing key statistics from the Stratus 2015 Building Security and Automation Survey.
As building automation and security systems become increasingly reliant on server technology, ensuring the availability—or uptime—of the applications running on those servers is absolutely critical. But how much availability is “good enough”? And what’s the best way to achieve that level of availability?
To answer those questions, it’s important to understand the three basic approaches to server availability:
1. Data backups and restores
Having backup, data-replication, and failover procedures in place is the most basic approach to server availability. These will help speed the restoration of an application and help preserve its data following a server failure. However, if backups occur only daily, significant amounts of data may be lost. At best, this approach delivers approximately 99 percent availability.
That sounds pretty good, but consider that it equates to an average of about 87.6 hours of downtime per year, or more than 90 minutes of unplanned downtime per week. That might be good enough for a business application that is not mission critical, but it clearly falls short of the uptime requirements for building security and life-safety applications.
2. High availability (HA)
HA includes both hardware-based and software-based approaches to reducing downtime. HA clusters are systems combining two or more servers running with an identical configuration, using software to keep application data synchronized on all servers. When one fails, another server in the cluster takes over, ideally with little or no disruption. However, HA clusters can be complex to deploy and manage. And you will need to license software on all cluster servers, increasing costs.
HA software, on the other hand, is designed to detect evolving problems proactively and prevent downtime. It uses predictive analytics to automatically identify, report and handle faults before they cause an outage. The continuous monitoring that this software offers is an advantage over the cluster approach, which only responds after a failure has occurred. Moreover, as a software-based solution, it runs on low-cost commodity hardware.
HA generally provides from 99.95 percent to 99.99 percent ("four nines") uptime. On average, that means from about 4.4 hours (at 99.95 percent) down to about 53 minutes (at 99.99 percent) of downtime per year, significantly better than basic backup strategies.
3. Fault tolerance (FT)
Also called an “always-on” solution, FT’s goal is to reduce downtime to its lowest practical level. Again, this may be achieved either through sophisticated software or through specialized servers.
With a software approach, each application lives on two virtual machines with all data mirrored in real time. If one machine fails, the applications continue to run on the other machine with no interruption or data loss. If a single component fails, a healthy component from the second system takes over automatically.
FT software can also facilitate disaster recovery with multi-site capabilities. If, for example, one server is destroyed by fire or sprinklers, the machine at the other location takes over seamlessly. This software-based approach prevents data loss, is simple to configure and manage, requires no special IT skills, and delivers upwards of 99.999 percent availability (roughly five minutes of downtime a year), all on standard hardware.
FT server systems rely on specialized servers purpose-built to prevent failures from happening and integrate hardware, software and services for simplified management. They feature both redundant components and error-detection software running in a virtualized environment. This approach also delivers “five nines” availability, though the specialized hardware required does push up the capital cost.
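The downtime figures quoted for all three approaches follow from one piece of arithmetic: the fraction of a year that falls outside the availability percentage. A short Python sketch makes the conversion explicit:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability_pct):
    """Expected unplanned downtime per year for a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# The tiers discussed above, from basic backups to "five nines" FT.
for pct in (99.0, 99.95, 99.99, 99.999):
    print(f"{pct}% -> {downtime_minutes_per_year(pct):.1f} min/year")
```

Running this shows why each additional "nine" matters: every step cuts the yearly downtime budget by roughly an order of magnitude.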
Making server availability a cornerstone of your building security automation strategy pays dividends both in terms of day-to-day management and when situations arise that test your security. With the right strategy up front, your building’s security systems will be there when it really counts today and in the future. In today’s constantly changing, “always-on” world, that’s all the time.