Stratus Blog

Showing archives for category Virtualization

Connecting Legacy into an IIoT World

2.16.2017 | IA, IIoT, Virtualization

Last week, I had the pleasure of sitting on a panel at the ARC Industry Forum in Orlando. The topic du jour was OT and IT convergence, and a common thread throughout the discussion was how an organization can move to IIoT. Are there new architectures? Is IIoT a rip-and-replace-only option? How can OT partner better with IT to meet IIoT goals? Overall, it was an engaging and excellent discussion, and I came away from the session considering the connection between existing IA technologies and IIoT.

Here are some thoughts.

  1. Layer in the first piece to minimize disruption to legacy equipment. Any time you make a significant architectural change, a good first step is to layer something around the existing architectural foundation rather than going through a major replacement. If you assume that SCADA and Historian applications are at the core of the IA world, then you can look to either devices or analytics as a starting point. For example, one of our customers added considerable value by layering cloud-based analytics over their existing SCADA infrastructure. Other companies are introducing more and more endpoint devices into the mix first. Overall, look at your goals, find a pragmatic starting point, and begin by adding one non-invasive layer. Once you have worked out the first layer, move on to the next. This is often when you may see that the architecture core needs a boost, which brings me to step 2.
  2. Virtualize that core infrastructure. Time and again we see underpowered, unreliable, and out-of-date (read: insecure) infrastructure supporting a SCADA layer. That may have been acceptable in the old way of thinking, but now is the time to consider an upgrade. In the world of IIoT, that software is business critical and needs a rock-solid place to run – such as a Stratus ftServer system. Once you have virtualized on a solid foundation, it will be easier to manage and to expand to other applications in the future.
  3. Respect your institutional knowledge, but also look to the future. OT skills and knowledge are incredibly valuable, but the introduction of new technologies at the edge can be daunting. IT can help with the technology, provided that the OT folks ensure that business needs are fulfilled. Near the top of that list is simplicity. Adding a lot of new technology for the sake of entrenched data center standards is a recipe for failure. Look for solutions that can survive and thrive at the edge without requiring a lot of IT support.

All in all, success will be determined by smart scoping and an understanding of your unique user requirements. When you break things down that way, the challenge is greatly reduced.

The Good, Better, and Best of Virtualization Hosting

10.4.2016 | Virtualization, VMware

I’ve been involved in shrinking downtime and ensuring business data availability for the better part of 27 years – so when Stratus offered me a chance to contribute to their blog, I couldn’t say no.

Whenever you consider an evolutionary, even revolutionary, improvement to an IT process, there are almost always options and ramifications. Choosing how you will embrace server virtualization in business-critical environments is one of those scenarios.

GOOD – Yay, you’re embracing server virtualization! You’re likely evolving beyond just physical server consolidation and are gaining benefits such as faster provisioning, better protection/recovery options, and flexible resource sharing.

That’s great. This is where everyone comes into the conversation. The question now is how you’ll move further forward.

BETTER – Be smart about how you deploy and manage your virtualization infrastructure. For many organizations, this implementation will include “building blocks” made of hyperconverged appliances and/or converged infrastructure stacks. The goal is to provide a better underlying layer that will provide a solid foundation for the server virtualization that resides above it.

This is the same reason that most organizations do not build their own servers from PC components sold at Fry’s or Best Buy. Yes, you could acquire similar CPUs and hard drives, but if your business relies on the hardware, you start with commercial servers and storage – and perhaps later evolve to incorporate blades or CI/HC components.

But after you’ve put VMware ESX on each of those servers, you won’t care who the manufacturer is, will you? Actually, you will – because inadequate servers will continually hinder what you were trying to achieve in the first place.

The more quality that you invest in the underlying “plumbing” of your infrastructure, the less you’ll focus on the components, and the more assured you’ll be that you’re unlocking the benefits of virtualization above it.

BEST – If the difference between Good and Better is superior underlying infrastructure nodes, then the difference between Better and Best is an infrastructure that not only is made of quality components, but also is durable and resilient. In much the same way that you chose commercial servers instead of self-built gear (even though virtualization abstracts those details away), consider choosing an even “better-than-commercial” virtualization host for the same reason – to get a more reliable underlying infrastructure for the VMs that your business is depending on.

With that in mind, here’s an ESG video that discusses the Stratus plus VMware solution stack:

Get The Full ESG Report

You can check out all of ESG’s data protection coverage at

Jason Buffington (@JBuff) is the Principal Analyst at ESG focusing on all forms of data protection, preservation, and availability. He has actively deployed or consulted on data protection and storage solutions for 27 years, working at channel partners, various data protection software vendors, and Microsoft.

The path to modern ICS starts with virtualization

9.23.2016 | Industrial Automation, Virtualization

I talk to a lot of people in the industrial automation world, and almost without exception they share the same challenge. They need to prevent unplanned downtime while preparing for the future, which includes evolving to the Industrial Internet of Things (IIoT), Industry 4.0, and smart factories.

This perspective was only reinforced when I asked attendees at a recent IndustryWeek webinar what concerned them most about unplanned downtime. In my online poll, the top three concerns were: potential revenue loss (54%), loss of visibility resulting in a safety violation (15%), and additional cost to run things manually (13%).

During the webinar, I laid out a strategy for addressing this challenge. My recommendation: modernize your industrial control systems (ICSs). The first step is virtualization, which consolidates multiple physical machines onto a single hardware system.

Virtualization eliminates the need to run individual control and automation applications on their own physical systems, each of which represents a potential single point of failure. Instead, your applications run on virtual machines (VMs) that share common computing resources across the underlying infrastructure. Each VM is securely partitioned to ensure data integrity, but you’ve eliminated all those points of failure. You also now have just one physical system to manage and support rather than many.
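The point about eliminating many independent points of failure can be made with a little probability arithmetic. Here is a minimal sketch; the per-server failure probability is a hypothetical placeholder, not a Stratus figure:

```python
# Illustrative sketch: the chance that at least one of N independent
# servers fails in a year, versus running one consolidated host.
# The failure probability below is hypothetical, for illustration only.

def p_any_failure(p_single: float, n_servers: int) -> float:
    """Probability that at least one of n independent servers fails."""
    return 1 - (1 - p_single) ** n_servers

p = 0.02  # assumed annual failure probability per server (hypothetical)
print(f"10 separate servers: {p_any_failure(p, 10):.1%} chance of an outage")
print(f"1 consolidated host: {p_any_failure(p, 1):.1%} chance of an outage")
```

With ten independent servers, the odds that *something* fails are roughly nine times higher than with one host, which is exactly why the remaining single machine then deserves extra protection.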

By now, you’ve probably recognized a new potential single point of failure. What happens if that one physical machine with all your virtualized control and automation applications goes down? It would be catastrophic, of course.

So the second most critical step is to protect your virtualized systems.

Here are four options:

  1. No protection. Yes, some people opt to take their chances because historically they’ve never had a system failure. This approach avoids any capital expense, but if recovery is required, it would take many hours if not days and be very expensive in terms of lost revenue and productivity.
  2. Hardware failover cluster. This is a common high-availability approach in IT that can reduce recovery time to minutes or hours. But clustering requires multiple physical systems, which defeats the purpose of virtualization by adding both cost and complexity.
  3. High-availability virtualization software. This approach is essentially the same as hardware clustering but uses virtualization software to enable failover. You still need multiple systems. And while failover can be virtually instantaneous, it requires an application restart, which can take minutes to hours.
  4. Fault-tolerant server. This is an integrated system with built-in redundancy to prevent system failure. There is no need to fail over to another machine. It’s simply one physical machine that’s always on. Even if a component fails, the server, VMs and applications all keep running.
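To compare the four options concretely, a back-of-the-envelope cost model can help. The recovery times and the cost-per-hour figure below are hypothetical placeholders; substitute your own numbers:

```python
# Rough per-incident downtime cost for each of the four options above.
# All figures are hypothetical, chosen only to illustrate the comparison.

COST_PER_HOUR = 10_000  # assumed downtime cost in dollars per hour

recovery_hours = {
    "no protection":         24.0,  # rebuild and restore from scratch
    "hardware cluster":       1.0,  # failover plus application restart
    "HA virtualization":      0.5,  # fast failover, app still restarts
    "fault-tolerant server":  0.0,  # no failover needed at all
}

for name, hours in recovery_hours.items():
    print(f"{name:>22}: ~${hours * COST_PER_HOUR:,.0f} per incident")
```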

Plus, fault-tolerant servers are ready for the future of industrial automation today. That’s important because as you upgrade your ICS and move toward IIoT, you need a solution with enough horsepower to process massive amounts of data collected from across your operation. At Stratus, we’re seeing our customers achieve early wins in IIoT with things like predictive maintenance analytics to drastically reduce unplanned downtime. And that’s just the tip of the iceberg.

If you’re looking for the path of least resistance to prevent unplanned downtime and set a course for the future of industrial automation, then start by modernizing your ICS on fault-tolerant Stratus servers.

Capture Real Value from Virtualized SCADA/HMI Systems without Risk


Operations organizations aren’t always fans of IT. But since 2010, we’ve seen one of IT’s tried and true technology strategies increasingly adopted by operations: virtualization. There’s a good reason for that. They’re seeing powerful benefits from virtualizing their industrial automation (IA) applications.

First, virtualizing Supervisory Control and Data Acquisition (SCADA) and Human Machine Interface (HMI) systems provides huge cost savings by reducing hardware. Virtualization runs multiple applications side by side on one physical server. It also improves server utilization: instead of the 10 to 20% utilization typical of physical deployments, you can reach more like 60 to 70% by virtualizing.
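Those utilization figures translate directly into a consolidation ratio. A quick sketch, using assumed numbers in the ranges above:

```python
# Rough consolidation estimate: if each physical server averages ~15%
# utilization, how few virtualization hosts could carry the same load
# at a target of ~65%? (Both figures are assumptions for illustration.)

import math

def hosts_needed(n_physical: int, avg_util: float, target_util: float) -> int:
    total_load = n_physical * avg_util          # aggregate work, in "server units"
    return math.ceil(total_load / target_util)  # hosts required at the target

# 20 lightly loaded SCADA/HMI servers collapse to a handful of hosts
print(hosts_needed(20, 0.15, 0.65))  # prints 5
```

Real sizing would also account for memory, I/O, and peak (not average) load, but the arithmetic shows why four-to-one or better consolidation is common.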

Another advantage is downtime-free upgrades and patches. All you do is create a new VM (virtual machine), load and test the upgrade or patch, and swap out the old VM for the new one in production when everything checks out. This is particularly useful as you adopt new analytics applications, sensor data collection systems, and other elements of the Industrial Internet of Things (IIoT).

You may be wondering: with all my applications sitting on one server, what happens if that server goes down? The weakness with virtualization is that it creates a single point of failure. Instead of losing one application, you would lose them all.

There are several ways to address this problem:

  1. Run your virtualized applications on a standard server, but maintain a hot or cold standby in case a failure occurs. At best, you’re looking at several hours to get applications back into production; at worst, several days. Either way, expect some data loss.
  2. Deploy a server cluster and run your virtualized applications in parallel. Clusters offer effective failure recovery, but they introduce complexity and expense. You’ll need two servers, plus switches and additional networking, along with failover scripts. Even with all that, you still risk anywhere from a few to 30 minutes of downtime and potential data loss during failover.
  3. Get high availability (HA) from a virtualization vendor. Virtualization HA is similar to clusters, although it’s easier to deploy and automate. Regardless, you still have the added expense of another server, with at least several minutes of downtime and data loss while failing over.
  4. The preferred solution is to prevent failures, downtime, and data loss entirely by putting your virtualized applications on a fault-tolerant system. Here’s a real-world case in point:

A municipal water and wastewater treatment plant runs its virtualized SCADA system on the always-on Stratus ftServer. It’s a single-server solution that operates like any other industry-standard server but is designed specifically for critical applications. This ensures data availability so the plant can satisfy tight EPA regulations. Plus, it lowers costs through reduced hardware, software, and maintenance, helping the municipality cope with a decreasing tax base.

This is a prime example of how you can capture the benefits of virtualization for SCADA/HMI without any of the risks.

Find out more about virtualizing your HMI/SCADA without the risk by watching this Automation World webinar.

Don’t put up with downtime and complexity in your SCADA/HMI environment


When you look at the manufacturing sector from a global perspective, the competition truly never sleeps. It’s everywhere, all the time. That puts huge pressure on manufacturers to keep their industrial automation (IA) environments up and running.

In particular, Supervisory Control and Data Acquisition (SCADA), Human Machine Interface (HMI), Historians, and other IA systems are critical to meeting customer demands within tight schedules and managing inventory to maximize profits. They’re also central to product quality and regulatory compliance. Unplanned downtime can wreak havoc.

If your SCADA/HMI goes down, data collection stops and some data may be lost. You’re essentially running blind, which is a big problem. With the enhancements made to SCADA in recent years, you could need that data for everything from predictive maintenance to asset performance management and alarm response intelligence. Missing data also could put you in a tight spot during a compliance audit.

Depending on where downtime occurs, especially if it’s in a remote location or a site without skilled IT resources on staff, you could be down for days. That gets costly very quickly. In fact, a Stratus paper and packaging manufacturing customer calculated their cost for unplanned downtime at $33,000 per hour.
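A figure like that compounds quickly over a year. This sketch annualizes the customer's $33,000/hour number for a few assumed amounts of unplanned downtime (the downtime hours are hypothetical):

```python
# What $33,000/hour implies over a year, for hypothetical amounts of
# unplanned downtime. Only the hourly rate comes from the case above.

COST_PER_HOUR = 33_000

for downtime_hours in (2, 8, 24):
    annual_cost = downtime_hours * COST_PER_HOUR
    print(f"{downtime_hours:>2} h/year of downtime -> ${annual_cost:,} lost")
```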

Traditional SCADA/HMI infrastructures also aren’t very efficient. If you follow the traditional approach of assigning one application per server, you could have numerous servers to manage and patch. It gets complex and time-consuming. Plus, your applications are probably only using 5, 10, or 20% of each server’s capacity, which is a lot of waste.

At Stratus, we recommend a different approach to running your critical IA systems.

First, virtualize. Virtualization has been around a long time in the IT world, but it’s just catching on in operations environments. Basically, it’s an abstraction layer that allows operating systems and applications to run above physical hardware. Instead of running your SCADA, HMI, or Historian each on an individual physical server, you run them on “virtual machines” side by side on the same hardware. That allows you to use 60 or 70% of the server and greatly reduce the number of physical systems needing management and maintenance in your IA environment.

Second, run your virtualized environment on a fault-tolerant server. When you have multiple applications residing on a single piece of hardware, uptime of that system is more important than ever. Stratus always-on servers ensure continuous availability of your applications, without a single point of failure and risk of data loss.

The paper and packaging manufacturer mentioned before saw the value of this approach. They moved their Manufacturing Execution System (MES) onto Stratus always-on servers and eliminated unplanned downtime while simplifying their infrastructure. Plus, they increased profitability thanks to a continuous operation without any line stoppages.

Unplanned downtime remains a key issue for industrial automation, particularly in the competitive manufacturing environment. The good news is that Stratus can help you eliminate the issue.

Automation World Webinar: Modernization to Prevent Unplanned Downtime Using Virtualization and Fault Tolerance

Read the Paper and Packaging Manufacturer Case Study

The Path to Smarter Buildings

6.24.2016 | Building Automation, High Availability, Smart Buildings, Virtualization

The buildings we sit in and the public spaces we visit (like airports) are getting smarter all the time. A simple case in point is the lights that automatically turn on when you enter your office. A more advanced example is a badge reader tied to your company’s HR database that provides secure access to a room. A future example is accessing a room with your badge (or phone) and having that room’s lighting and climate automatically set to your preferences. This future is real, and a lot of technology is beginning to converge to usher it in. These advancements are all very exciting, but those of us directly involved in creating smarter buildings should not underestimate the complexity involved. Here are some key considerations when charting your course towards a smarter building.

  1. Plan to consolidate your building technology – Right now, each building control system (heating, power monitoring, video, access control) runs on a separate application, likely deployed on separate servers. This leads to a heavy footprint that is hard to manage and is likely costing you too much money. So the first step towards a smarter building is often to virtualize your building’s software infrastructure. Stratus and our partners can provide the reliable foundation required for this with our recently announced Stratus Always-On Infrastructure for Smart Buildings.
  2. Take a close look at your needs for availability and fault tolerance – Once you have consolidated your solutions, you’ll invariably be forced to decide how and where to virtualize these applications. The easy answer is to just add the VMs into your existing data center. That’s a reasonable idea if your needs for availability and compliance are basic (say, in an office campus). But if you serve critical areas (such as access controls into a clinical environment or runway lighting controls at an airport) where no amount of downtime is acceptable, you may need a specialized solution deployed on site that ensures failures of service won’t happen. And remember: the more applications or building services you consolidate onto an infrastructure, the more likely it is to need fault tolerance.
  3. Learn what you can do to eliminate downtime with Application Availability Solutions from Stratus.

  4. Understand that the smart building infrastructure is pervasive and expanding – The Internet of Things is enabling the deployment of cheaper devices to help build smarter buildings. However, all of those devices need some degree of monitoring and visibility. This is why we have built everRun® Monitor powered by Sightline Assure® into our Always-On Infrastructure for Smart Buildings. It goes beyond standard server-based infrastructure and can monitor the entire gamut of smart building technology, giving building managers the insights they need to secure and operate their buildings more effectively.
  5. Get ready for analytics and compliance – A big part of the business case for smart buildings is that new intelligence, driven by the data produced by endpoint devices (sensors, cameras, badge readers), will help reduce costs and/or make buildings more secure. Applying analytics to these new building services will deliver those efficiencies and improvements, provided that the data produced is consistent and available.

The smart buildings of the future are both realistic and beneficial. There are many cost efficiencies to be gained, as well as safer spaces for people to work and visit. However, like many things, it all starts with a reliable technical foundation to build upon.

The ABCs of the Industrial Internet of Things

6.13.2016 | High Availability, IIoT, Industrial Automation, Virtualization

The Industrial Internet of Things (IIoT) holds huge rewards for manufacturing companies, from consumer goods makers to petrochemical firms to utilities. Companies large and small are already crediting IIoT with hard cost savings and advances in operational efficiency and product quality. This blog will answer frequent questions about IIoT that we get from our industrial customers and that you might have as well.

What Is IIoT Anyway?

Sensor data, machine-to-machine communication, and automation systems have existed in industrial environments for years. IIoT builds on these technologies and bakes smart devices, machine learning, big data, and analytics into the mix.

With additional data sources and better intelligence and analytics embedded into the supply chain, you can adjust your industrial processes in real time. From there, you can expect tangible progress toward improved operational efficiency, return on assets, and profitability. That is the heart and soul of IIoT.

My Production Line Is Working Fine. Why Would I Change Things?

One of the biggest drags on inventory and order flow is unplanned downtime. For example, downtime for a large turbine powering a production line can cost a company up to $10,000 an hour. To avoid outages, manufacturers take production systems offline for periodic maintenance – needed or not. Not only is this costly, but even planned downtime is disruptive.

Alternatively, some manufacturers are using IIoT for predictive maintenance of factory line equipment. In these situations, a smart sensor attached to an assembly line motor monitors performance and reports on changes, such as temperature or vibration, which may signal failing parts. A proactive repair of the motor could avoid a complete failure and potentially weeks of downtime, costing millions of dollars in lost revenue. Or, it could shave seconds from the assembly line process and help the business fulfill orders and recognize revenue faster.
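The monitoring idea described above can be sketched in a few lines: watch a stream of readings and flag a sustained drift from the baseline before it becomes a failure. The readings, baseline, and threshold here are all hypothetical:

```python
# Minimal sketch of predictive maintenance on a sensor stream: flag any
# reading whose trailing-window average drifts above a baseline.
# All values are simulated placeholders, not real sensor data.

from collections import deque

def drift_alert(readings, window=5, baseline=1.0, threshold=0.25):
    """Yield True for each reading whose trailing-window average
    has drifted more than `threshold` above `baseline`."""
    recent = deque(maxlen=window)
    for r in readings:
        recent.append(r)
        avg = sum(recent) / len(recent)
        yield avg - baseline > threshold

vibration = [1.0, 1.05, 0.98, 1.1, 1.4, 1.5, 1.6, 1.7]  # simulated readings
alerts = list(drift_alert(vibration))
print(alerts.index(True))  # index of the first reading that triggers maintenance
```

Real predictive-maintenance systems use far richer models (spectral analysis, machine learning), but the principle is the same: act on the trend before the motor fails.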

Such improvements translate into a compelling competitive advantage since the firms embracing IIoT turn out products faster and at a lower cost. That alone is a viable reason to embrace IIoT.

I’m Ready. How Do I Get Started?

Before getting started you need to ask yourself if your infrastructure is ready for IIoT.

Our recommended first step is to look at virtualization technologies to reduce your infrastructure and maintenance costs. The effort involved in securing virtualized environments is less intensive than with traditional approaches, and they are far easier to update and scale.

The good news is that by virtualizing, you can continue running your existing automation systems to minimize your upfront investment. To ensure uninterrupted uptime, a fault-tolerant server that will keep the connected virtual servers running in the presence of a hardware problem is essential. Unlike clustered solutions, fault-tolerant systems are easier to manage and are not subject to the downtime that occurs during failover.

Once you have your IIoT infrastructure in place, you can begin to enjoy the rewards of manufacturing processes that run faster, more cost-efficiently, and reliably than ever before.

Preparing for new applications that will come with IIoT (Industrial Internet of Things)

6.1.2016 | High Availability, IIoT, Industrial Automation, Virtualization

There are several themes that we at Stratus hear repeatedly from our Industrial Automation customers and prospects. The current hot topic is IIoT, and although many companies have no immediate plans to implement it, everyone wants to know more about what it is and how they can prepare for its arrival. A perennial question we get is, “What can I do to prevent unplanned downtime?”, or the closely related “What can I do to prevent data loss when my server fails?” This is often followed by questions such as “I’m hearing that virtualization can simplify my HMI/SCADA/MES systems, but won’t that take down everything if the server fails?” and “Doesn’t virtualization mean I need a new, complex system to prevent unplanned downtime and data loss?”

With new initiatives like IIoT and with increasing threats to cyber security, there is also no doubt that operational technologists and information technologists need to collaborate more deeply than ever before. Perspectives and priorities can be quite different, and even getting a productive conversation started can be a challenge.

Solving these types of problems and understanding how to approach these issues is, after all, why companies turn to Stratus – it’s what we do.

Not everyone is ready to engage in a direct discussion with Stratus, so we have asked Craig Resnick of the ARC Group to create a webinar to help companies work through what is involved in applying technologies such as virtualization to eliminate unplanned downtime and prepare for the new applications that will come with IIoT. If you are interested, register here.

Where Stratus Plays

12.15.2015 | Data Center, Edge, Fault Tolerance, High Availability, Virtualization

In the latest entry in our video series, Jason Andersen discusses some of the areas where Stratus technology is deployed – places such as smart energy grids and retail and manufacturing scenarios outside the traditional data center. These spaces have been changing for years, and continue to change radically.

You can watch the video below for Jason’s thoughts on the future of the edge data center, which includes server virtualization, remote control access, greater convergence of technologies, and more.

Watch more videos by visiting our Stratus Technologies YouTube page.

Customers Who Left “Good Enough” for Stratus

11.30.2015 | Fault Tolerance, High Availability, Virtualization

Many of our current customers once relied on a “good enough” availability solution before turning to Stratus. In the video below, Jason Andersen goes over some of the reasons why our customers made the switch, and why you should consider doing so as well.

We think you’ll be pleasantly surprised by the affordability and ease of use offered by Stratus’ Always-On hardware and software solutions.

Watch more videos by visiting our Stratus Technologies YouTube page.
