I’ve been involved in shrinking downtime and ensuring business data availability for most of 27 years – so when Stratus offered me a chance to contribute to their blog, I couldn’t say no.
Whenever you consider an evolutionary, even revolutionary, improvement to an IT process, there are almost always options and ramifications. Choosing how you will embrace server virtualization in business-critical environments is one of those scenarios.
GOOD – Yay, you’re embracing server virtualization! You’re likely evolving beyond just physical server consolidation and are gaining benefits such as faster provisioning, better protection/recovery options, and flexible resource sharing.
That’s great – this is where everyone enters the conversation. The question now is how you’ll move even further forward.
BETTER – Be smart about how you deploy and manage your virtualization infrastructure. For many organizations, this implementation will include “building blocks” made of hyperconverged appliances and/or converged infrastructure stacks. The goal is a stronger underlying layer: a solid foundation for the server virtualization that resides above it.
This is the same reason that most organizations do not build their own servers from PC components sold at Fry’s or Best Buy. Yes, you could acquire similar CPUs and hard drives, but if your business relies on the hardware, you start with commercial servers and storage – and perhaps later evolve to incorporate blades or CI/HC components.
But after you’ve put VMware ESX on each of the servers, you won’t care who the manufacturer is, will you? Well, you will – because inadequate servers will continually hinder what you were trying to achieve in the first place.
The more quality that you invest in the underlying “plumbing” of your infrastructure, the less you’ll focus on the components, and the more assured you’ll be that you’re unlocking the benefits of virtualization above it.
BEST – If the difference between Good and Better is superior underlying infrastructure nodes, then the difference between Better and Best is an infrastructure that not only is made of quality components, but also is durable and resilient. In much the same way that you chose commercial servers instead of self-built gear (even though virtualization abstracts those details away), consider choosing an even “better-than-commercial” virtualization host for the same reason – to get a more reliable underlying infrastructure for the VMs that your business is depending on.
With that in mind, here’s an ESG video that discusses the Stratus plus VMware solution stack:
You can check out all of ESG’s data protection coverage at http://www.esg-global.com/blog/author/jason-buffington
Jason Buffington (@JBuff) is the Principal Analyst at ESG focusing on all forms of data protection, preservation, and availability. He has actively deployed or consulted on data protection and storage solutions for 27 years, working at channel partners, various data protection software vendors, and Microsoft.
Unfortunately, I am leaving San Francisco a day early to be back home for some meetings. So, no Alabama Shakes or trip to AT&T Park for me, and I know I am missing some really good sessions. That said, to wrap up my thoughts, I wanted to focus on resiliency and availability. This is a topic we at Stratus have a lot of experience with, and I will try to put it in the context of what VMware offers. As is the case with any consumer product, you will see many options out there, and each will have its pros and cons.
This is not meant to be a comparison per se, but more of an overview of the trade-offs associated with the different approaches to VMware availability.
- Basic Failover – If you need to protect an environment that can tolerate some downtime and does not require failover into another fault domain, there is the HA option. This is actually a really great option from VMware and is probably the most used feature of vSphere. It’s in the box and easy to set up, but it does require some additional hardware (three servers and external storage are recommended). You also have to invest some effort in figuring out where the failover lands, since that can cause performance issues or even impact other workloads.
- Advanced Failover – Things start to get a lot harder when you try to get fancy, and fancy includes cloud or hyperconverged scenarios. For example, what if you want to fail over outside the fault domain or to another site? What about the underlying datastore? Do you need to use VSAN? What about DRS? As you may imagine, as the constraints on placement loosen, the complexity gets a lot higher, and you may need extra features from VMware and more planning.
- Fault Tolerance – Interestingly enough, the simplest VMware option to plan and set up is FT. It also provides the most robust protection of your workloads and data. You remove the need to plan out the placement and landing of the failed-over VM, you take the advanced features out of the equation, and it’s easy to set up. There are some capacity and performance limitations, but for the right workload it’s a great approach.
So, VMware has a lot of different options and each has its own benefits and issues. The trade-offs are cost/simplicity/performance. All need to be considered when deploying your workloads.
At Stratus we have a slightly different approach. We have hardware that fully supports vSphere, is also fault-tolerant, is not complicated to set up, and costs about the same as two servers (not three servers plus an external array). Think of it as the simplicity and robustness of VMware FT without the costs.
After taking in some evo on Monday, let’s change gears to cloud, which is another big reason I came out to VMworld this week. And while we at Stratus have been heavily focused on delivering resiliency to OpenStack workloads, it’s always good to check in on what VMware is doing – especially because it’s pretty easy to add one of our VMware-based ftServers into a VMware cloud. Keeping with the theme of evolutionary technology innovation to support a robust strategy, you have to admit that VMware has a very compelling cloud story. That said, here are some thoughts that once again reinforce that VMware is consumerizing the data center.
- VMware’s solution is a very complete hybrid cloud solution. Yes, it all hangs together and you can build a very complete cloud, but it requires all VMware products. There is not much of an ecosystem or choice in place. But if you are OK with a single-vendor solution, it’s worth a look.
- I was especially impressed with the vRealize automation capabilities. It’s a great toolset for making sense of and simplifying all of the pieces you would use. Of course, it’s not perfect but compared with a lot of other cloud managers and orchestrators it’s very good.
- I can see a future where HA operations span on-prem and cloud. There has been a lot of emphasis on extending HA outside of the rack or server, and if you leverage vCloud, many of the technical limitations (with the possible exception of latency to the cloud) can be overcome.
So, even though I am an open source believer (and ex-Red Hatter), I have to admit VMware’s is the first cloud I have seen that is ready for mainstream IT. But remember that this comes with the usual gotchas of a consumerized approach. You will give up some flexibility, since VMware’s cloud does not have much of an ecosystem. It’s also expensive. Lastly, it’s a cloud, so all statements about simplicity are relative to other clouds and not to a rack full of virtualized servers.
It’s a lovely time to be back here in San Francisco, sitting in at the super-galactic VMworld 2015 show. Over the next few days I will share my thoughts on the show and how it ties in with Stratus. While this year wasn’t very heavy on tech breakthroughs, VMware offered us a glimpse of something far more interesting – a vision of where all of their stuff begins to hang together in a compelling way. As a company that partners with VMware, this is very interesting to Stratus. It’s also very interesting in that the vision goes a step beyond a “federation” of things that work together. VMware’s vision is more robust than that.
The feeling I am getting is that VMware is focusing on making the entire operational experience seamless and simple. This extends to everything they are talking about, and after a while you start to get the feeling that VMware is really trying to take approaches associated with consumer technologies and apply them to the data center. This is not necessarily revolutionary thinking, and many have suffered in their attempts to do this in the past, so we will have to watch and see how this plays out for VMware. But unlike others who have tried before, VMware’s vision is a fully software-defined and virtualized one, which seems to have more possibility for success.
So on that note, let’s start with why I came to VMworld in the first place – the evo products (Rail and Rack). Since evo was announced last year, I have been digesting the whole evo strategy, and I have to say the message of a simple, easy-to-deploy converged appliance is compelling. We know that because Stratus has been demonstrating simplicity as a hallmark of all of our products (including our vSphere products) for years. But some of VMware’s newer angles with respect to storage and networking are pretty forward thinking. It’s impressive.
But like all great things, it is not perfect – when you look at the cost to acquire this type of solution, for example, some of the shine comes off the story. The net is that evo is a strategic purchase completed at the executive level, and its selection cements a company as a “VMware” shop. Therefore, it seemed a bit strange when what we at Stratus call Edge solutions were highlighted as a possible use case. Yes, it makes sense on paper, as you would likely see the greatest benefits from deploying at the Edge. However, the payoff for evo would require a fairly significant consolidation of existing resources. Over time, I am guessing this will happen, since we have seen VM density increasing at the Edge as it has in the data center (albeit at a slower rate). So, while technically exciting, it may need some more time to reach critical mass.
That said, if you want a solid Edge-based solution that is very simple to deploy and manage, you don’t need to go full-on evo. You can take a peek at our very own ftServer. It’s deployed in thousands of sites already and hits the very same marks. And of course, it fully supports vSphere, and we have a new version of it coming later this year.
Anyone who has been tracking the latest release of VMware vSphere® may have noticed that there is a newly re-engineered set of fault-tolerant capabilities. From our perspective here at Stratus, we see this as good news for the market in general. As more and more of the “easy” workloads have been virtualized, two things have happened:
- Enterprises have become very confident in virtualization technology and believe it can support even the most business critical workloads
- Virtualization is now the de facto IT “platform” by adoption, so at some point emphasis needs to be placed on the remaining non-virtualized workloads, which are more often than not the more business critical ones.
This is a set of beliefs we at Stratus have been extolling for years, and today we support thousands and thousands of business critical workloads. As we’ve engaged with our customers and partners over the years, we have established that there are a few characteristics that need to be understood when you plan to deploy fault-tolerant technology. If you are considering any of the fault-tolerant solutions out there – including ours – you need to ask these questions:
- What is the overall performance required?
To be honest, the only way to achieve bare-metal performance with fault tolerance is with a hardware-based solution like our ftServer® systems – which, coincidentally, provide high-performance fault tolerance with VMware out of the box. That’s right: using our ftServer with VMware, there is no performance penalty, unlike with VMware FT. This overhead is not unique to VMware FT – it will be the same for any software-based solution, including our own everRun® Enterprise. However, in our own internal testing we have found that everRun’s performance is often twice that of VMware FT.
- Where is the solution going to be deployed?
One of the areas where Stratus excels is in field-based or edge deployments. When you consider deploying technology outside a data center (and potentially in multiple sites), there are a number of new requirements to be considered: serviceability, simplicity of deployment and administration, and long support lifecycles. So, if you’re looking at a business critical application outside the data center, you’ll want to consider different options than you would inside it.
- Where do you need more flexibility and choice?
This one is less about fault tolerance and more about vendor selection. Our ftServer technology is multi-hypervisor, meaning you can get FT capabilities leveraging other hypervisors such as Hyper-V. Or maybe you are committed to open source technology but want to stick with a preferred hardware vendor – that’s where our KVM-based everRun software excels.
So, to sum up, we’d like to thank VMware for raising the visibility and profile of virtualized business critical workloads. For everyone else, understand that there are many options out there including two very solid and widely adopted solutions from Stratus.
It’s no secret that system downtime is bad for business. For one thing, it’s expensive. According to a 2012 Aberdeen Group report, the average cost of an hour of downtime is now $138,888 USD — up more than 30% from 2010. Given these rising costs, it’s no wonder that ensuring high availability of business-critical applications is becoming a top priority for companies of all sizes.
When it comes to choosing the right downtime protection, there are a couple of important things to keep in mind. First, deployment of applications on hypervisor software for server virtualization is increasing at a steady pace and is expected to continue until almost all applications are implemented on virtualized servers. As a result, you need to make sure that your downtime protection is able to support virtualized as well as non-virtualized applications. Second, with IT spending and headcount on the decline, downtime protection should be easy to install and maintain since there are fewer IT resources available to manage the assets.
Available downtime protection options range from adding no additional protection beyond that offered by general-purpose servers to deploying applications on fault-tolerant hardware. Which option you choose will depend on the type of application in question. If the application is mission-critical, then you’ll need higher levels of protection. Many companies are choosing to protect their mission-critical applications with fault-tolerant servers because they provide the highest availability, require no specialized IT skills, and are now priced within reach of even small to mid-size companies. Looking for guidance in choosing the right downtime protection for your “can’t fail” applications? Download the Aberdeen Group report to learn more.
In VMware’s defense, achieving full fault tolerance in software has always been a tough nut to crack. Sure, it can be done. But the end product is neither very useful nor efficient.
Enter Stratus ftServer, recognized even by VMware as the only full-function fault-tolerant solution ready today for virtualizing Tier 1 applications. One ftServer – just one – delivers SMP support for critical virtual workloads, maximizing the power and performance of every processor core.
While VMware has been struggling to deliver its second first-generation fault-tolerant product, savvy IT managers have been using ftServer for virtualization for more than 8 years (the first deployments going back to 2004!). No performance-sapping overhead, no over-provisioning, no configuration restrictions.
Check out the Stratus Uptime Meter, which shows uptime of the entire ftServer installed base around the world. Today it’s at 99.9999%. That’s just 31 seconds of average downtime per year. That’s not something you’ll be hearing from VMware about software FT anytime soon.
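If you want to see where that 31-second figure comes from, here is a quick sketch of the arithmetic, converting an availability percentage into expected yearly downtime (the function name is ours, just for illustration; it assumes a 365-day year):

```python
def downtime_per_year(availability_pct):
    """Convert an availability percentage into expected downtime per year, in seconds."""
    seconds_per_year = 365 * 24 * 3600  # 31,536,000 seconds in a 365-day year
    return seconds_per_year * (1 - availability_pct / 100)

# "Six nines" of availability leaves roughly half a minute of downtime per year.
print(f"{downtime_per_year(99.9999):.1f} seconds")  # → 31.5 seconds
```

For comparison, "five nines" (99.999%) works out to about 5.3 minutes per year, and "three nines" (99.9%) to almost 9 hours, which is why the extra nines matter so much for business-critical workloads.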
Learn more about how Stratus protects critical applications from downtime by downloading Virtualizing Tier 1 Applications: How to deliver superior quality of service and guaranteed uptime for business-critical VMware vSphere environments.
Hello from VMworld 2012. As expected, one of the hot topics at this year’s show is VMware Fault Tolerance. Here are my thoughts on Fault-Tolerant VMware prior to today’s VMware FT session.
When it comes to virtualization, most of the easy stuff has been done. Now IT wants to get more out of its investment. Business- and mission-critical applications are obvious targets. It’s not a question of whether it can be done, but whether it can be done safely. The pain of downtime and data loss raises a caution flag.
That’s where Stratus and Virtualization for Dummies come in. Despite the humorous title of the “Dummies” series, we’re very serious when it comes to helping industry professionals get up to speed on this very beneficial technology.
Virtualization technology is being widely applied today with excellent operational and financial results. In fact, it has become a matter of course for most businesses to work with some aspect of virtualization. Virtualization for Dummies provides you with a brief introduction to the subject, discusses cloud technology, and helps you understand the various options regarding availability. Knowing all this can help you create an action plan as you move forward with the next phase of your virtualization infrastructure.
Readers will learn:
- The basics of virtualization
- How organizations of all sizes can take advantage of virtualization
- How virtualization and cloud computing relate
- Why virtualization is as much for desktops as it is for servers
- How to ensure virtualized applications are always up and running
- The top ten things to consider when virtualizing business critical applications
Take your next steps in application virtualization by downloading Virtualization for Dummies.
The Future of Cloud Computing survey is up and running! The survey aims to capture trending perceptions, sentiments and future expectations of cloud computing from industry experts, users and vendors of cloud software, support and services. The survey measures how and to what extent the cloud is being used, growth catalysts, challenges, and impacts that cloud is having on IT and business operations. It covers areas such as current use, drivers, barriers, and future plans regarding cloud computing. Also included is profile information on the types of cloud services currently being used.
North Bridge Venture Partners and 38 leading cloud organizations collaborated to launch this 2nd annual Future of Cloud Computing Survey. The #FutureCloud Survey is open to the public until June 1. North Bridge will reveal results at the Cloud Leadership Dinner on June 19 and make them available online at www.northbridge.com/software.