When the cloud was still a new thing in the analytics domain and I asked our customers why they were interested, the overwhelming response was, “to save money!” There was some lip service paid to other benefits like agility and universal availability, but the real driver was expected cost savings.
The cloud was going to make the business intelligence (BI)/insights manager a hero in the eyes of the CFO, especially in times of ever-decreasing IT budgets. Shared infrastructure resources would bring economies of scale and reduce wasted capacity. Plus, those web server guys had already saved buckets of money. It’s proven!
As BI/analytics teams began to evaluate and migrate their applications to the cloud, however, they often discovered that costs quickly added up and, in many instances, IT managers found they were spending more on infrastructure in the cloud than they had been on-premises. What the early adopters did not realize is that applications cannot simply be forklifted into the cloud. Architecture changes, along with commercial and operational management changes, are often required before the savings actually materialize.
The first thing to understand is that the cloud is consumed on a subscription basis. Like renting, you don’t buy or own the infrastructure. The question is: for what term are you renting it?
In the public cloud you can consume some resources on an hourly basis with zero up-front commitment, with some clouds offering per-minute or even per-second billing. Along with these short subscription terms is the ability to turn your resources on and off—and pay only for the hours, minutes, or even seconds that your application or system is consuming them. This is generally called on-demand pricing, where your usage is metered and billed.
In addition, most clouds (including public clouds) provide discounted pricing for extended subscription terms including month-to-month, 1-year and 3-year. Generally, the longer the term commitment, the lower the unit price. In this case there is no metering, and you are effectively paying to run the system or application 24x7 for the subscribed term.
A common first mistake when migrating a data warehouse or other enterprise analytics application to the cloud is to deploy using on-demand billing. This is the most expensive pricing, and costs add up quickly if you’re running your system 24x7. Most BI/data warehouse/analytics applications do run around the clock, constantly loading and integrating data, serving user queries, or generating reports. In this scenario, it’s much more cost-effective to subscribe to the longest term possible, since discounts of 40 or 50 percent below on-demand rates can be achieved for long-term commitments.
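To put rough numbers on that, here is a minimal Python sketch comparing a year of 24x7 on-demand usage with the same year on a discounted term commitment. The hourly rate and discount are assumptions for illustration only, not any vendor’s actual prices:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# Illustrative figures only; real rates vary by vendor, region, and instance type.
on_demand_rate = 10.00   # assumed $ per hour for the system
term_discount = 0.40     # assumed 40% discount for a long-term commitment

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
term_cost = on_demand_rate * (1 - term_discount) * HOURS_PER_YEAR

print(f"24x7 on-demand for a year:   ${on_demand_cost:,.0f}")  # $87,600
print(f"Same year, term commitment:  ${term_cost:,.0f}")       # $52,560
```

For an always-on system, the term commitment wins by exactly the discount rate; the interesting question is what happens when the system does not need to run all the time.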
A second mistake is not taking advantage of on-demand pricing when it makes sense. For applications or systems that can be turned off some of the time (e.g., test/dev, disaster recovery, and data lab environments), you can realize savings by using on-demand pricing, provided you actually turn them off regularly. However, operational diligence and/or investment in automation of your operational environment is often required to realize these savings. To determine whether an application is a candidate for on-demand operation, assess the percentage of time (compared to 24x7) that it can be turned off. If that off-time percentage is less than the term-commitment discount, then there’s no point undergoing the operational effort to ensure the system is turned off when not in use; you might as well commit to three years and leave the application running.
Here’s an example of how to think about it (a short calculation sketch in Python follows the list):
- Let’s say a system needs to be up from 7am to 6pm on Monday through Friday. This is 11 hours a day for 5 days a week, so 55 hours per week.
- 55 hours divided by the 168 hours in a week is roughly 33% consumption, which is the same as saying the system is not being used 67% of the time.
- Let’s also say the discount for a 3-year term subscription is 40%.
- Therefore, using on-demand pricing is more cost-effective than the 3-year term pricing for this system since 40% is less than 67%. Note that this assumes that the costs to ensure the system is turned off when not being used do not erode that difference.
- The breakeven point would be when the system is off for only 40% of the time (the term discount rate), which is the same as saying that the system would be running 60% of the time, or 100.8 hours per week (which is just over 20 hours per day for five days a week, or about 14.4 hours per day for seven days a week).
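The same decision rule can be captured in a few lines of Python. This is a minimal sketch using the figures from the example above; like the example, it ignores the operational cost of actually switching the system off:

```python
WEEK_HOURS = 7 * 24  # 168 hours in a week

hours_per_day = 11    # system needed 7am to 6pm
days_per_week = 5     # Monday through Friday
term_discount = 0.40  # 3-year term discount from the example

weekly_hours = hours_per_day * days_per_week  # 55 hours
utilization = weekly_hours / WEEK_HOURS       # ~33%
off_time = 1 - utilization                    # ~67%

# On-demand wins when the off-time fraction exceeds the term discount.
if off_time > term_discount:
    print(f"Use on-demand: off {off_time:.0%} of the time vs a {term_discount:.0%} discount")
else:
    print(f"Commit to the term: off only {off_time:.0%} of the time")

# Breakeven: the system runs (1 - discount) of the week, i.e. 60% of 168 hours.
breakeven_hours = (1 - term_discount) * WEEK_HOURS
print(f"Breakeven at {breakeven_hours:.1f} hours per week")  # 100.8
```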
So, as you can see, some analysis must be performed up front to determine the best commercial terms and operational requirements for each system, application, and workload that you have.
Lastly, there is elasticity, often considered the holy grail of the cloud, which can also deliver cost savings. The ability to scale up and, more importantly, scale back down can provide savings even when a system needs to be operational 24x7. Most systems have predictable peaks and troughs in their resource utilization over time. If resources can be added for the peaks and removed for the troughs, and those extra resources are billed on-demand, then aggregate consumption over time can be kept well below what provisioning for the peak around the clock would require. If the base-level resources are contracted on a term commitment and the variable resources are consumed at an on-demand rate, then total costs can be minimized.
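As a rough illustration of that blended approach, here is a minimal Python sketch comparing a commitment sized for the peak against a base commitment plus on-demand burst capacity. The capacity figures, hourly rate, and discount are all assumptions invented for the example:

```python
# Assumed figures for illustration only.
on_demand_rate = 10.00    # $ per compute unit per hour
term_discount = 0.40      # discount on term-committed capacity

base_units = 10           # capacity needed around the clock
peak_units = 16           # capacity needed during peaks
peak_hours_per_week = 30  # hours per week spent at peak load
WEEK_HOURS = 7 * 24       # 168

# Option A: size the term commitment for the peak and run it 24x7.
peak_committed = peak_units * on_demand_rate * (1 - term_discount) * WEEK_HOURS

# Option B: commit to the base, and buy the peak delta on demand.
blended = (base_units * on_demand_rate * (1 - term_discount) * WEEK_HOURS
           + (peak_units - base_units) * on_demand_rate * peak_hours_per_week)

print(f"Committed at peak size:            ${peak_committed:,.0f}/week")  # $16,128
print(f"Base commitment + on-demand burst: ${blended:,.0f}/week")         # $11,880
```

Under these assumed numbers, the blended approach is about 26% cheaper per week, but the advantage shrinks as peak hours grow or as the gap between base and peak narrows.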
The catch, though, is that scaling operations can sometimes be disruptive. In data management environments, scaling is often tricky and time consuming, so it’s important to balance the operational cost of scaling up and down against the savings that could be achieved. Also, the cloud vendor may not offer the commercial flexibility to take full advantage of the elasticity required, or the on-demand resources may simply not be available. The bottom line: analytics and business intelligence face unique challenges in achieving cloud cost savings compared with the IT domains that came before them.
I encourage you to follow the conversation at #CloudExperts or #BuiltForTheCloud, and reach out to your Teradata account executive to learn more about the considerations, analysis, and planning required to help you achieve cost optimization when migrating analytic applications and systems to the cloud.
For the last three years, Greg Taranto has been Teradata’s Cloud Specialist for Australia and New Zealand. In this role, Greg helps Teradata’s customers achieve the best possible results for their analytics ecosystems by leveraging cloud alternatives for infrastructure, platform and services. In particular, Greg assists organisations that have had success with cloud in application domains in translating that success to the analytics cloud domain. Greg’s focus prior to cloud was on Teradata’s prospective customers (new business development). In that time, Greg acquired an acute awareness of the need for agility in the analytics domain and how the cloud could play a major role in meeting that need. Prior to that, Greg worked as a delivery consultant helping Teradata customers with data warehouse solution architecture and implementation, specializing in data modelling and integration.
Prior to Teradata, Greg worked in the IT industry as an independent consultant focusing on large-scale systems integration projects for the Financial Services, Travel and Transportation, and Telecommunications industries.