Tintri analytics combines machine learning algorithms with VM and container-level granularity to predict storage and compute needs.
At Tintri, we think comprehensive analytics is a key success factor for an enterprise cloud. Simply put, Tintri Analytics helps you make better decisions. Our fundamental difference is the ability to provide analytics on every virtual machine (or container) in your footprint, whether those VMs are running on our latest all-flash arrays or older systems.
Since its introduction, Tintri Analytics has provided insight into storage metrics at the VM level, but we recently enhanced the software to give you insights into compute as well as storage. Tintri Analytics uses sophisticated modeling and proven machine learning algorithms to better predict your storage and compute needs. It allows you to ask “what-if” questions and get immediate answers.
For example, you might want to know if your enterprise cloud infrastructure can support the addition of 100 new VDI seats, a dozen new web servers, or production and dev/test instances for a new software development cycle.
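To make the what-if idea concrete, here is a minimal sketch of such a check in Python. The workload profiles, resource dimensions, and headroom numbers are all hypothetical stand-ins, not Tintri's actual model:

```python
# Illustrative "what-if" check: can the current footprint absorb new workloads?
# Profile names and per-instance numbers are invented for illustration.

WORKLOAD_PROFILES = {
    # avg per-instance demand: (IOPS, capacity_GiB, vCPU, RAM_GiB)
    "vdi_seat":   (25,  30, 1, 4),
    "web_server": (150, 80, 2, 8),
}

def fits(headroom, additions):
    """Return True if the summed demand of `additions` fits in `headroom`.

    headroom  -- (IOPS, capacity_GiB, vCPU, RAM_GiB) left on the infrastructure
    additions -- mapping of profile name -> number of new instances
    """
    demand = [0, 0, 0, 0]
    for name, count in additions.items():
        for i, value in enumerate(WORKLOAD_PROFILES[name]):
            demand[i] += value * count
    return all(d <= h for d, h in zip(demand, headroom))

# Free headroom as measured by the analytics layer (hypothetical numbers).
headroom = (10_000, 20_000, 256, 1024)
print(fits(headroom, {"vdi_seat": 100, "web_server": 12}))  # True
```

A real analytics engine forecasts demand over time rather than checking a static snapshot, but the shape of the question is the same.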
In this series, you’ve been learning how Tintri uses various machine learning algorithms to improve operations with capabilities like optimized VM placement and QoS that adapts automatically to complex I/O patterns. In this final post, we’ll shed some light on what’s going on behind the scenes in Tintri Analytics.
Useful predictions of storage performance require two things: an accurate forecast of how each workload’s performance needs will grow, and an accurate model of how much performance your systems can actually deliver.
The approach that Tintri Analytics uses to predict performance needs is similar to that described in previous posts. We’ve worked to understand the available machine learning tools and what problems each one is suited for. We test those tools against real Tintri customer data to determine what works best in Tintri environments.
In this case, our prediction method involves a linear regression over a variable-length tail segment of the data set. We use a long-term and a short-term predictor and merge the results of the two, and a heuristics-based approach reduces the time to solution.
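The blending of a long-term and a short-term trend can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the window size and blend weight are invented, and the real system tunes these heuristically rather than fixing them:

```python
# Sketch of a tail-segment trend forecast: fit a linear trend over the full
# history (long-term) and over a recent tail window (short-term), then blend
# the two predictions. Parameters here are illustrative, not Tintri's.

def linear_fit(ys):
    """Least-squares slope and intercept for y over x = 0..len(ys)-1."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def forecast(series, steps_ahead, short_window=7, blend=0.5):
    """Blend a long-term fit (full history) with a short-term tail fit."""
    def predict(ys, offset):
        slope, intercept = linear_fit(ys)
        return slope * (len(ys) - 1 + offset) + intercept

    long_term = predict(series, steps_ahead)
    short_term = predict(series[-short_window:], steps_ahead)
    return blend * long_term + (1 - blend) * short_term

# Daily capacity usage in GiB (hypothetical); forecast 30 days out.
usage = [100, 103, 105, 110, 112, 118, 121, 125, 131, 134]
print(round(forecast(usage, steps_ahead=30), 1))
```

The short-term predictor reacts quickly to recent changes in growth rate, while the long-term predictor damps out transient spikes; blending trades one off against the other.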
Knowing storage performance requirements doesn’t do much good without understanding the performance capabilities of storage systems and the performance demands of different workloads. Tintri has gone to great lengths to model these as well. From a performance standpoint, this includes understanding questions like: How much I/O can each array model sustain before latency climbs? How do different types of workloads consume that performance?
Naturally, the answers to these questions are different for each Tintri storage array, and dramatically different for all-flash arrays versus hybrid arrays. Since many of you have both, Tintri Analytics has to be able to model the impact of moving a workload from flash to hybrid or vice versa.
Every few releases, we run all our appliances through a series of benchmarks and use the results to update the model so predictions are always as accurate as possible.
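A benchmark-derived model of this kind can be pictured as a per-model performance envelope that a placement check consults. The model names and numbers below are hypothetical, not actual Tintri benchmark results:

```python
# Hypothetical per-model performance envelopes, refreshed from benchmark runs.
# All model names and limits are invented for illustration.

ENVELOPES = {
    "all-flash-A": {"max_iops": 140_000},
    "hybrid-B":    {"max_iops": 40_000},
}

def headroom_after_move(model, current_iops, workload_iops):
    """IOPS headroom left on `model` if a workload of `workload_iops` lands on it."""
    return ENVELOPES[model]["max_iops"] - current_iops - workload_iops

# Moving a 12k-IOPS workload onto a hybrid array leaves far less slack
# than keeping it on flash:
print(headroom_after_move("all-flash-A", 60_000, 12_000))  # 68000
print(headroom_after_move("hybrid-B", 20_000, 12_000))     # 8000
```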
You might think that predicting storage capacity would be more straightforward than performance, but the effect of compression and deduplication makes the process a bit more complicated than it was before these technologies became ubiquitous.
We do a lot of modeling of compression and deduplication ratios because different data sets deduplicate and compress in dramatically different ways. Deduplication isn’t just a single-workload problem; the combination of workloads on an array can have a big impact. Two workloads sharing the same array can result in a very different deduplication ratio than the same two workloads on separate arrays.
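A toy example makes the placement effect clear. Here blocks are modeled as simple labels; real systems hash fixed-size blocks, and the workload contents are invented for illustration:

```python
# Toy illustration of why dedup ratios depend on workload placement:
# two workloads built from the same golden image dedupe well when
# co-located on one array, but not at all when split across two.

def dedup_ratio(blocks):
    """Logical blocks written divided by unique blocks stored."""
    return len(blocks) / len(set(blocks))

# Two VDI-like workloads sharing 90 blocks of a common OS image,
# each with 10 unique blocks of its own.
base = [f"os-block-{i}" for i in range(90)]
wl_a = base + [f"a-{i}" for i in range(10)]
wl_b = base + [f"b-{i}" for i in range(10)]

separate = (dedup_ratio(wl_a) + dedup_ratio(wl_b)) / 2  # 1.0: no savings
together = dedup_ratio(wl_a + wl_b)                     # ~1.82: shared blocks dedupe

print(separate, round(together, 2))
```

Neither workload has duplicate blocks on its own, so on separate arrays deduplication saves nothing; co-located, the shared image blocks are stored once.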
In data centers with a mix of array models, this gets even more complex because hybrid arrays don’t do deduplication and our oldest arrays don’t do compression.
The approach to behavior modeling is similar on the compute side, although things there are a little more straightforward. Instead of I/O performance and capacity, the relevant metrics for compute are CPU and RAM utilization. We use a similar ensemble of machine learning algorithms to forecast needs for these resources. As with our storage arrays, we model the capability of compute nodes to understand what their limits are, although that modeling is more standardized than it is for storage arrays.
As with storage, we model compute resource consumption for different workloads down to the level of the VM (or container). Tintri Analytics allows you to visualize all this performance information directly at the VM level.
So, what does this all mean for your data center operations? It means you can model your storage and compute needs—for organic growth and specific projects—based on the actual behavior of the workloads you are running. By modeling at VM or container granularity, Tintri delivers more accurate information that makes resource planning more straightforward. Because it can model potential changes to your environment and show you the impact in seconds, Tintri Analytics simplifies the rollout of new IT projects. You can see immediately if your existing infrastructure can support proposed changes and additions. And, if not, it can show you what additional servers and storage you’ll need to add.
Unique control with VM-level actions for infrastructure functions, including snapshots, replication, and QoS, makes protection and performance certain in production and accelerates test and development cycles.