The EC6000 comes with a host of new features that will make your data center sing. Let's take a deep dive into what makes it tick.
With affinity threading, Tintri minimizes cross-CPU transfers and remote memory accesses, making efficient use of CPUs to reduce resource impact and keep costs down.
The Tintri EC6000 tops out at 645 TB and 320,000 IOPS in 2 RU, with VM isolation that lets 7,500 VMs access the performance reserves they need at all times.
Tintri reuses blocks of metadata by identifying write patterns, which speeds performance and optimizes capacity.
The EC6000’s numbers are evidence that Tintri continues to push boundaries to build the best platform for enterprise cloud (check out our overview blog for the breakdown). Delivering a leap in performance and VM density took real engineering work. In this blog, we want to share three simple examples of what’s under the hood:
When designing the EC6000, a key objective was to maximize performance while managing cost—we want to give you the best bang for your buck. To do that, we needed to concentrate as much activity as possible on a single CPU.
Normally, your all-flash storage might use two CPUs indiscriminately, shuttling data from one CPU to the other and back again. We wanted to make more efficient use of those resources, so we applied a technique called affinity threading.
With affinity threading, we pin each task to a specific CPU. That lets us minimize the number of cross-CPU transfers and remote memory accesses. Let’s illustrate with a simple example.
Assume there are four sequential tasks: A, B, C, D. A sub-optimal approach would be to have CPU1 handle tasks A and C, and CPU2 handle tasks B and D. That will result in four transfers of memory between CPUs.
CPU1 “A” to CPU2 “B” = 1
CPU2 “B” to CPU1 “C” = 2
CPU1 “C” to CPU2 “D” = 3
CPU2 “D” back to CPU1 = 4
But with affinity threading we can assign tasks A & B to CPU1 and tasks C & D to CPU2. As a result, we reduce the total number of transfers from 4 to 2.
CPU1 “A” and then “B”
CPU1 “B” to CPU2 “C” and then “D” = 1
CPU2 “C” and then “D” back to CPU1 = 2
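The transfer counts above can be checked with a small sketch. This is purely illustrative (it is not Tintri's scheduler); it just counts cross-CPU handoffs for a chain of sequential tasks under a given task-to-CPU assignment, assuming the final result returns to CPU1:

```python
# Hypothetical illustration: count cross-CPU handoffs for a chain of
# sequential tasks, assuming the result must end up back on CPU1.

def count_transfers(assignment, start_cpu="CPU1"):
    """assignment: list of (task, cpu) pairs in execution order."""
    transfers = 0
    current = start_cpu
    for task, cpu in assignment:
        if cpu != current:       # data hands off to the other CPU
            transfers += 1
            current = cpu
    if current != start_cpu:     # final result returns to CPU1
        transfers += 1
    return transfers

# Alternating assignment: CPU1 runs A and C, CPU2 runs B and D.
alternating = [("A", "CPU1"), ("B", "CPU2"), ("C", "CPU1"), ("D", "CPU2")]

# Affinity assignment: CPU1 runs A and B, CPU2 runs C and D.
affinity = [("A", "CPU1"), ("B", "CPU1"), ("C", "CPU2"), ("D", "CPU2")]

print(count_transfers(alternating))  # 4
print(count_transfers(affinity))     # 2
```

Same four tasks, same two CPUs, half the memory traffic.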
That’s a simple, concrete example of how we make more efficient use of CPUs, reducing resource impact and helping keep costs down on one of the highest-performing all-flash platforms on the market.
Autonomous operation functions exactly as it sounds—a platform that does the work, so that you don’t have to. And, autonomous operation has always been a hallmark of Tintri’s platform.
We have described at length how Tintri puts every single one of your VMs or containers in its own lane to eliminate conflict over performance resources (no noisy neighbors). But to do that, we have to ensure a precise balance between capacity and performance.
For example, here’s a behavior we’ve seen in competitive deals for new customers. After presenting or piloting a higher end system to the prospect, a competitor will quote a lower performance system and stuff in more capacity. This meets the cost and capacity requirements of the prospect, but it sacrifices performance. As a result, as that device reaches a certain capacity, performance becomes a scarce resource and latency becomes an issue.
So, in establishing the capacity and performance parameters of the EC6000 series we were very thoughtful about the relationship between the two. There’s a lot of fancy math to ensure that when you fill up an EC6090 with up to 7,500 VMs, every single one can access the performance reserves it needs at all times. The result is a platform that allows customers to add capacity drive-by-drive, topping out at 645 TB and 320,000 IOPS in just two rack units.
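To make the capacity-versus-performance trade-off concrete, here is some back-of-the-envelope math in our own terms (not Tintri's internal model): if 320,000 IOPS must cover up to 7,500 VMs, each VM can be guaranteed a performance floor, and a simple admission check keeps the sum of guarantees within the platform's budget:

```python
# Illustrative sketch only; the constants come from the EC6090 specs
# quoted in this post, the model is our simplification.

MAX_IOPS = 320_000   # EC6090 peak IOPS
MAX_VMS = 7_500      # EC6090 maximum VM count

def per_vm_floor(total_iops=MAX_IOPS, vm_count=MAX_VMS):
    # Guaranteed reserve per VM if performance were split evenly.
    return total_iops // vm_count

def can_admit(reserves, new_reserve, total_iops=MAX_IOPS):
    # Simple admission check: accept a new VM's guarantee only if the
    # sum of all reserves still fits within the performance budget.
    return sum(reserves) + new_reserve <= total_iops

print(per_vm_floor())  # 42 IOPS guaranteed per VM at full density
```

In other words, even at maximum density every VM retains a non-trivial guaranteed reserve, which is the opposite of the "stuff in more capacity" pattern described above.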
A third example of a technical achievement deep within the EC6000 series is our reuse of metadata.
Whenever you write data to a storage device, you create blocks of metadata. Nearly all storage providers discard that metadata, so the next time you write similar data to that device, it creates additional metadata.
This slows down performance and over time creates unnecessary capacity bulk—wasting your time, space and dollars.
So, Tintri found a way to reuse blocks of metadata by identifying patterns in writes. That helps speed performance and make better use of capacity. It’s typical Tintri: finding clever ways to squeeze the most out of our platform for the benefit of our customers.
Of course, the above is just a glimpse into the technical achievements we’ve made with the EC6000 series. And, since it shares the same OS as other Tintri series, you can scale out the EC6000 with your existing footprint—up to 40 PB and 480,000 VMs from one central console.
As our customers will attest to, it’s hard to fully comprehend how different Tintri is from alternatives until you see it in action. So, when you’re ready, please get in touch with us to schedule a demo. We’ll talk to you about affinity threading and autonomous operation, and show you how our customers use Tintri to slash management effort, troubleshoot in seconds and spin up thousands of VMs in minutes.
Vineet has more than 10 years of experience in building products in the storage and virtualization space. Currently at Tintri, he is in charge of Product Management for Tintri Global Center and Ec...
Unique control with VM-level actions for infrastructure functions, including snapshots, replication and QoS, makes protection and performance certain in production and accelerates test and development cycles.