In the weeks leading up to and during VMworld 2012, a new buzzword hit the enterprise IT infrastructure scene: the "Software-Defined Data Center." Was this just a case of building hype for the next big conference, or was there something more to it?
Now that VMworld is over and done with, I do believe that there is something to it. Let's dig in for a closer examination of the Software-Defined Data Center, and what some of the specific stepping-stones to that vision may look like.
The path has started
Virtualization is one of the most transformative changes to hit enterprise IT infrastructure in decades. Odds are, the vast majority of readers here have started on the virtualization journey already.
The key to virtualization in our data centers is abstraction. The traditional coupling of hardware and software has been disrupted, and the interactions between those components now take place through the abstraction layer the hypervisor provides.
The software-defined data center builds on many of the concepts that the adoption of virtualization introduced to the ecosystem.
One of the best analogs to the software-defined data center is enterprise wireless infrastructure. While implementations vary, more often than not a controller manages a fleet of simple access points. When an access point powers on, it contacts the controller to download software updates, configurations, and policies. Once the access point is running, the controller can manage any aspect of its wireless activity. The access points themselves are useless without the controller, which provides advanced functions across all deployed infrastructure while keeping administration at a single centralized point.
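The controller/access-point relationship described above can be sketched in a few lines of Python. Everything here, the class names, fields, and version strings, is illustrative only, not any wireless vendor's actual API:

```python
# Minimal sketch of the controller/access-point pattern: the controller
# holds all state, and access points are provisioned from it at power-on.
# All names and values here are hypothetical.

class AccessPoint:
    def __init__(self, name):
        self.name = name
        self.firmware = None   # useless until the controller provisions it
        self.policies = {}

class Controller:
    """Central point holding firmware, configuration, and policies."""
    def __init__(self, firmware, policies):
        self.firmware = firmware
        self.policies = policies
        self.fleet = []

    def register(self, ap):
        # On power-up, the AP contacts the controller and pulls
        # everything it needs; it carries no meaningful local state.
        ap.firmware = self.firmware
        ap.policies = dict(self.policies)
        self.fleet.append(ap)

    def update_policy(self, name, value):
        # One change at the controller fans out to the whole fleet.
        self.policies[name] = value
        for ap in self.fleet:
            ap.policies[name] = value

ctrl = Controller(firmware="7.4.1", policies={"guest_vlan": 20})
aps = [AccessPoint(f"ap-{i}") for i in range(3)]
for ap in aps:
    ctrl.register(ap)
ctrl.update_policy("guest_vlan", 30)  # propagates to all three APs
```

The design point the analogy captures: intelligence lives in one place, and the edge devices simply receive and enforce what the central point decides.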
We are seeing the adoption of this kind of centralized management starting now.
The software-defined data center is going to require a significant amount of component awareness in each area of the data center. Simply put, everything needs to be aware of everything else and able to make dynamic decisions about functionality.
Device configurations local to any single object in the software-defined data center are no longer going to cut it. Intelligent policies that reflect each object's role in the environment will be necessary for components to act properly.
For example, one application may be defined in the enterprise as the email environment, which the business sees as mission critical. A separate application may be defined as a monitoring server, which the business sees as low importance. A policy of "critical" can be applied to the email virtual machines, granting the servers providing that application access to priority infrastructure: higher network QoS, higher tiers of storage, and more compute resources.
A similar policy may be defined for the "low" group. These servers get restricted access to resources, lower network priority, and lower-performing storage. These policies will need to be shared across data center components; a single policy can, and should, control everything.
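The two policies above can be sketched as a single shared record that every infrastructure layer reads. The policy names and resource values below are invented for illustration; no real product's schema is implied:

```python
# Illustrative sketch of role-based policies shared across data center
# components. One policy record drives network, storage, and compute.

POLICIES = {
    "critical": {
        "network_qos": "priority",
        "storage_tier": "tier-1",
        "cpu_shares": "high",
    },
    "low": {
        "network_qos": "best-effort",
        "storage_tier": "tier-3",
        "cpu_shares": "low",
    },
}

def provision(vm_name, policy_name):
    """Resolve a VM's resources from the shared policy so that every
    layer makes decisions from the same definition."""
    policy = POLICIES[policy_name]
    return {
        "vm": vm_name,
        "network": policy["network_qos"],
        "storage": policy["storage_tier"],
        "compute": policy["cpu_shares"],
    }

email = provision("email-01", "critical")    # mission-critical application
monitor = provision("monitor-01", "low")     # low-importance application
```

The point of the sketch is that changing the "critical" definition in one place would change network, storage, and compute treatment everywhere at once, rather than requiring per-device reconfiguration.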
Perhaps one of the most difficult hurdles for the software-defined data center to overcome is corporate buy-in for the movement.
Currently, enterprise IT processes are predictable. Budgets can be built on a well-defined process, and timeframes, for both hardware acquisition and service implementation, are predictable. Management frameworks like ITIL and MOF are in active use. The adoption of virtualization shook up the environment, but many of the same processes could be maintained.
The introduction of the software-defined data center is going to shake up the business side of the house. New questions will need to be asked, such as how the business prioritizes its applications. Change management will need to be adjusted, since changes will arrive by way of policies being introduced and tuned, and IT resource management will need to rethink capacity planning.
The other major component of corporate buy-in for the software-defined data center is the shift in traditional IT silos. Existing design, administration, monitoring, and capacity planning roles and tasks will need to be adjusted significantly. Additionally, the traditional roles of the network, server, and storage teams will need to blur together. Only the very low-level tasks (rack, stack, connect, and so on) will remain distinct; as policy implementations take hold, the higher-level roles will converge.
As it stands now, the software-defined data center is just a direction. Existing technologies, such as server virtualization and OpenFlow networking, are paving the way for the concept. Management of this environment will need to shift to a methodology that is common across all infrastructure. Finally, the business will have to be open to the change. Server virtualization introduced a change in direction that businesses adopted, so making the move to a higher level of abstraction, with the corresponding change in management, should not be a stretch.
A number of vendors have already embraced the levels of integration and intelligence necessary to provide insight to and from the virtualization stack. But this is just the beginning of the work needed to enable the software-defined data center. Additional vendor support, increased functionality, and richer communication between all components must be developed to further empower implementations.
The software-defined data center is a reasonable long-term strategy to look forward to. And with the proper adjustments in technology, policy and business, it's definitely attainable.