
Tintri VMstore with Oracle RAC Best Practice Guide

Executive Summary

Oracle Real Application Clusters (RAC) extends the performance of an Oracle database by running a single database image across multiple servers. Physical deployments of Oracle RAC require multiple separate servers to host the individual RAC nodes. When Oracle RAC is deployed with VMware, however, each RAC node runs as a separate virtual machine (VM). Running Oracle RAC on VMware provides deployment and manageability options that are not available in physical deployments of RAC.

  • Move production Oracle RAC nodes between physical hosts – perform hardware upgrades and maintenance tasks on physical hardware without incurring downtime to individual RAC nodes.

  • Host multiple Oracle RAC VMs on one physical host - create test and development systems that mirror the production Oracle RAC environment but use a single physical host.

The power of Oracle RAC comes from multiple database servers (RAC nodes) sharing the same database files. But the power of RAC is only realized when the RAC nodes use a foundation of high-performance shared storage. Oracle RAC deployments in VMware have the additional performance challenge of requiring storage that is well suited for virtualized application workloads.

The Tintri VMstore is purpose-built to handle the storage I/O requirements of high performance database workloads. Tintri storage actively adapts to demanding workloads, enabling virtualization administrators and DBAs to focus on running the database instead of managing the storage infrastructure.

  • Predictable Performance – Tintri storage arrays combine flash storage with VM-aware technology to oversee the performance needs of individual VMs without requiring manual tuning or configuration. VMs achieve consistent high performance and fast response times.

  • Eliminate VM Contention – Tintri provides VMs with dedicated performance queues to prevent rogue VMs from starving their neighbors of performance. This prevents high-pressure I/O tasks - such as backups and full table scans - from affecting other applications that share the VMstore.

  • Monitor Performance – detailed per-VM and per-vDisk analytics provide details on the performance of individual RAC nodes and database files. Drill down into host, network and storage latency metrics for each RAC node to quickly identify sources of performance bottlenecks that were heretofore hidden in the virtualization infrastructure.

  • Simplify Deployment – Tintri designed the VMstore so that IT and VM administrators with a working knowledge of VMware vSphere™ can deploy and use the system in 30 minutes or less.

Oracle RAC is typically employed with mission-critical databases. These databases must be provided with manageability tools that support the uptime and recoverability requirements of the individual RAC nodes. However, VMware places limitations on the vSphere manageability tools that can be used with Oracle RAC: VMware clones and Storage vMotion are not supported, and VMware snapshots have only limited support. The Tintri VMstore does not share these limitations and features powerful per-VM manageability tools for creating snapshots, clones and replicas that are uniquely suited to Oracle RAC.

  • SnapVM – create point-in-time snapshot copies of individual RAC nodes. Create a baseline, or reference copy, of the RAC node before applying OS upgrades or Oracle patches.

  • CloneVM – clone existing RAC nodes to expand the Oracle RAC database cluster, or quickly replace a failed RAC node from a previously created snapshot.

  • ReplicateVM – use VM-level replication to copy an entire RAC cluster to another location for DR purposes. Create multiple RAC clusters for test, development and training purposes.

With its flash-based performance, powerful VM management tools, and simple deployment model, the Tintri VMstore is an ideal storage platform for running Oracle RAC databases on VMware.

Overview

Oracle makes transactional database software that is deployed by thousands of companies to store, manage, and derive business value from structured data. Oracle can be deployed as a stand-alone database or with multiple database servers in a Real Application Cluster (RAC).

This best practices paper focuses on the installation of Oracle RAC with Tintri and VMware. It details the steps necessary to add Tintri storage to an Oracle RAC node. Detailed instructions for installing Oracle RAC with VMware are provided in the white papers listed in the appendix of this document.

Tintri has published a best practices paper for deploying a single instance of the Oracle database with the Tintri VMstore. That guide contains detailed deployment and performance tuning recommendations for running Oracle with VMware on a Tintri VMstore. The performance tuning recommendations in that document also apply to individual Oracle RAC nodes.

Consolidated List of Practices

This section summarizes the recommended best practices for deploying a Tintri VMstore with Oracle RAC. Click on a recommendation to jump to the section of the document that corresponds to that specific best practice.


DO: Apply recommendations from vendor-specific best practice guides for RHEL 7, VMware, Oracle, and Tintri.

DO: Enable the “multi-writer” flag on vDisks used with Oracle ASM and Oracle RAC.

DO: Use Tintri SnapVM, CloneVM, and ReplicateVM tools with Oracle RAC VMs.

DO: Use either UDEV rules or ASMLib to identify disks used by Oracle ASM.

DO: If you decide to employ ASMLib, refer to the RedHat support page for details on acquiring and installing ASMLib for RedHat Enterprise Linux 7.

DO: Use VMware vMotion to move production Oracle RAC VMs between ESXi hosts.

DO: Use Tintri SnapVM, CloneVM, and ReplicateVM technology with Oracle RAC VMs.

DO: Note that Tintri SyncVM cannot be used with Oracle RAC VMs because they contain externally shared vDisks. This is by design and ensures the safety of the data in the shared vDisks.

DO: Use the VMstore Performance Dashboard to view the performance of the Oracle RAC nodes and troubleshoot the latency breakdown across the infrastructure at a virtual disk and VM level.

DO: Use Oracle RMAN for Oracle RAC database backups and restores.


 

Intended Audience

This Best Practice Guide assists individuals who want to architect and deploy production Oracle RAC databases with VMware and Tintri VMstore™ storage systems. Prior knowledge of Oracle RAC, RedHat Linux, virtualization, networking, and storage technology will help in understanding the concepts covered in this best practice paper.

Assumptions

This paper provides best practices for deploying Tintri VMstores with Oracle Database 12c RAC, Red Hat Enterprise Linux (RHEL), and VMware vSphere. This document is not intended to replace the vendor-specific best practices provided by Oracle, Red Hat and VMware for their respective platforms. We recommend that you download and follow the best practice guides provided by those companies. A list of best practice guides for Oracle, RHEL and VMware is documented in the References section of this document.

The reference system for these best practices employed Oracle 12c RAC database, RHEL 7, VMware vSphere 6, and Tintri OS 3.2.x. Cisco UCS blades ran the ESXi hosts, and the UCS blades were connected to the Tintri VMstore via 10GbE networking.

The skills needed to install Oracle RAC on VMware are the same skills needed to install Oracle RAC on physical hardware. Although no prior knowledge of Tintri is needed when installing or running Oracle database software, we recommend having familiarity with the installation and operation of Oracle Clusterware, ASM, and Oracle RAC when deploying Oracle RAC on VMware with Tintri.

This document assumes you are working with a fully configured, highly available VMware infrastructure. The configuration of hosts, networking, storage, and virtual infrastructure components is out of the scope of this document. Recommendations on the design, deployment, and management of virtual infrastructures powered by Tintri VMstore storage appliances are provided in the Tintri VMstore with VMware Best Practice Guide.

 

Vendor Guidelines

The following vendor guidelines were used in this deployment of Oracle RAC with VMware and Tintri. Every attempt should be made to follow the suggestions in these guides, while keeping in mind that these vendor-specific best practices were made without knowledge of your unique deployment architecture and database requirements.


DO: Apply recommendations from vendor-specific best practice guides for RHEL 7, VMware, Oracle, and Tintri.



 Tintri VMstore with Oracle Best Practice Guide

Tintri has published a best practice guide for deploying a single instance of an Oracle database with Tintri, VMware, and RHEL. The best practices in this guide also apply to deployments of Oracle RAC.

Oracle RAC with RHEL 7 and VMware Deployment and Best Practices Guides

When deploying Oracle database as a virtual machine, care should be taken to adhere to best practice guides for deploying Oracle RAC with RHEL and VMware. The exact same best practices apply to the operating system whether the machine is running on physical hardware or virtualized in a VM. For this paper, the recommendations in the Red Hat Best Practices for Deploying Oracle RAC Database 12c with RHEL 7 were followed.

VMware vSphere 6.0 Best Practices Guides

VMware settings can affect the performance and scalability of your Oracle database. There is a wealth of information on VMware and Oracle performance in the VMware Best Practices guides.

 

Oracle RAC Deployment Architecture

When an Oracle RAC database is deployed with physical servers, the deployment includes multiple nodes (servers) connected to each other by a private network. The database files are located on a shared storage subsystem where they can be accessed by all of the nodes. When Oracle RAC is deployed with VMware, however, the RAC nodes are deployed as separate VMs instead of physical servers.

There are two types of deployment architectures used with Oracle RAC with VMware: Production and Dev/Test.

Oracle RAC Production Deployment Architecture

The first type of deployment architecture is used with production Oracle RAC databases where performance and availability are the primary goals. In this case individual Oracle RAC nodes are deployed as single VMs, with each VM running on a dedicated vSphere host. This deployment architecture ensures that all of the physical host resources are available to the individual RAC node at all times.

The production deployment architecture is shown in figure 1.


Figure 1 – Deployment Architecture for a 3-Node Production Oracle RAC Database

 

Oracle RAC Dev/Test Deployment Architecture

The second type of Oracle RAC deployment architecture is used to create Oracle RAC clusters for training, test, development, and QA purposes, where lower deployment cost is the primary goal. In this deployment of Oracle RAC the individual RAC nodes run as VMs but share the same vSphere host. Absolute performance is not the goal of this deployment, so we can take advantage of VMware’s virtualization technology to run multiple VMs on a single ESXi host and minimize the deployment hardware, and thus costs.

The Dev/Test deployment architecture is shown in figure 2.


Figure 2 – Deployment Architecture for a 3-Node Test and Development Oracle RAC Database

 

VMware Multi-Writer Flag and Oracle ASM

VMFS is a clustered file system that (by default) prevents multiple virtual machines from opening and writing to the same virtual disk (.vmdk file). This protects against more than one virtual machine inadvertently accessing the same .vmdk file.

However, Oracle RAC requires that multiple RAC nodes be able to access and update the same virtual disk. To that end Oracle RAC uses Oracle ASM to manage and control access to shared database files stored on the same virtual disks. When deploying Oracle RAC we must first disable the native VMFS file locking technology before installing and running Oracle ASM.

VMware has a KB article that describes how to use the “multi-writer” flag in vSphere to disable native VMFS file locking and allow individual vDisks to be shared by multiple virtual machines. Use the multi-writer flag with all disks that are used with Oracle ASM and Oracle RAC. The instructions for the use of the “multi-writer” flag are detailed in VMware KB article 1034165.

VMware KB article 1034165 - Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag
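
As a sketch of what the KB article describes, the multi-writer setting shows up in the VM's VMX file as one sharing entry per shared vDisk. The device slots and file names below are illustrative, not taken from the reference system:

```
disk.EnableUUID = "TRUE"
scsi1:0.fileName = "racnode1_1.vmdk"
scsi1:0.sharing = "multi-writer"
scsi1:1.fileName = "racnode1_2.vmdk"
scsi1:1.sharing = "multi-writer"
```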


DO: Enable the “multi-writer” flag on vDisks used with Oracle ASM and Oracle RAC.



Shared Hard Disk Architecture for ASM

Each Oracle RAC node connects with a set of Hard Disks that are managed by Oracle ASM. These Hard Disks are shared by all of the RAC nodes in the cluster. With VMware this is accomplished by creating Hard Disks in the first Oracle RAC node, setting the multi-writer flag on these disks, and then connecting the Hard Disks with the other RAC nodes. Figure 3 shows a simple 2 node Oracle RAC cluster that demonstrates the shared Hard Disk architecture.

  • Each RAC node uses local Hard Disks for OS files and for Oracle binaries.

  • The first RAC node contains Hard Disks that are shared with other RAC nodes. The Hard Disks and their data are stored in the Datastore folder containing the VM files for the first RAC node.

  • The Hard Disks in the second RAC node link to the Hard Disk files stored in the first RAC node.

  • A third RAC node would also link to the Hard Disk files stored in the first RAC node, and so on.

In this deployment architecture the location of the Hard Disk Files does not affect the capabilities of the other RAC nodes. The Hard Disk files can be accessed whether the first RAC node is powered on or not.
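
For example, a shared Hard Disk entry in the VMX file of a secondary RAC node points back at the Datastore folder of the first node, while local disks use files in the node's own folder. The datastore and file names here are hypothetical:

```
scsi1:0.fileName = "/vmfs/volumes/tintri-datastore/racnode1/racnode1_1.vmdk"
scsi1:0.sharing = "multi-writer"
```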

The configuration of the Oracle RAC nodes and the location of the Hard Disk files should be considered when the Tintri SnapVM, CloneVM, and ReplicateVM tools are used. Details on the use of these powerful data management tools with Oracle RAC is discussed in another section of this document.


Figure 3 – Oracle RAC Nodes with Shared ASM Disks

VMware vSphere Limitations of the Multi-Writer Flag

When the multi-writer flag is used with vDisks in a VM several vSphere functions are disabled. Review VMware KB article 1034165 for a complete list of Supported and Unsupported actions or features, but be aware that the following three major vSphere functions are disabled.

  • vSphere Snapshots (limited functionality)

  • vSphere Cloning

  • vSphere Storage vMotion

Tintri VM Supports the Multi-Writer Flag

When the multi-writer flag is used with vDisks, the following Tintri VMstore functions are fully supported:

  • SnapVM
  • CloneVM
  • ReplicateVM

DO: Use Tintri SnapVM, CloneVM, and ReplicateVM tools with Oracle RAC VMs.


 

Oracle ASM and the Tintri VMstore

Oracle Automatic Storage Management (ASM) is a volume manager and a file system that is used with unformatted disks. Oracle recommends the use of ASM with Oracle RAC as it provides specific volume management and file system functions that simplify the deployment of Oracle RAC.

Oracle ASM must be able to identify individual disk devices (vDisks) via persistent device names and it requires specific ownership and file permissions. Providing persistent device names with Linux is challenging as the names and the discovery order of Linux device files are dynamically assigned and can change across reboots of the server. This challenge applies to both physical and virtualized Linux.

There are two methods of providing persistent device names for Linux disks: udev rules and the Oracle ASMLib library. Both methods of managing disk devices for ASM are supported for use with the Tintri VMstore.


DO: Use either UDEV rules or ASMLib to identify disks used by Oracle ASM.



Storage – UDEV rules

RHEL 7 native udev rules provide a reliable method for ASM to uniquely identify shared disks by referencing the SCSI ID of the disk.

Testing with Oracle RAC, RedHat Linux and the Tintri VMstore confirmed that udev rules work reliably with vDisks that are stored on the Tintri VMstore. A copy of the udev rules that were used to test this functionality are documented in Appendix B of this document.
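
For reference, an entry in a udev rules file of the kind listed in Appendix B takes the following general shape; the SCSI ID, symlink name, and oracle:dba ownership are placeholders for your environment:

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules (illustrative entry)
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29...", SYMLINK+="oracleasm/asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
```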

Storage – ASMLib

Oracle ASMLib can be used as an alternative to udev rules when managing ASM disk devices. ASMLib uniquely identifies individual vDisks by adding a disk marker (tag) to the disk and by modifying the disk label. Note that Oracle ASM does not require ASMLib to operate and all ASM features are available with or without ASMLib.

ASMLib consists of the following components.

  • An open source (GPL) kernel module package: kmod-oracleasm

  • An open source (GPL) utilities package: oracleasm-support

  • A closed source (proprietary) library package: oracleasmlib

Details on how to acquire and install ASMLib for RedHat Enterprise Linux 7 are provided at the following RedHat support page, dated March 6, 2015.

Oracle ASMLib Availability and Support for RedHat Enterprise Linux


DO: If you decide to employ ASMLib, refer to the RedHat support page for details on acquiring and installing ASMLib for RedHat Enterprise Linux 7.



Deploying ASM with UDEV Rules

This section describes the steps necessary to deploy Oracle RAC on VMware with the Tintri VMstore.

Note: Appendix B includes examples of the udev rules file and scsi_id commands.

Step 1: Create the first RAC node

  1. Create the VM and configure the network. Reference the Oracle Databases on VMware RAC Deployment Guide for specific recommendations for the VM, Network, and vDisks.

  2. Create multiple Hard Disks (vDisks) for use by ASM. Use the thin Disk Provisioning option.

  3. Edit the VM Options > Advanced > Normal > Configuration Parameters... to expose the SCSI ID for all vDisks by adding the VMware option “disk.EnableUUID = TRUE”.

  4. Edit the VM Options > Advanced > Normal > Configuration Parameters... and add the multi-writer flag to each ASM disk by setting the disk’s sharing option (for example, scsi1:0.sharing = “multi-writer”). Details on setting this option are shown in VMware KB article 1034165. See Figure 4 for an example of the Configuration Parameters.

  5. Note: although you can add this option using the Virtual Machine Properties dialog box, re-confirming that the option is set can require reviewing the VMX file for the VM. See Figure 5 for an example of a VMX file.

  6. Load the RHEL 7 OS. Reference the Deploying Oracle RAC Database 12c on RHEL 7 for Best Practices related to RHEL 7 and Oracle RAC.

  7. Using Linux, create a single partition on each disk. This step is required by ASM.

  8. Use the Linux command “scsi_id” to capture the SCSI id of the individual disks.

  9. Create Linux udev rules in the file /etc/udev/rules.d/99-oracle-asmdevices.rules.

  10. Execute “udevadm test /sys/class/block/sdc1” to confirm that ID_SERIAL matches the SCSI ID.
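
Steps 7 through 10 above can be sketched as the following root shell session; the /dev/sdc device name and the rules file path are examples, and the choice of partitioning tool is a matter of preference:

```shell
# Create the single partition that ASM requires on each shared disk.
parted -s /dev/sdc mklabel msdos mkpart primary 2048s 100%

# Capture the SCSI ID to use in the udev rule for this disk.
/usr/lib/udev/scsi_id -g -u -d /dev/sdc

# After editing /etc/udev/rules.d/99-oracle-asmdevices.rules, reload and verify.
udevadm control --reload-rules
udevadm test /sys/class/block/sdc1 2>&1 | grep ID_SERIAL
```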

Step 2: Create Additional RAC Nodes

  1. Use the Tintri VMstore CloneVM tool to clone the first RAC node.

  2. Remove and delete the ASM vDisks from the cloned VM.

  3. Add the ASM disks from the first RAC node to the current VM with the “Use an existing virtual disk” option. Browse the VMstore and select the appropriate vDisk from the folder containing the first RAC node.

  4. Confirm that the option “disk.EnableUUID = TRUE” is set.

  5. Confirm that the multi-writer flag is set for each ASM disk. Open the VMX file for the Virtual Machine and look for the scsiX:Y.sharing = “multi-writer” entries. See Figure 5 for an example of a VMX file with the multi-writer option set.

  6. Execute the Linux command “udevadm test /sys/class/block/sdc1” and compare the ID_SERIAL of the shared vDisks with the SCSI ID of the vDisks on the first RAC node. Each node should have identical SCSI IDs and udev results for the ASM disks.

  7. Repeat these steps for each new node in the RAC cluster.

Step 3: Create the Oracle ASM Disk Group and RAC database

  1. Download the Oracle binaries.

  2. Create an Oracle ASM disk group.

  3. Create the Oracle RAC database.
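
As a sketch of step 2, the disk group can be created from SQL*Plus connected to the ASM instance as SYSASM; the disk group name and udev-provided device paths below are illustrative:

```sql
-- Illustrative only: build a disk group from the udev-named shared disks.
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/asm-disk1',
       '/dev/oracleasm/asm-disk2';
```

In practice the disk group is often created with the ASM Configuration Assistant (asmca) during Grid Infrastructure installation, and the RAC database itself with dbca.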



Figure 4 – Configuration Parameters Showing the “multi-writer” Flag for SCSI 1:0 to 1:6


Figure 5 – A VMX File Showing that the “multi-writer” Flag is Set for SCSI 1:0 to 1:6

 

Deploying ASM with ASMLib

Step 1: Create the first RAC Node

  1. Create the VM and configure the network. Reference the Oracle Databases on VMware RAC Deployment Guide for specific recommendations for the VM, Network, and vDisks.

  2. Create multiple Hard Disks (vDisks) for use by ASM. Use the thin Disk Provisioning option.

  3. Edit the VM Options > Advanced > Normal > Configuration Parameters... and add the multi-writer flag to each ASM disk by setting the disk’s sharing option (for example, scsi1:0.sharing = “multi-writer”). Details on setting this option are shown in VMware KB article 1034165. See Figure 4 for an example of the Configuration Parameters.

  4. Note: although you can add this option using the Virtual Machine Properties dialog box, re-confirming that the option is set can require reviewing the VMX file for the VM. See Figure 5 for an example of a VMX file.

  5. Load the RHEL 7 OS. Reference the Deploying Oracle RAC Database 12c on RHEL 7 for Best Practices related to RHEL 7 and Oracle RAC.

  6. Using Linux, create a single partition on each disk. This step is required by ASM.

Step 2: Create Additional RAC Nodes

  1. Use the Tintri VMstore CloneVM tool to clone the first RAC node.

  2. Remove and delete the ASM vDisks from the cloned VM.

  3. Add the ASM disks from the first RAC node to the current VM with the “Use an existing virtual disk” option. Browse the VMstore and select the appropriate vDisk from the folder containing the first RAC node.

  4. Confirm that the multi-writer flag is set for each ASM disk. Open the VMX file for the Virtual Machine and look for the scsiX:Y.sharing = “multi-writer” entries. See Figure 5 for an example of a VMX file with the multi-writer option set.

  5. Repeat these steps for each new node in the RAC cluster.

Step 3: Create the Oracle ASM Disk Group and RAC database

  1. Download the Oracle binaries.

  2. Download ASMLib for RedHat Enterprise Linux 7.

  3. Configure the Oracle ASMLib Library Driver.

  4. Create an Oracle ASM disk group.

  5. Create the Oracle RAC database.
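
Steps 2 and 3 above can be sketched with the standard oracleasm utilities; the disk label and device name are examples:

```shell
# Configure and start the ASMLib driver (run as root on every node).
oracleasm configure -i
oracleasm init

# Mark each partitioned shared disk for ASM (run on one node only).
oracleasm createdisk DISK1 /dev/sdc1

# On the remaining nodes, discover the marked disks.
oracleasm scandisks
oracleasm listdisks
```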

Oracle RAC and Tintri Data Management Tools

Oracle RAC and VMware vMotion

VMware vMotion can be used to move a production Oracle RAC node from one ESXi host to another with no downtime. Tintri testing confirmed that Oracle RAC VMs running with a Tintri VMstore can move successfully from one ESXi host to another and back again. Furthermore, there were no false RAC node evictions or RAC cluster fencing errors during the testing. The system was entirely stable during the vMotion tests.

NOTE: while we can use vMotion to move VMs from one vSphere host to another, vCenter will prevent us from executing a storage vMotion. VMware vCenter does not support storage vMotion for VMs with vDisks that have the Multi-Writer flag set to true. However, Tintri ReplicateVM can be used to replicate VMs and their vDisks to additional Tintri VMstores. Details on ReplicateVM are included in the section of this document titled “Oracle RAC and Tintri VM Management Tools”.


Figure 6 – vMotion an Oracle RAC Node to another vSphere Host


DO: Use VMware vMotion to move production Oracle RAC VMs between ESXi hosts.


 

Oracle RAC and Tintri VM Management Tools

When the multi-writer flag is used with a VM several vSphere functions are disabled, including snapshots, cloning and Storage vMotion. The Tintri VMstore does not share this limitation and we can use the VMstore management tools to provide Snapshot, Cloning and Replication for Oracle RAC nodes.

It is important to understand the contents of the Oracle RAC VM when using the SnapVM, CloneVM and ReplicateVM tools. RAC VM nodes include Disk Files that are local to the VM and can include links to Disk Files from another VM. A VMstore VM snapshot will make a complete copy of the VM, which can include data from the Hard Disks as well as links to other Disk Files. Snapshots of links do not contain the data from the linked Disk Files, only the Datastore and folder name that comprise the link.

For example, figure 7 shows a 3 node Oracle RAC cluster. The ASM disks in RAC node 2 and RAC node 3 are actually links to the Hard Disks in RAC node 1. For ASM disks the Datastore and folder name in the Hard Disk properties should point to the first Oracle RAC node that was created. Figure 9 shows a VM properties dialog box with the Datastore and folder name embedded in a Disk File link.


Figure 7 – ASM Disks in a Virtualized Oracle RAC Cluster

The following paragraphs describe the action of the Tintri VM management tools on two configurations of virtualized Oracle RAC nodes – what this paper calls “primary” and “secondary” RAC nodes. The primary node is the RAC node that was created first, the node for which all Disk Files are local to the VM. The secondary RAC nodes are the VMs which share the disks in the “primary” node. These RAC nodes have both local Disk Files (for their OS and for the Oracle binaries) and links to the shared ASM Hard Disk files. These Hard Disk files are located in the Datastore folder that contains the “primary” RAC node.

 

Tintri SnapVM

Use SnapVM to provide baseline and on-line backup copies of the RAC VM. Baseline snapshots are a useful safety net that captures the state of a VM when installing Oracle RAC software and when applying patches to existing RAC nodes. On-line backup copies can be used to restore a failed RAC node.

  1. Snapshots of the primary Oracle RAC node include all of the Hard Disks (Disk Files). Following VMware’s best practices for deploying Oracle RAC, all of the Hard Disks in the primary node are local to the VM and the Disk Files are included in the Tintri snapshot.

  2. Snapshots of a secondary Oracle RAC node include the local Disk Files and links to the shared ASM disks. The actual ASM disks (Disk Files) are stored in the folder with the primary Oracle RAC node. The data in the shared Disk Files will not be included with the snapshot.

Tintri CloneVM

Use CloneVM to quickly create copies of individual RAC nodes. Cloned RAC nodes can be used to expand a cluster by creating additional RAC nodes. A cloned RAC node can also be used to replace a RAC node that has failed.

  1. The name of the folder containing a cloned VM is selected when the clone is created. For example, the name of the cloned VM can be the original name appended with “-clone”, or any name you desire. Figure 8 shows the Tintri VMstore clone dialog box, and Figure 9 shows an example of the Virtual Machine Properties of a cloned VM.

  2. A clone of the primary Oracle RAC node includes all of the Hard Disks (Disk Files). The Hard Disks in the primary Oracle RAC node are local to the VM and all of the Disk Files will be cloned.

  3. A clone of a secondary Oracle RAC node includes the local Hard Disks (Disk Files) and links to the shared ASM disks. The actual ASM disks (Disk Files) are stored in the folder with the primary Oracle RAC node and will not be included in the clone.

  4. When cloning the RAC nodes, notice that the containing folder names change while the links from the secondary to the primary RAC node still point to the original Datastore and folder name. In each secondary RAC node, use the Virtual Machine Properties dialog box to update the links to the ASM disks (Disk Files) so they use the new folder name of the cloned primary Oracle RAC node. Refer to Figure 9 for an example of the Datastore and folder name for a Hard Disk.


Figure 8 – The CloneVM Dialog Box – Create new VMs

 

Figure 9 – The Virtual Machine Properties of a Cloned VM

 

Tintri ReplicateVM

Use ReplicateVM to replicate individual RAC nodes to a VMstore in the same data center or to VMstores in remote data centers. Replicated RAC clusters can be part of a Disaster Recovery plan for Oracle RAC. ReplicateVM can also be used to create multiple copies of production Oracle RAC clusters for DBA training, QA, Test and Development purposes.

  1. A replica of the primary Oracle RAC node includes all of the Hard Disks (Disk Files). The Hard Disks in the primary Oracle RAC node are local to the VM and all of the Disk Files will be replicated.

  2. A replica of a secondary Oracle RAC node includes the local Hard Disks (Disk Files) and links to the shared ASM disks. The actual ASM disks (Disk Files) are stored in the folder with the primary Oracle RAC node and will not be included in the replica of the secondary RAC node.

  3. When starting the replicated Oracle RAC cluster it is recommended to start the primary Oracle RAC node first. When Oracle Clusterware and ASM are running, join the secondary nodes to the cluster.

  4. After replicating RAC nodes, double check the links that connect the Hard Disks in the secondary RAC node to the primary RAC node. These links will point to the original folder and Datastore name of the primary RAC node. In each secondary RAC node use the Virtual Machine Properties dialog box to update the links of the ASM disks (Disk Files) so they point to the correct Datastore name and the folder name used by the new primary Oracle RAC node. Refer to Figure 9 for an example of the Datastore and folder name for a shared Hard Disk.


DO: Use Tintri SnapVM, CloneVM, and ReplicateVM technology with Oracle RAC VMs.




Figure 10 – Tintri ReplicateVM Protection

 

Tintri SyncVM

Tintri SyncVM has the ability to restore individual Hard Disks to a point in time. However, SyncVM has been designed to work with local Hard Disks, not with VMs that have externally shared Hard Disks - disks that are linked to Disk Files in another VM. Externally shared Disk Files are owned by another VM, and the results of restoring a shared Disk File could be unpredictable for both VMs. Thus the Tintri engineers have designed SyncVM to identify and protect VMs with shared external disks. Should you attempt to use SyncVM with an Oracle RAC VM, the VMstore will display an error message stating that the VM contains external shared vDisks.


DO: Note that Tintri SyncVM cannot be used with Oracle RAC VMs because they contain externally shared vDisks. This is by design and ensures the safety of the data in the shared vDisks.




Figure 11 – SyncVM Operation Failed Message

 

Tintri VMstore Performance Dashboard

The Tintri VMstore is the first storage product designed to support virtual machines (VMs) without forcing the administrator to deal with low-level storage details. Tintri’s goal is to simplify the deployment and management of virtual machines. To that end the VMstore provides several ways to view the performance of the VMs on the VMstore.

  1. The Performance Dashboard provides a graphical view of the overall performance of the system, including IOPS, Throughput, Latency, Performance Reserves, and Physical Space consumed.

  2. The Virtual Machines (VMs) tab provides performance details for individual VMs, including IOPS, MBps, Reserves, Latency, and much more.

  3. The Virtual Machines (Virtual Disks) tab provides performance details on the VMDKs in each VM.

  4. The Virtual Machines (Snapshots) tab provides space and creation details for individual snapshots, including the Source VM, Created Date, Change MB, Cloned Count, and Hypervisor Type.

The system gathers performance statistics every 10 seconds and physical space information every 10 minutes, and keeps the data for seven days. When you first load a graph, statistics from the latest collection are displayed; thereafter, the statistics are refreshed every 10 seconds.


Figure 12 – Virtual Machines Tab with per-VM Performance Statistics Showing Latency (milliseconds)

The Virtual Machines tab provides the most granular view of resources and entities on the VMstore. By reviewing the VM and Virtual Disk performance data provided by the Virtual Machines tab, you can easily access latency data that helps pinpoint the source of performance issues experienced by Oracle RAC nodes running in VMs.

For Oracle RAC Virtual Machines:

  • Each RAC VM will report the performance of its own local vDisks.

  • Only one of the VMs will display the performance data from the shared ASM virtual disks – usually the VM that started first and claimed the shared Virtual Disks.

  • When using the VMstore performance dashboard be sure to display the performance of all RAC nodes along with all of their vDisks. This ensures that the performance data from all of the shared vDisks can be seen.

DO: Use the VMstore Performance Dashboard to view the performance of the Oracle RAC nodes and troubleshoot the latency breakdown across the infrastructure at a virtual disk and VM level.




Figure 13 – Virtual Machines Tab with per-VMDK Performance Graphs showing Latency (milliseconds)

 

Using Oracle RMAN for Database Backups

The Oracle Recovery Manager (RMAN) tool is used to back up and restore Oracle RAC databases, tablespaces, and data files to offline disk or tape storage. It can be used to back up databases in their entirety or to create incremental backups. DBAs have come to rely on its restore features, particularly the ability to validate backups and to repair block corruption in the database.

Oracle recommends Recovery Manager (RMAN) and Fast Recovery Area (FRA) as the supported solution for creating and managing Oracle database backups. Tintri fully supports the RMAN tool for managing database backups.
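
As an illustration only, a minimal RMAN command file for a full backup might look like the following sketch. The file name and invocation are hypothetical, and the channel, retention, and FRA settings appropriate for your environment must be configured separately; treat this as a starting point, not a tested script.

```
# full_backup.rman – hypothetical command file; a sketch, run as:
#   rman target / cmdfile=full_backup.rman
RUN {
  # Back up the whole database plus the archived redo logs
  BACKUP DATABASE PLUS ARCHIVELOG;
  # Remove backups no longer needed under the retention policy
  DELETE NOPROMPT OBSOLETE;
}
```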


DO: Use Oracle RMAN for Oracle RAC database backups and restores.



Conclusion

The Tintri VMstore offers performance and manageability features that make it an excellent choice for deploying Oracle RAC databases.

  • FlashFirst technology is specifically designed to handle the storage I/O requirements of high performance virtualized workloads.

  • Dedicated performance queues provide QoS for individual VMs and prevent rogue VMs from stealing performance.

  • Per-VM data management capabilities simplify the protection, cloning and replication of Oracle RAC clusters.

  • Latency drill-down reports across host, network, and storage allow VM admins to confidently identify the source of performance bottlenecks within the entire virtualization infrastructure.

We hope the information in this best practice guide helps you get the most from your Oracle RAC database and the Tintri VMstore.

 


Appendix A – Oracle RAC Support Statements

As per VMware, here are some of the key facts about Oracle Support.

  • My Oracle Support document ID 249212.1 states that Oracle has an official support policy for virtualization on VMware vSphere.

  • Known issues – Oracle Support will accept customer support requests for Oracle products running on VMware virtual infrastructure if the reported problem is already known to Oracle. This is crucial—if you are running Database 9i, 10g, or another product with a long history, the odds are in your favor that Oracle has seen your problem before. If they have already seen it, they will accept it.

  • New issues – Oracle Support reserves the right to ask customers to prove that “new issues” attributed to Oracle are not a result of an application being virtualized. This is reasonable, as this is essentially the same policy that other ISVs use to some degree. It is key to look at the history of Oracle Support with regard to new issues.

  • Certification – VMware vSphere is a technology that resides under the certified Oracle stack (unlike other virtualization technologies that alter the OS and other elements of the stack). As a result, Oracle cannot certify VMware virtual infrastructure. However, VMware is no different in this regard from an x86 server—Oracle doesn’t certify Dell, HP, IBM, or Sun x86 servers.

VMware Support will accept accountability for any Oracle-related issue reported by a customer. By being accountable, VMware Support will drive the issue to resolution regardless of which vendor (VMware, Oracle, or others) is responsible for the resolution. In most cases, reported issues can be resolved via configuration changes, bug fixes, or feature enhancements by one of the involved vendors.

In the rare situation that another vendor is unable or unwilling to provide a satisfactory technical resolution, VMware Support will immediately notify the customer, assist in escalation, and explore other potential technical workarounds with the customer. VMware will also assist its customers with technical issues for other Oracle software products, besides the Oracle Database, and provide similar escalation assistance if needed.

Besides technical assistance, VMware Support will advocate on the customer’s behalf to:

  • Provide any relevant evidence that virtualization does not play a part in the Oracle product technical problem

  • Engage Oracle Support in resolving the customer’s technical issue, escalating management attention as appropriate

VMware recommends that customers take a logical approach and test Oracle’s support statement. Begin with pre-production systems, and as issues are encountered and SRs are filed, track Oracle’s response. VMware’s experience is that customers see no difference in the quality and timeliness of Oracle support’s response.

Tintri is a VMware Partner and we stand by VMware’s statement of support for Oracle. Tintri will assist customers with resolving issues with Tintri and VMware that are related to Oracle RAC.

 

Appendix B – Example udev Rules

This appendix includes the udev rules file and scripts that were used by Tintri in testing Oracle RAC with RedHat Linux and VMware.

Linux udev rules are available in Linux kernels 2.6 and beyond. The device manager udev parses rules to identify devices and to create device names. The udev daemon (udevd) reads the rules files at system startup and stores the rules in memory.

Rules files are located in the following directories:

/lib/udev/rules.d
Contains default rules files. Do not edit these files.

/etc/udev/rules.d/*.rules
Contains customized rules files. You can modify these files.

When creating udev rules for VMware, the SCSI ID of the vDisk must be exposed. This is accomplished by adding the VMware option “disk.EnableUUID = TRUE” to the advanced options of each Oracle RAC VM.
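
For reference, the setting appears in the VM’s configuration (.vmx) file as the single line below. Set it through the vSphere Client’s advanced configuration parameters rather than editing the file by hand; this fragment is shown only so the value is unambiguous.

```
disk.EnableUUID = "TRUE"
```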

The following commands display the SCSI ID of each shared disk device.

/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdd
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sde
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdf
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdg
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdh
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdi
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdj

The output of the previous commands lists the SCSI ID of each device. These values are used in the “RESULT==” statements in the file “99-oracle-asmdevices.rules”.

36000c2993cbdd2ba4fb6fa2c9f14d0ec
36000c2906627526b0fe6be82c2e6bbd1
36000c299d41b79320b630113d19252c2
36000c296bcd9822ec86abd1d183a602a
36000c294e3754ee4a12d54e1e872ba72
36000c29368b9acf634926162fd7d0637
36000c2953aced295ff9002b4df1a88a3

The following is the contents of the file “99-oracle-asmdevices.rules”, located at /etc/udev/rules.d/99-oracle-asmdevices.rules.

KERNEL=="sd?1", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --device=/dev/$parent", RESULT=="36000c2993cbdd2ba4fb6fa2c9f14d0ec", NAME="asmdisk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --device=/dev/$parent", RESULT=="36000c2906627526b0fe6be82c2e6bbd1", NAME="asmdisk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --device=/dev/$parent", RESULT=="36000c299d41b79320b630113d19252c2", NAME="asmdisk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --device=/dev/$parent", RESULT=="36000c296bcd9822ec86abd1d183a602a", NAME="asmdisk4", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --device=/dev/$parent", RESULT=="36000c294e3754ee4a12d54e1e872ba72", NAME="asmdisk5", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --device=/dev/$parent", RESULT=="36000c29368b9acf634926162fd7d0637", NAME="fra", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --device=/dev/$parent", RESULT=="36000c2953aced295ff9002b4df1a88a3", NAME="redo", OWNER="oracle", GROUP="dba", MODE="0660"
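
Because the rule lines above follow a fixed pattern, they can be generated with a small shell loop instead of being typed by hand. The sketch below uses the first three example SCSI IDs and device names from this guide; the `gen_rule` helper is hypothetical, and you would substitute the scsi_id output from your own environment.

```shell
#!/bin/bash
# Generate udev rule lines for Oracle ASM devices from paired lists of
# SCSI IDs and device names. The IDs are the example values from this
# guide; replace them with the scsi_id output from your own RAC nodes.

gen_rule() {
  # $1 = SCSI ID, $2 = device name to create under /dev
  printf 'KERNEL=="sd?1", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --device=/dev/$parent", RESULT=="%s", NAME="%s", OWNER="oracle", GROUP="dba", MODE="0660"\n' "$1" "$2"
}

ids=(36000c2993cbdd2ba4fb6fa2c9f14d0ec
     36000c2906627526b0fe6be82c2e6bbd1
     36000c299d41b79320b630113d19252c2)
names=(asmdisk1 asmdisk2 asmdisk3)

# Emit one rule line per device; redirect the output into
# /etc/udev/rules.d/99-oracle-asmdevices.rules as needed.
for i in "${!ids[@]}"; do
  gen_rule "${ids[$i]}" "${names[$i]}"
done
```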

Confirm that the udev rules work properly by executing the following commands and comparing the ID_SERIAL of the shared vDisks with the SCSI ID of the vDisks on the first RAC node.

udevadm test /sys/class/block/sdd1
udevadm test /sys/class/block/sde1
udevadm test /sys/class/block/sdf1
udevadm test /sys/class/block/sdg1
udevadm test /sys/class/block/sdh1
udevadm test /sys/class/block/sdi1
udevadm test /sys/class/block/sdj1
