Tag Archives: Powershell

Hyper-V Resource Metering

In this video we explore the new resource metering features in Hyper-V 2012 and 2012R2. We talk about the structure and uses of resource metering and then we use PowerShell to implement resource metering on a clustered configuration.

Resource Metering Overview

Resource metering enables administrators to collect data about the resource usage of a VM or a pool of VMs. This data is presented as average utilization over a time period: the time since metering was enabled, or since the last metering reset. The data can be used to build chargeback or showback models. As you will soon see, the data from resource metering is relatively simple; it is not meant to replace an enterprise performance monitoring tool such as System Center Operations Manager (SCOM). Resource metering can only be configured via the PowerShell interface.

Resource metering allows you to collect information about the VM’s CPU, memory, network and storage utilization. The data collected is stored in the virtual machine’s configuration file. This means that the resource metering data will stay with the VM when it is migrated to another host or cluster node.

A hierarchy of resource pools can be created to group VMs. The groups can represent any logical collection that is meaningful in your environment. A VM’s resources may only belong to one resource pool at a time. A resource pool gets its totals from the data stored in the configuration files of the VMs it contains; if a host has no VMs on it, its resource pools will not report any values and will be disabled. When creating a resource pool, a ‘pool type’ must be defined. While it is possible to configure different names for each of the resource pool types, it may be easier to name all of the pools for a particular logical collection with the same name, as we have done in the video. This makes collecting the data much easier, but is not a requirement.
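On a single host, the naming pattern above can be sketched as follows; the pool name, CSV path and pool types are illustrative, and the demo script later in this post does the clustered version of the same thing:

```powershell
# Create one pool of each metered type that shares a single logical name ("Pool1" is illustrative)
New-VMResourcePool -Name Pool1 -ResourcePoolType Processor, Memory, Ethernet

# VHD pools additionally require the storage path(s) they cover
New-VMResourcePool -Name Pool1 -ResourcePoolType VHD -Paths C:\ClusterStorage\CSV1
```

Because every pool type carries the same name, a single name can later be passed to Measure-VMResourcePool to gather all of the data for that logical collection.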

PowerShell Commands to Manage Resource Metering

Enable/Disable Resource Metering
Enable-VMResourceMetering
Disable-VMResourceMetering
Configure VM Resources for Metering
Set-VMProcessor
Set-VMHardDiskDrive
Set-VMMemory
Set-VMNetworkAdapter
Add-VMNetworkAdapterAcl
Remove-VMNetworkAdapterAcl
Creating/Removing Resource Pools
New-VMResourcePool
Set-VMResourcePool
Remove-VMResourcePool
Measuring VMs and Pools
Measure-VM
Measure-VMResourcePool
Reset-VMResourceMetering

Resource Metering Demo!

In the video we did a demo where we created resource pools called Pool1 on cluster HVC0 for our 3 VMs (App-1,2,3). We then gathered pool metering information from all of the nodes and displayed the aggregated data.
Here is the script that we used in the video:
ClusterResourcePoolFunctions.PS1

PowerShell Code for Resource Metering

###Demo 1 functions - Demo 1 Enable VM and Measure
###Show Cluster Resource Pool
function Get-ClusterResourcePool {
    param ($Cluster = ".",$ResourcePool = "*")
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    Get-VMResourcePool -ComputerName $nodes -Name $ResourcePool
}
### Enable Cluster Resource Metering VM
function Enable-ClusterResourcePoolVM {
    param ($Cluster = ".", $VMFilter = "*")
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    $VMS = Get-VM -ComputerName $Nodes -VMName $VMFilter -ErrorAction SilentlyContinue
    if($VMs){ #VMs found!
        $VMs |Enable-VMResourceMetering
    } else {
        "No VMs match filter"
    }
}
### Measure Cluster Resource Pool VMs
function Measure-ClusterResourcePoolVM{
    param ($Cluster = ".", $VMFilter = "*")
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    Measure-VM -ComputerName $nodes -Name $VMFilter -ErrorAction SilentlyContinue
}

#Demo 1 Enable VM and Measure
Get-ClusterResourcePool -Cluster HVC0
Enable-ClusterResourcePoolVM -Cluster HVC0 -VMFilter App*
start-vm -ComputerName HVC0N1 -Name App-2
Get-ClusterResourcePool -Cluster HVC0
Measure-ClusterResourcePoolVM -Cluster HVC0 -VMFilter App-*
stop-vm -ComputerName HVC0N1 -Name App-2

###Demo 2 functions

###Create Cluster Resource Pool
Function New-ClusterResourcePool {
    param ($Cluster = ".", $ResourcePool = "Pool1", $StoragePath = "NoneSupplied", $SwitchName="HVSwitch")
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    New-VMResourcePool -ComputerName $Nodes -Name $ResourcePool -ResourcePoolType Ethernet,Processor,Memory
    If ($StoragePath -eq "NoneSupplied"){$StoragePath = (Get-ClusterSharedVolume -Cluster $cluster| select -ExpandProperty sharedVolumeInfo).friendlyvolumename}
    New-VMResourcePool -ComputerName $Nodes -Name $ResourcePool -ResourcePoolType VHD -Paths $StoragePath
    Get-VMSwitch -ComputerName $Nodes -Name $SwitchName| Add-VMSwitch -ResourcePoolName $ResourcePool
}
### Set Cluster Resource Pool VM Assignment
Function Set-ClusterResourcePoolVM {
    param ($Cluster = ".",$ResourcePool="Primordial", $VMFilter = "*")
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    $VMS = Get-VM -ComputerName $Nodes -VMName $VMFilter -ErrorAction SilentlyContinue
    if($VMs){ #VMs found!
        foreach ($VM in $VMS) {
            Write-Debug "Setting resource pool $ResourcePool on VM $($VM.name)"
            $VM|Set-VMProcessor -ResourcePoolName $ResourcePool
            $VM|Set-VMMemory -ResourcePoolName $ResourcePool
            $VM|Get-VMNetworkAdapter| Set-VMNetworkAdapter -ResourcePoolName $ResourcePool
            $VM|Get-VMNetworkAdapter| Connect-VMNetworkAdapter -UseAutomaticConnection #ENABLES LIVE MIGRATION BETWEEN HOSTS IN CLUSTER
            $VM|Get-VMHardDiskDrive| Set-VMHardDiskDrive -ResourcePoolName $ResourcePool
        }
    } else {#no vms found!
        "No VMs match filter"
    }
}
### Get Cluster Resource Pool VM Assignment
Function Get-ClusterResourcePoolVM {
    param ($Cluster = ".", $VMFilter = "*")
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    $VMS = Get-VM -ComputerName $Nodes -VMName $VMFilter -ErrorAction SilentlyContinue
    if($VMs){ #VMs found!
        foreach ($VM in $VMS) {
            Write-Debug "Getting resource pool info for VM $($VM.name)"
            $MyObj = ""| select VM, CPU, RAM, Disk, Network
            $MyObj.VM = $VM.name
            $Myobj.CPU = $vm| Get-VMProcessor|select -ExpandProperty ResourcePoolName
            $MyObj.RAM = $vm| Get-VMMemory|select -ExpandProperty ResourcePoolName
            $MyObj.Network = ($vm| Get-VMNetworkAdapter|select -ExpandProperty PoolName|select -Unique) -join ","
            $MyObj.Disk = ($vm| Get-VMHardDiskDrive|select -ExpandProperty PoolName|select -Unique) -join ","
            $MyObj
        }
    } else {#no vms found!
        "No VMs match filter"
    }
}
### Measure Cluster Resource Pool
function Measure-ClusterResourcePool {
    param ($Cluster = ".",$ResourcePool)
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    $Pools = Get-VMResourcePool -ComputerName $Nodes | Where-Object { $_.ResourceMeteringEnabled -eq $true }|ForEach-Object {$_.name}| Select-Object -Unique
    foreach ($Pool in $Pools){
        $MyObj = ""| select PoolName,  AvgCPU, AvgRAM, TotalDisk, NetworkInbound, NetworkOutbound
        $MyObj.PoolName = $Pool
        $MyObj.AvgCPU = (Measure-VMResourcePool -ComputerName $nodes -name $pool -ResourcePoolType Processor `
                             -ErrorAction SilentlyContinue|Measure-Object -sum -Property AvgCPU).sum
        $MyObj.AvgRAM = (Measure-VMResourcePool -ComputerName $nodes -name $pool -ResourcePoolType Memory -ErrorAction SilentlyContinue|Measure-Object -sum -Property AvgRAM).sum
        $MyObj.TotalDisk = (Measure-VMResourcePool -ComputerName $nodes -name $pool -ResourcePoolType VHD -ErrorAction SilentlyContinue|Measure-Object -sum -Property TotalDisk).sum
        #Networking
        $networkGroup = Measure-VMResourcePool -computername $nodes -name $pool -ResourcePoolType Ethernet -ErrorAction SilentlyContinue | select -ExpandProperty NetworkMeteredTrafficReport|Group-Object -Property direction
        $MyObj.NetworkInbound  =  ($networkGroup|?{$_.name -eq "Inbound"}|select -ExpandProperty group|Measure-Object -Property TotalTraffic -Sum).sum
        $MyObj.NetworkOutbound  =  ($networkGroup|?{$_.name -eq "Outbound"}|select -ExpandProperty group|Measure-Object -Property TotalTraffic -Sum).sum
        $MyObj
    }
}

#Demo 2 Create, assign and measure pool
New-ClusterResourcePool -Cluster HVC0 -ResourcePool Pool1
Get-ClusterResourcePool -Cluster HVC0
Get-ClusterResourcePoolVM -Cluster HVC0 -VMFilter App* | Format-Table -AutoSize
Set-ClusterResourcePoolVM -Cluster HVC0 -ResourcePool pool1 -VMFilter App*
Get-ClusterResourcePoolVM -Cluster HVC0 -VMFilter App* | Format-Table -AutoSize
Start-VM -ComputerName (Get-ClusterNode -Cluster HVC0).name -Name App*
Measure-ClusterResourcePool -Cluster HVC0 -ResourcePool pool1

###Demo 3 functions
### Reset Cluster Resource Metering VM
function Reset-ClusterResourcePoolVM {
    param ($Cluster = ".", $VMFilter = "*")
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    $VMS = Get-VM -ComputerName $Nodes -VMName $VMFilter -ErrorAction SilentlyContinue
    if($VMs){ #VMs found!
        $VMs |Reset-VMResourceMetering
    } else {
        "No VMs match filter"
    }
}
### Reset Cluster Resource Metering
function Reset-ClusterResourcePool {
    param ($Cluster = ".",$ResourcePool="*")
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    Reset-VMResourceMetering -ComputerName $nodes -ResourcePoolName $ResourcePool -ErrorAction SilentlyContinue
}

#Demo 3 Reseting resource metering
Stop-VM -ComputerName (Get-ClusterNode -Cluster HVC0).name -Name App*
measure-ClusterResourcePoolVM -Cluster HVC0 -VMFilter App*
Reset-ClusterResourcePoolVM -Cluster HVC0 -VMFilter App-2
measure-ClusterResourcePoolVM -Cluster HVC0 -VMFilter App*
Reset-ClusterResourcePool -Cluster HVC0 -ResourcePool pool1
measure-ClusterResourcePool -Cluster HVC0 -ResourcePool pool1

#Demo 4 functions
### Disable Cluster Resource Metering VM
function Disable-ClusterResourcePoolVM {
    param ($Cluster = ".", $VMFilter = "*")
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    $VMS = Get-VM -ComputerName $Nodes -VMName $VMFilter -ErrorAction SilentlyContinue
    if($VMs){ #VMs found!
        $VMs |Disable-VMResourceMetering
    } else {
        "No VMs match filter"
    }
}
###Remove Cluster Resource Pool
function Remove-ClusterResourcePool {
    param ($Cluster = ".",$ResourcePool)
    $Nodes = (Get-ClusterNode -Cluster $Cluster).name
    Remove-VMResourcePool -ComputerName $Nodes -Name $ResourcePool -ResourcePoolType Ethernet,Processor,Memory,VHD
}

#Demo 4 Remove resource pools
Set-ClusterResourcePoolVM -Cluster HVC0 -ResourcePool "Primordial" -VMFilter App*
Disable-ClusterResourcePoolVM -Cluster HVC0 -VMFilter App*
Remove-ClusterResourcePool -Cluster HVC0 -ResourcePool pool1
Get-ClusterResourcePool -Cluster HVC0
Get-ClusterResourcePoolVM -Cluster HVC0 -VMFilter App*| Format-Table * -AutoSize

Resources

TechNet: Introduction to Resource Metering
MSDN: Hyper-V Resource Metering Overview

Hyper-V dVMQ

Deep Dive: Configuring dVMQ in Hyper-V

Virtual Machine Queue (VMQ) is a mechanism for mapping physical queues in a NIC to the virtual NIC in a VM partition (parent or guest). This mapping makes the handling of network traffic more efficient. The increased efficiency results in less CPU time in the parent partition and reduced latency of network traffic. Without VMQ, traffic for a vSwitch on a particular network interface is all handled by a single CPU core, limiting total throughput on a 10Gb interface to roughly 2.5-4.5 Gbits/sec (results will depend on the speed of the core and the nature of the traffic). VMQ is especially helpful for workloads that process a large amount of traffic, such as backup or deployment servers. For dVMQ to work with RSS, the parent partition must be running Server 2012R2; otherwise RSS cannot coexist with VMQ.

VMQs are a finite resource. A VMQ is allocated when a virtual machine is powered on. A queue will be assigned to each vNIC with VMQ enabled until all of the VMQs are exhausted. That assignment remains in place until the VM is powered off or migrated to another Hyper-V node. If you have more vNICs in your environment than VMQs on your physical adapters, you should only enable VMQ on the vNICs that will be handling the most traffic.
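You can check how many queues each physical adapter offers, and which VM currently holds each queue, with the built-in VMQ cmdlets (adapter names and output will vary by system):

```powershell
# Queues supported, enabled state and processor range per physical NIC
Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors, NumberOfReceiveQueues -AutoSize

# Current queue allocations (which VM owns each queue)
Get-NetAdapterVmqQueue | Format-Table Name, QueueID, VmFriendlyName -AutoSize
```

Running these before and after powering VMs on is a quick way to confirm that the vNICs you care about actually received a queue.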

Static VMQ

NovsVMQ
This image represents a Hyper-V host configured without VMQ in place. All network traffic for all the VMs is handled by a single core. With static VMQ (available in 2008R2), a VMQ is assigned to a specific CPU core and stays on that core regardless of workload.

Dynamic VMQ

dVMQ
This image introduces both Dynamic Virtual Machine Queue (dVMQ) and load balancing mode for NIC teaming. These features are new to Server 2012. dVMQ is very similar to VMQ with one major difference: dynamic VMQ scales the handling of the VMQs across a pool of CPU cores. When the network workload is light, all of the dVMQs are handled by a single CPU core, but as the network workload increases, so too does the number of CPU cores used. With dVMQ in 2012, each queue can only use one CPU core at a time, and a vNIC can only have one VMQ assigned to it.

Sum Mode/Min Mode

In our video we recommend Hyper-V Port AND Switch Independent for a Load Balancing and Failover (LBFO) team configuration on switches supporting Hyper-V workloads. This combination of load balancing mode and teaming mode puts the vSwitch in Sum mode, meaning we have the sum of all the VMQs from the NICs in the LBFO team. In the case of the left image above, we have 2 NICs in the team, each with 2 VMQs; with the team in Sum mode we have a total of 4 VMQs to allocate to vNICs. If we use Address Hash OR a Switch Dependent configuration on the team, it will be placed in Min mode. In the right image above, the same hardware now only offers 2 VMQs for vNICs. This is because inbound traffic for a particular vNIC may come in on any network interface in the team. Min mode may be a desirable configuration if you have very few vNICs on a vSwitch (a vNIC count equal to or less than the fewest VMQs on any NIC in the team).

Virtual Receive Side Scaling

Server 2012R2 introduces Virtual Receive Side Scaling (vRSS). This feature works with VMQ to distribute the CPU workload of receive traffic across multiple CPU cores in the VM. This effectively eliminates the CPU core bottleneck we experience with a single vNIC. To take full advantage of this feature, both the host and the guest need to be 2012R2. Enabling vRSS does come at the cost of extra CPU load in the VM and the parent partition. For this reason, vRSS should only be enabled on vNICs that will be exceeding 2.5 Gbits/sec on a regular basis.
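In 2012R2, vRSS is turned on from inside the guest by enabling RSS on the virtual adapter; a minimal sketch, where the adapter name "Ethernet" is illustrative:

```powershell
# Run inside the 2012R2 guest OS: enable RSS on the vNIC to activate vRSS
Enable-NetAdapterRss -Name "Ethernet"

# Verify the RSS settings now in effect on the vNIC
Get-NetAdapterRss -Name "Ethernet"
```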

Base and Max CPU

Base and Max CPU properties are used to configure which CPU cores will be used by VMQ.  The base processor is the first core in the group and max is the size of the group.  For example, with Hyper-Threading disabled, base=2 max=4 would assign cores 2-5.  VMQ will not leverage Hyper-Threading (HT); if HT is enabled, only even numbered cores are used.  For example: HT enabled, base=2 max=4 would assign even numbered cores 2-8.  Whenever possible it is best to choose a base value greater than 0 (or greater than 1 in the case of HT), as creating CPU bottlenecks on core 0 has caused performance issues in some implementations.
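The arithmetic above maps directly onto Set-NetAdapterVmq; the NIC name "NIC1" below is illustrative:

```powershell
# HT disabled: start at core 2 and use a group of up to 4 cores (cores 2-5)
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 4

# Confirm the values took effect
Get-NetAdapterVmq -Name "NIC1" | Format-Table Name, BaseProcessorNumber, MaxProcessors -AutoSize
```

With HT enabled, the same command would spread the group over even numbered logical processors, so base 2 with max 4 covers cores 2, 4, 6 and 8.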

Requirements and Configuration for VMQ

The following are required to use VMQ:
-Server 2008R2 (Static VMQ), Server 2012 (dVMQ), Server 2012R2 (dVMQ+vRSS)
-Physical NICs must support VMQ
-BelowTenGigVmqEnabled = 1 for 1Gb NICs (10Gb NICs have VMQ enabled automatically)
Follow these steps from the video to enable VMQ:
0. Enable VMQ for 1GB if required
–HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\VMSMP\Parameters\BelowTenGigVmqEnabled = 1
1. Install the latest NIC driver/firmware
2. Enable VMQ in the driver for the NIC (Process will vary by NIC model and manufacturer)
3. Determine values for Base and Max CPU based on hardware configuration
4. Assign values for Base and Max CPU
5. Configure VMs
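Step 0 can be scripted rather than edited by hand in regedit; a minimal sketch (run elevated, and note a reboot or driver restart may be required before the change takes effect):

```powershell
# Enable VMQ on 1Gb NICs by setting the VMSMP parameter
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\VMSMP\Parameters" `
    -Name BelowTenGigVmqEnabled -Value 1 -Type DWord
```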

Recommendations for VMQ/dVMQ/vRSS

-Use Switch Independent + Hyper-V Port to ensure the vSwitch is in SUM mode
-Always assign a base CPU other than CPU0 to ensure best performance and resiliency
-Remember that when assigning Base/Max CPU with Hyper-Threading enabled, only even numbered cores are used
-Multiplexor adapters will show a Base:Max of 0:0; do not change this item
-Configure Base and Max CPU for each NIC with as little overlap as possible
-Only assign Max Processor values of 1, 2, 4 or 8
–It is OK for Max Processor to extend past the last CPU core or the number of VMQs on the NIC

Troubleshooting VMQ

Here are a few things we have seen in the field when supporting VMQ

  • Most issues with VMQ are resolved by updating to the latest version of the NIC driver!
  • VMQ appears enabled but is showing 0 queues. This may impact only a single port on a multiport NIC.
    • *RssOrVmqPreference = 1 must be set on all NICs that will leverage VMQ (follow this link for more information)
    • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\[GUID of NIC Port]\*RssOrVmqPreference = 1

If you have an issue that you have experienced in your environment not listed here let me know so I can add it to the list!

PowerShell Code to Auto Configure VMQ Base/Max Processor

ConfigureVMQ.ps1

$Teams = Get-NetLbfoTeam
$proc = Get-WmiObject -Class win32_processor
$cores = $proc| Measure-Object -Property NumberOfCores -Sum|select -ExpandProperty sum
$LPs = $proc| Measure-Object -Property NumberOfLogicalProcessors -Sum|select -ExpandProperty sum
$HT = if($cores -eq $LPs){$false}else{$true}
function SetVMQsettings ($NIC, $base, $max){
    #As written this is a dry run; uncomment the next line to actually apply the settings
    #$nic|Set-NetAdapterVmq -BaseProcessorNumber $base -MaxProcessors $max
    Write-Host "$($nic.name):: Proc:$base, Max:$max"
}
#$LPs = 4 #testing var
#$ht = $false #testing var
foreach ($team in $teams){
	$VmqAdapters = Get-NetAdapterVmq -name ($team.members)
	#Create settings
	$VMQindex = 0
	Foreach($VmqAdapter in $VmqAdapterS){
		$VmqAdapterVMQs =$VmqAdapter.NumberOfReceiveQueues
        #$VmqAdapterVMQs = 2 #testing var
		if ($VMQindex -eq 0){#first team nic
			#base proc 1+HT and max eq to num remaining cores, num queues, whatever is less
			$base = 1+[int]$ht
			$max = ($LPs/(1+$HT)-1), $VmqAdapterVMQs|sort|select -Index 0
            SetVMQsettings -nic $VmqAdapter -base $base -max $max
           }
        else{#all other NICs excluding the first team NIC
            if ($VmqAdapterVMQs -gt ($LPs/(1+$HT))){ #queue count exceeds core count, so just start at base+1
                $base = 1+[int]$ht
                $max = ($LPs/(1+$HT)-1), $VmqAdapterVMQs|sort|select -Index 0
                SetVMQsettings -nic $VmqAdapter -base $base -max $max
            }
            else{ #cores greater than queues, so balancing is possible
                $StepSize = [int]((($LPs/(1+$HT))-$VmqAdapterVMQs-1)/($VmqAdapters.count-1))*$VMQindex+1
                $base = $StepSize * (1+$HT)
                $max = ($LPs/(1+$HT)-1), $VmqAdapterVMQs|sort|select -Index 0
                SetVMQsettings -nic $VmqAdapter -base $base -max $max
            }
        }
		$VMQindex++
	}
}

Resources

TechNet Networking Blog: Deep Dive VMQ Part 1, 2, 3

Building a Hyper-V Cluster – Creating Virtual Machines – Part 5/5

Creating and Managing VMs

In this video we will create highly available VMs.  First we create the virtual machines in the GUI then in PowerShell.

When creating a VM, ensure that you always check the box to store the virtual machine in a different location.  If you don’t, the VM’s configuration file and VHD files will be put in the Hyper-V default location, making it hard to tell which VHDs are associated with which configuration files.  If you check the store the virtual machine in a different location check box, all of the VM’s components will be stored in a single folder.  This will make your management life much easier!  Also, if the VM will be part of the cluster, be sure to create and manage the VM in Failover Cluster Manager rather than Hyper-V Manager.

Store the virtual machine in a different location

PowerShell Code

#Create a new VM
New-VM -Name JasonVM -Path c:\ClusterStorage\CSV1

#Add the VM to the cluster so it becomes highly available
Add-ClusterVirtualMachineRole -VMName JasonVM

#Start the VM and live migrate it to another cluster node
Start-ClusterGroup -Name JasonVM
Move-ClusterVirtualMachineRole -Name JasonVM

#Create and remove VM Snapshot/Checkpoints
Checkpoint-VM -Name JasonVM
Get-VM -Name JasonVM| Get-VMSnapshot
Get-VM -Name JasonVM| Get-VMSnapshot| Remove-VMSnapshot

#Shut down the VM
Stop-VM -Name JasonVM

#List the Hyper-V and Failover Clustering commands
Get-Command -Module hyper-v, failoverclusters

Resources

MSDN: Virtual Machine Live Migration Overview
TechNet:Windows PowerShell: Create Hyper-V virtual machines

Check out the other posts in this series!

Building a Hyper-V Cluster – Building The Hyper-V Cluster – Part 4/5

In this video we validate our cluster node configuration and then create the cluster. Once the cluster is formed, we update the names of various cluster components to match their function. Finally we set up a CSV on the cluster.

In Server 2012R2, cluster validation will help ensure that the nodes in the cluster are configured identically and correctly. By passing cluster validation and using hardware certified for 2012R2, we ensure our cluster will be in a supported configuration.

When we form the cluster we only need two items: the name and the IP of the cluster. The name we specify will be used to create a computer account in Active Directory. If the user running the New-Cluster command does not have rights to create computer accounts in AD, the account may be prestaged. If this is done, the account should be disabled and the user should have full permission on the account.

PowerShell Command

Test-Cluster -node 2k12r2-node1,2k12r2-node2
New-Cluster -Name HVC1 -node 2k12r2-node1,2k12r2-node2 -staticAddress 192.168.0.100

#Update Cluster Network Names to Match Function
(Get-ClusterNetwork| ?{$_.Address -eq "192.168.1.0"}).name = "Managment"
(Get-ClusterNetwork| ?{$_.Address -eq "10.0.1.0"}).name = "iSCSI"
(Get-ClusterNetwork| ?{$_.Address -eq "10.0.2.0"}).name = "Cluster1"
(Get-ClusterNetwork| ?{$_.Address -eq "10.0.3.0"}).name = "Cluster2"

#Update Cluster Disk Names to Match Function
(Get-ClusterGroup -Name "Cluster group"| Get-ClusterResource |?{$_.ResourceType -eq "Physical Disk"}).name = "Witness"
(Get-ClusterGroup "available storage"| Get-ClusterResource).name = "CSV1"

#Configure the CSV
Get-ClusterResource -Name "CSV1"| Add-ClusterSharedVolume
Rename-Item -Path C:\ClusterStorage\Volume1 -NewName CSV1

Cluster Network Roles

In our example we did not need to change anything other than the cluster networks’ names, thanks to the excellent work the Windows Failover Clustering team has done on the cluster creation wizard. Each cluster network is automatically configured with the correct cluster role and metric. These settings can be used to fine-tune cluster network behavior, but in most cases are best left in their default configuration.
We can use Get-ClusterNetwork to inspect the values for role and metric:
PS C:\> Get-ClusterNetwork -Cluster HVC0 | Format-Table Name, Role, Metric, AutoMetric -AutoSize

Name       Role Metric AutoMetric
----       ---- ------ ----------
Cluster1      1  30384       True
Cluster2      1  39841       True
iSCSI         0  79842       True
Management    3  79841       True

We will connect to the cluster network name using the role 3 (cluster and client) network. The Cluster1 and Cluster2 networks are role 1 and will be used for cluster communications. iSCSI communication was detected on the storage network, so it was created as a role 0 network, blocked from use by the cluster.
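If a network is ever misclassified, the role can be changed directly on the cluster network object; a minimal sketch, where the network name is illustrative (0 = no cluster access, 1 = cluster only, 3 = cluster and client):

```powershell
# Block the storage network from cluster use (role 0)
(Get-ClusterNetwork -Cluster HVC0 -Name "iSCSI").Role = 0
```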

We will do a deep dive on cluster networks in another video.

Check out the other posts in this series!

Building a Hyper-V Cluster – iSCSI Storage – Part 3/5

Configuring iSCSI storage for a Hyper-V Cluster

In this video we use iSCSI target server built in to Server 2012R2 to present shared storage to our cluster nodes.

Install and Configure iSCSI Target

We must first install the FS-iSCSITarget-Server feature. Once this is installed we will create a target on our storage server. Next we will create virtual disks for the witness disk and the CSV. These virtual disks will be attached to the target and presented to our cluster nodes as LUNs. Finally, we will configure the target to allow access from the IQNs of our Hyper-V host nodes.  We can discover the IQN of the Hyper-V hosts by running the command (Get-InitiatorPort).NodeAddress on the cluster nodes.

PowerShell Commands

#Install target server
Install-WindowsFeature -Name FS-iSCSITarget-Server, iSCSITarget-VSS-VDS -IncludeManagementTools -Restart
#create target
New-IscsiServerTarget -TargetName HyperVCluster
New-IscsiVirtualDisk -Path c:\HVC1-W.vhdx -SizeBytes 1GB
New-IscsiVirtualDisk -Path c:\HVC1-CSV.vhdx -SizeBytes 50GB
Add-IscsiVirtualDiskTargetMapping -TargetName HyperVCluster -Path C:\HVC1-W.vhdx
Add-IscsiVirtualDiskTargetMapping -TargetName HyperVCluster -Path C:\HVC1-CSV.vhdx
#(Get-InitiatorPort).NodeAddress
#Allow nodes to access target LUNs
Set-IscsiServerTarget -TargetName HyperVCluster -InitiatorId @("IQN:iqn.1991-05.com.microsoft:2012r2-node1.demo.lcl","IQN:iqn.1991-05.com.microsoft:2012r2-node2.demo.lcl")

Connect Nodes to iSCSI Target

Once the target is created and configured, we need to attach the iSCSI initiator on each node to the storage. We will use MPIO to ensure the best performance and availability of storage.  When we enable the MS DSM to claim all iSCSI LUNs, we must reboot the node for the setting to take effect. MPIO is utilized by creating a persistent connection to the target for each data NIC on the target server and from all iSCSI initiator NICs on our Hyper-V server.  Because our Hyper-V servers are using converged networking, we only have 1 iSCSI NIC.  In our example, resiliency is provided by the LBFO team we created in the last video.

PowerShell Commands

Set-Service -Name msiscsi -StartupType Automatic
Start-Service msiscsi
#reboot required after the claim
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.107
$target = Get-IscsiTarget -NodeAddress *HyperVCluster*
$target| Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 192.168.1.21 -TargetPortalAddress 10.0.1.10
$target| Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 192.168.1.21 -TargetPortalAddress 10.0.1.11

Prepare the LUNs for use in the Cluster

Finally, once storage is available from both nodes, we must bring the LUNs online, then initialize and format them so they are ready for import into the cluster. This is done from only one node, as cluster disks must only ever be online on one node at a time.

PowerShell Commands

#Prep Drives from one node
$Disk = get-disk|?{($_.size -eq 1GB) -or ($_.size -eq 50GB)}
$disk|Initialize-Disk -PartitionStyle GPT
$disk|New-Partition -UseMaximumSize -AssignDriveLetter| Format-Volume -Confirm:$false

Resources

What’s New for iSCSI Target Server in Windows Server 2012 R2
Storage Team Blog – iSCSI Target Server in Windows Server 2012 R2
Storage Team Blog – iSCSI Target Storage (VDS/VSS) Provider
iSCSI Target Cmdlets in Windows PowerShell
MultiPath I/O (MPIO) Cmdlets in Windows PowerShell
Bruce Langworthy – MSFT: Managing iSCSI Initiator connections with Windows PowerShell on Windows Server 2012

Check out the other posts in this series!

Building a Hyper-V Cluster – Configuring Networks – Part 2/5

PowerShell Commands

# New Network LBFO Team
$NICname = Get-NetAdapter | %{$_.name}
New-NetLbfoTeam -Name LBFOTeam -TeamMembers $NICname -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
# Attach new VSwitch to LBFO team
New-VMSwitch -Name HVSwitch -NetAdapterName LBFOTeam -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create vNICs on VSwitch for parent OS
# Management vNIC
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Management)" -NewName Management
#In this lab we are using one vLAN, typically each subnet gets its own vlan
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Management -Access -VlanId 10
New-NetIPAddress -InterfaceAlias Management -IPAddress 192.168.0.101 -PrefixLength 24 -DefaultGateway 192.168.0.1 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Management -IPAddress 192.168.0.102 -PrefixLength 24 -DefaultGateway 192.168.0.1 -Confirm:$false
Set-DnsClientServerAddress -InterfaceAlias Management -ServerAddresses 192.168.0.211, 192.168.0.212

# Cluster1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster1 -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Cluster1)" -NewName Cluster1
#In this lab we are using one vLAN, typically each subnet gets its own vlan
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster1 -Access -VlanId 2
New-NetIPAddress -InterfaceAlias Cluster1 -IPAddress 10.0.2.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Cluster1 -IPAddress 10.0.2.21 -PrefixLength 24 -Confirm:$false

# Cluster2 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster2 -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Cluster2)" -NewName Cluster2
#In this lab we are using one vLAN, typically each subnet gets its own vlan
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster2 -Access -VlanId 3
New-NetIPAddress -InterfaceAlias Cluster2 -IPAddress 10.0.3.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Cluster2 -IPAddress 10.0.3.21 -PrefixLength 24 -Confirm:$false

# iSCSI vNIC
Add-VMNetworkAdapter -ManagementOS -Name iSCSI -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (iSCSI)" -NewName iSCSI
#In this lab we are using one vLAN, typically each subnet gets its own vlan
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName iSCSI -Access -VlanId 1
New-NetIPAddress -InterfaceAlias iSCSI -IPAddress 10.0.1.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias iSCSI -IPAddress 10.0.1.21 -PrefixLength 24 -Confirm:$false

Cluster Network Roles

In the video we leverage PowerShell to deploy converged networking to our Hyper-V hosts.  We have 2 physical network adapters to work with, but need to implement all of the network roles in the table below so that we can deploy a cluster per best practices.  To accomplish this we create a team and attach a virtual switch.  This vSwitch is shared by the host and the VMs.  The host is given 4 vNICs on the virtual switch to accommodate the various types of network traffic (Storage, Cluster1, Cluster2, Management).  The failover cluster creation process will automatically detect iSCSI traffic on our storage network and set that network for no cluster access.  It will also detect the default gateway on the management interface and set that network for cluster and client use; this is the network where we will create our cluster network name when the cluster is formed.  The remaining two networks are non-routed and are used for internal cluster communication.  Cluster communications, CSV traffic and the cluster heartbeat will use BOTH of these networks equally. One of the networks will be used for live migration traffic. In 2012R2 we have the option of using SMB3 for live migration to force the cluster to use both cluster-only networks, if we prefer that to the default compression option.  In the video we don’t care which of the cluster networks is preferred for live migration, so we simply name our networks Cluster1 and Cluster2.
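The SMB3 live migration option mentioned above is set per host; a minimal sketch:

```powershell
# Use SMB (and thus SMB Multichannel across both cluster-only networks) for live migration,
# instead of the 2012R2 default of compression
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```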

We break the traffic into 4 vNICs rather than using just one because this helps ensure network traffic efficiently utilizes the hardware.  By default the management vNIC will be using VMQ, and because we created the LBFO team using Hyper-V Port, the vNICs will be balanced across the physical NICs in the team.  Because the network roles are broken out into separate vNICs, we can also later apply QoS policies at the vNIC level to ensure important traffic has first access to the network.
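Because the vSwitch was created with -MinimumBandwidthMode Weight, that per-vNIC QoS can be sketched as follows; the weight values are illustrative, not a recommendation:

```powershell
# Reserve relative shares of switch bandwidth for each parent-OS vNIC (weights are illustrative)
Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name iSCSI      -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name Cluster1   -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name Cluster2   -MinimumBandwidthWeight 10
```

Weights are relative shares, not hard caps, so a vNIC can still burst above its share when the network is idle.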

When using converged networks, the multiple vNICs provide the ability to fine-tune the quality of service for each type of traffic, while high availability is provided by the LBFO team they are created on. If we had unlimited physical adapters, we would create one team for Management and a separate team for the VM Access networks. We would use two adapters configured with MPIO for our storage network.  The remaining two cluster networks would each be configured on a single physical adapter, as failover clustering will automatically fail cluster communication over to another cluster network in the event of a failure.  Given your number of available physical adapters, you may choose from many possible configurations.  In doing so, keep the network traffic and access requirements outlined below in mind.

Storage
  • Cluster role: None
  • Purpose: Access storage through iSCSI or Fibre Channel (Fibre Channel does not need a network adapter).
  • Network traffic requirements: High bandwidth and low latency.
  • Recommended network access: Usually dedicated and private access. Refer to your storage vendor for guidelines.

Virtual machine access
  • Cluster role: N/A
  • Purpose: Workloads running on virtual machines usually require external network connectivity to service client requests.
  • Network traffic requirements: Varies.
  • Recommended network access: Public access, which could be teamed for link aggregation or to fail over the cluster.

Management
  • Cluster role: Cluster and Client
  • Purpose: Managing the Hyper-V management operating system. This network is used by Hyper-V Manager or System Center Virtual Machine Manager (VMM).
  • Network traffic requirements: Low bandwidth.
  • Recommended network access: Public access, which could be teamed to fail over the cluster.

Cluster and Cluster Shared Volumes (Cluster 1)
  • Cluster role: Cluster Only
  • Purpose: Preferred network used by the cluster for communications to maintain cluster health. Also used by Cluster Shared Volumes to send data between owner and non-owner nodes. If storage access is interrupted, this network is used to access the Cluster Shared Volumes or to maintain and back them up. The cluster should have access to more than one network for communication to ensure the cluster is highly available.
  • Network traffic requirements: Usually low bandwidth and low latency. Occasionally, high bandwidth.
  • Recommended network access: Private access.

Live migration (Cluster 2)
  • Cluster role: Cluster Only
  • Purpose: Transfer virtual machine memory and state.
  • Network traffic requirements: High bandwidth and low latency during migrations.
  • Recommended network access: Private access.
Table adapted from Hyper-V: Live Migration Network Configuration Guide
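Once the cluster is formed, the roles the cluster automatically assigned to each network can be inspected, and overridden if needed, with the FailoverClusters module. A minimal sketch (the network name `Cluster1` is an assumption based on the naming used in the video):

```powershell
# List each cluster network and its assigned role
# (0 = None, 1 = Cluster Only, 3 = Cluster and Client).
Get-ClusterNetwork | Format-Table Name, Role, Address

# Example: force the network named "Cluster1" to carry cluster-only traffic.
(Get-ClusterNetwork -Name 'Cluster1').Role = 1
```

This is useful for verifying that the iSCSI network really was set to no cluster access and that the management network allows client access.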

Resources

Networking Overview for 2012/R2
NIC Teaming Overview 2012/R2
Windows PowerShell Cmdlets for Networking 2012/R2

Check out the other post in this series!

Building a Hyper-V Cluster – Installing Roles & Features – Part 1/5

 PowerShell Commands

# Install the roles and features needed on each Hyper-V cluster node, then restart
Install-WindowsFeature -Name Hyper-V, Multipath-IO, Failover-Clustering -IncludeManagementTools -Restart
# Verify by listing only the features that are installed
Get-WindowsFeature | Where-Object { $_.Installed -eq $true }

Hardware Requirements

Hyper-V requires a 64-bit processor that includes the following:

  • Hardware-assisted virtualization. This is available in processors that include a virtualization option—specifically processors with Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) technology.
  • Hardware-enforced Data Execution Prevention (DEP) must be available and enabled. Specifically, you must enable Intel XD bit (execute disable bit) or AMD NX bit (no execute bit).
  • SLAT (Second Level Address Translation) is recommended for performance improvements and required for RemoteFX vGPUs.
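A quick way to check whether a host's processor exposes these capabilities is to query CIM (the `VirtualizationFirmwareEnabled` and `SecondLevelAddressTranslationExtensions` properties are available on Windows 8 / Server 2012 and later):

```powershell
# VirtualizationFirmwareEnabled: hardware-assisted virtualization enabled in firmware
# SecondLevelAddressTranslationExtensions: SLAT support
Get-CimInstance -ClassName Win32_Processor |
    Select-Object Name, VirtualizationFirmwareEnabled,
        SecondLevelAddressTranslationExtensions
```

Note that `VirtualizationFirmwareEnabled` may report False once the Hyper-V role is running, since the hypervisor itself owns the virtualization extensions at that point.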

Windows Server Editions Supporting Hyper-V

Edition and VM licenses included:
  • 2012R2 Hyper-V Server: 0
  • 2012R2 Standard: 2
  • 2012R2 Datacenter: Unlimited

Check out the other post in this series!

Building a Hyper-V Cluster – Part 0/5

In this Quick Start Series we show you how quickly and easily you can set up your own Hyper-V cluster using Server 2012R2 and the iSCSI Target Server role for shared storage.  In each video we show you how to configure the server using the GUI, and then we show how to do the same configuration steps in PowerShell.
To reproduce the environment in this video series you need:
  • 2 physical computers for Hyper-V hosts
  • 1 VM/Physical server for iSCSI target software
  • Any Windows Server 2012R2 SKU (Core or Full) installed on each system
  • Network connectivity with at least 1 NIC between all the systems