Tag Archives: NIC Teaming

Building a Hyper-V Cluster – SCVMM Configuring Networks and Logical Switches – Part 2/6

Configuring Networks and Logical Switches

In part two of the video series we go over how to implement logical networking in System Center Virtual Machine Manager 2012R2 (SCVMM). First we provide an overview of logical networking and why it is a good idea. We then talk about each of the fabric components necessary to implement logical networking. Finally, we implement logical networking in the SCVMM GUI, then show the process for implementation with PowerShell.

Logical Networking Overview

Logical networks provide a way for administrators to represent the physical network configuration in the virtual environment. This enables many features, such as delegating access to network segments to specific user roles. It also eases the deployment of converged networking and can help ensure all of your Hyper-V hosts have identical network configurations. If someone makes a change to the network configuration in Hyper-V Manager or Failover Cluster Manager, the host will be flagged as not compliant in SCVMM. The network configuration deployed via logical networking resides on the Hyper-V hosts and is not dependent on SCVMM to stay online. This configuration survives reboots even if SCVMM is offline.

When deploying logical networks, the management IP must be available the entire time the switches are deployed. This can be challenging when a system only has two NICs, as the management VLAN must be available as both tagged and untagged (native). Systems using more than two adapters are easier to configure, as the management interface can be deployed locally before the system is imported into SCVMM.

Some of the logical networking features can be used when importing Hyper-V hosts with an existing virtual switch. SCVMM will detect existing configurations as ‘Standard Switches’. The administrator must manually select the logical networks in the properties of the host hardware in order to use VM networks.

Networking Concepts

This diagram shows how all of the fabric components in SCVMM relate to one another.
[Diagram: Logical Network Components]

VM Network

This component allows you to assign a network segment (VLAN) to a virtual adapter. It is created under ‘VMs and Services’ rather than Fabric > Networking. One VM network will typically be associated with one network segment. This gives the network segment a friendly name so that users do not need to know subnets or VLAN IDs. It can also have permissions assigned so that only certain users can select the network segment for their virtual machines.
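As a sketch of how this might look in PowerShell (the names, subnet, and VLAN ID here are assumptions for illustration), a VM network with VLAN isolation can be created from an existing logical network:

# Assumed names and values for illustration
$logicalNet = Get-SCLogicalNetwork -Name "Prod"
$vmNetwork  = New-SCVMNetwork -Name "Prod - Las Vegas" -LogicalNetwork $logicalNet -IsolationType "VLANNetwork"
$subnetVlan = New-SCSubnetVLan -Subnet "10.0.20.0/24" -VLanID 20
New-SCVMSubnet -Name "Prod LV" -VMNetwork $vmNetwork -SubnetVLan $subnetVlan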

Logical Network

Logical networks represent a group of network segments. Logical networks may group network segments in many ways:

  • A single segment or VLAN
  • All Production segments in all sites
  • All segments in a single site

Logical Network – Network Site

Logical networks have a subcomponent called a network site. A network site can be used to associate network segments with host groups. Multiple sites can exist in a single logical network. Network sites are primarily used to represent geographies or unique areas such as a DMZ.
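A minimal sketch of creating a logical network with a network site scoped to a host group (the names, subnet, and VLAN ID are assumptions):

$logicalNet = New-SCLogicalNetwork -Name "Prod"
$subnetVlan = New-SCSubnetVLan -Subnet "10.0.20.0/24" -VLanID 20
$hostGroup  = Get-SCVMHostGroup -Name "Las Vegas"
# The network site associates the segment with the host group
New-SCLogicalNetworkDefinition -Name "Prod - Las Vegas" -LogicalNetwork $logicalNet -VMHostGroup $hostGroup -SubnetVLan $subnetVlan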

Logical Network – Network Site – Subnet / VLAN

Subnets and VLANs can be defined within the network site. Subnets/VLANs are used to associate one or more network segments within a site. You do not have to populate the subnet field in all cases.

IP Pool

This component is used to associate a range of IP addresses with a network segment. VMM can then assign these addresses statically to VMs or Hyper-V hosts.
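A hedged example of carving an IP pool out of a network site (the range, gateway, and DNS values are assumptions):

$siteDef = Get-SCLogicalNetworkDefinition -Name "Prod - Las Vegas"
$gateway = New-SCDefaultGateway -IPAddress "10.0.20.1" -Automatic
# VMM hands addresses from this range to VMs and hosts on the segment
New-SCStaticIPAddressPool -Name "Prod LV Pool" -LogicalNetworkDefinition $siteDef -Subnet "10.0.20.0/24" -IPAddressRangeStart "10.0.20.50" -IPAddressRangeEnd "10.0.20.99" -DefaultGateway $gateway -DNSServer "10.0.20.10"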

Port Profile

Two types of port profiles exist: ‘Uplink’ and ‘Virtual Adapter’. Uplink port profiles are used to represent the network segments (VLANs) configured on the physical switch port to which a Hyper-V host is connected. They are also used to define the teaming and load-balancing modes for a host.

Virtual Adapter port profiles provide a way to create a collection of settings pertaining to virtual adapters. These profiles can define settings such as network optimization, security, and QoS. Virtual adapter port profiles are assigned to virtual adapters in VMs and Hyper-V hosts.
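As a sketch, an uplink port profile that pins the teaming and load-balancing modes recommended elsewhere in this series might be created like this (the profile name and site are assumptions):

$siteDef = Get-SCLogicalNetworkDefinition -Name "Prod - Las Vegas"
# Teaming mode and load-balancing algorithm are baked into the uplink profile
New-SCNativeUplinkPortProfile -Name "Uplink - Las Vegas" -LogicalNetworkDefinition $siteDef -LBFOLoadBalancingAlgorithm "HyperVPort" -LBFOTeamMode "SwitchIndependent" -EnableNetworkVirtualization $false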

Logical Switch

The logical switch component is a vSwitch deployed by SCVMM employing a network topology and configuration defined by the components listed above. It is not possible to import existing Hyper-V network configurations into SCVMM as logical switches. Both the LBFO team and the vSwitch must be created by SCVMM. Forcing deployment through SCVMM ensures configuration uniformity among the hosts where the switch is deployed.

A logical switch will have an association with one or more virtual adapter port profiles. It will also have at least one uplink port profile. When deploying a logical switch, one uplink port profile is selected, and this determines the teaming and load-balancing modes for the vSwitch. Logical switches are the last network fabric component deployed, as they depend on the other fabric components.
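A rough sketch of tying the pieces together into a logical switch (the names are assumptions, and options beyond those shown will vary by environment):

$uplink = Get-SCNativeUplinkPortProfile -Name "Uplink - Las Vegas"
$switch = New-SCLogicalSwitch -Name "VM Switch" -EnableSriov $false -MinimumBandwidthMode "Weight"
# Associate the uplink port profile with the logical switch
New-SCUplinkPortProfileSet -Name "Uplink - Las Vegas" -LogicalSwitch $switch -NativeUplinkPortProfile $uplink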

Example Configuration

[Diagram: Example Logical Network]
In the video we deploy a sample configuration with two data center sites. These sites each have several network segments. The segments are grouped into 3 logical networks: Dev, Backup, and Prod. Dev is only in Las Vegas, while Prod is in both datacenters. Prod uses a different VLAN ID in each data center. Backup is a single stretched VLAN. Two uplink port profiles are created to describe the two possible switch port configurations for the Hyper-V hosts. In this case the switch ports are uniformly configured within a site, so one port profile is required for the Seattle datacenter and a second for the Las Vegas datacenter. These port profiles are used to create two logical switches: Host and Virtual Machine. In our example we use separate physical adapters for the host traffic and the VM traffic.

References

TechNet – Configuring Logical Networking in VMM Overview
TechNet – Configuring VM Networks in VMM Illustrated Overview
MSDN Blog – Building a teamed virtual switch for Hyper-V from SCVMM

Check out the other videos in this series!

Hyper-V dVMQ

Deep Dive: Configuring dVMQ in Hyper-V

Virtual Machine Queue (VMQ) is a mechanism for mapping physical queues in a NIC to the virtual NIC in a VM partition (parent or guest). This mapping makes the handling of network traffic more efficient. The increased efficiency results in less CPU time in the parent partition and reduced latency of network traffic. Also, without VMQ, traffic for a vSwitch on a particular network interface is all handled by a single CPU core. This limits total throughput on a 10Gb interface to ~2.5-4.5 Gbit/s (results will depend on the speed of the core and the nature of the traffic). VMQ is especially helpful for workloads that process a large amount of traffic, such as backup or deployment servers. For dVMQ to work with RSS, the parent partition must be running Server 2012R2; otherwise RSS cannot coexist with VMQ.
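To see how queues are exposed and assigned on a host, the built-in VMQ cmdlets can be used (output will vary by NIC):

Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors, NumberOfReceiveQueues
Get-NetAdapterVmqQueue   # shows which processor and vNIC each allocated queue is bound to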

VMQs are a finite resource. A VMQ is allocated when a virtual machine is powered on. A queue will be assigned to each vNIC with VMQ enabled until all of the VMQs are exhausted. That assignment will remain in place until the VM is powered off or migrated to another Hyper-V node. If you have more vNICs in your environment than VMQs on your physical adapter, then you should only enable VMQ on the vNICs that will be handling the most traffic.
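One way to control this is the per-vNIC VMQ weight; a weight of 0 excludes a vNIC from queue assignment (the VM names here are assumptions):

Set-VMNetworkAdapter -VMName "BackupServer" -VmqWeight 100   # default weight; eligible for a queue
Set-VMNetworkAdapter -VMName "LowTrafficVM" -VmqWeight 0     # never assigned a VMQ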

Static VMQ

[Image: NovsVMQ]
This image represents a Hyper-V host configured without VMQ in place. All network traffic for all the VMs is handled by a single core. With static VMQ (available in 2008R2), a VMQ is assigned to a specific CPU core and stays on that core regardless of workload.

Dynamic VMQ

[Image: dVMQ]
This image introduces both Dynamic Virtual Machine Queue (dVMQ) and the load balancing mode for NIC teaming. These features are new to Server 2012. dVMQ is very similar to VMQ with one major difference: dVMQ will scale the number of CPU cores used to handle the VMQs across a pool of CPU cores. When network workloads are light, all the dVMQs are handled by a single CPU core, but as the network workload increases, so too does the number of CPU cores used. With dVMQ in 2012, each queue can only use one CPU core at a time, and a vNIC can only have one VMQ assigned to it.

Sum Mode/Min Mode

In our video we recommend Hyper-V Port AND Switch Independent for a Load Balancing and Failover (LBFO) team configuration on switches supporting Hyper-V workloads. This load balancing mode and teaming mode will put the vSwitch in Sum mode, which means we get the sum of all the VMQs from the NICs in the LBFO team. In the case of the left image above we have 2 NICs in the team, each with 2 VMQs. With the team in Sum mode we have a total of 4 VMQs to allocate to vNICs. If we use AddressHash OR a Switch Dependent configuration on the team, it will be placed in Min mode. In the right image above, the same hardware now only offers 2 VMQs for vNICs. This is because inbound traffic for a particular vNIC may arrive on any network interface in the team. Min mode may be a desirable configuration if you have very few vNICs on a vSwitch (a vNIC count equal to or less than the fewest VMQs on any NIC in the team).
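A quick sketch of building a team in Sum mode and totaling the queues available to vNICs (the NIC names are assumptions):

# Switch Independent + Hyper-V Port puts the vSwitch in Sum mode
New-NetLbfoTeam -Name VMTeam -TeamMembers NIC1,NIC2 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
# In Sum mode the vSwitch can allocate the sum of the members' queues
(Get-NetAdapterVmq -Name NIC1,NIC2 | Measure-Object -Property NumberOfReceiveQueues -Sum).Sum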

Virtual Receive Side Scaling

Server 2012R2 introduces Virtual Receive Side Scaling (vRSS). This feature works with VMQ to distribute the CPU workload of receive traffic across multiple CPU cores in the VM. This effectively eliminates the CPU core bottleneck we experience with a single vNIC. To take full advantage of this feature both the host and guest need to be running 2012R2. Enabling vRSS does come with the cost of extra CPU load in the VM and parent partition. For this reason, vRSS should only be enabled on vNICs that will be exceeding 2.5 Gbit/s on a regular basis.
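vRSS is controlled with the standard RSS cmdlets inside the guest; a minimal check-and-enable sketch (the adapter name is an assumption):

# Run inside the 2012R2 guest
Get-NetAdapterRss -Name "Ethernet"      # confirm RSS state on the vNIC
Enable-NetAdapterRss -Name "Ethernet"   # enable vRSS for this adapter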

Base and Max CPU

Base and Max CPU properties are used to configure which CPU cores will be used by VMQ. The base processor is the first core in the group and max is the size of the group. For example, with Hyper-Threading disabled, base=2 max=4 would assign cores 2-5. VMQ will not leverage Hyper-Threading (HT). If HT is enabled then only even numbered cores will be used. For example: HT enabled, base=2 max=4 would assign even numbered cores 2-8. Whenever possible it is best to choose a base value greater than 0 (or 1 in the case of HT). Creating CPU bottlenecks on core 0 has caused performance issues in some implementations.
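For example, the HT-enabled assignment above could be applied like this (the NIC name is an assumption):

# HT enabled: only even numbered cores are used, so base=2 max=4 covers cores 2-8
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 4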

Requirements and Configuration for VMQ

The following are required to use VMQ:
-Server 2008R2 (static VMQ), Server 2012 (dVMQ), or Server 2012R2 (dVMQ+vRSS)
-Physical NICs must support VMQ
-BelowTenGigVmqEnabled = 1 for 1GB NICs (10GB NICs are auto enabled)
Follow these steps from the video to enable VMQ (a PowerShell sketch of these steps follows the list)
0. Enable VMQ for 1GB if required
–HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\VMSMP\Parameters\BelowTenGigVmqEnabled = 1
1. Install the latest NIC driver/firmware
2. Enable VMQ in the driver for the NIC (Process will vary by NIC model and manufacturer)
3. Determine values for Base and Max CPU based on hardware configuration
4. Assign values for Base and Max CPU
5. Configure VMs
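The same steps might look like this in PowerShell (the NIC name and Base/Max values are assumptions, and driver-level VMQ enablement in step 2 still varies by vendor):

# Step 0: enable VMQ for sub-10Gb NICs (a reboot or driver restart may be required to take effect)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\services\VMSMP\Parameters" -Name BelowTenGigVmqEnabled -Value 1
# Step 2: enable VMQ on the adapter
Enable-NetAdapterVmq -Name "NIC1"
# Step 4: assign Base and Max CPU, then verify
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 4
Get-NetAdapterVmq -Name "NIC1"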

Recommendations for VMQ/dVMQ/vRSS

-Use Switch Independent + Hyper-V Port to ensure the vSwitch is in Sum mode
-Always assign a base CPU other than CPU0 to ensure best performance and resiliency
-Remember that when assigning Base/Max CPU with Hyper-Threading, only even numbered cores are used
-Multiplexor adapters will show a Base:Max of 0:0; do not change this item
-Configure Base and Max CPU for each NIC with as little overlap as possible
-Only assign Max Processor values of 1, 2, 4, or 8
–It is OK to have the max processor range extend past the last CPU core or the number of VMQs on the NIC

Troubleshooting VMQ

Here are a few things we have seen in the field when supporting VMQ:

  • Most issues with VMQ are resolved by updating to the latest version of the NIC driver!
  • VMQ appears enabled but is showing 0 queues. This may even only impact a single port on a multiport NIC.
    • *RssOrVmqPreference = 1 Must be set on all NICs that will leverage VMQ (Follow this Link for more information)
    •  HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318\[GUID of NIC Port]\*RssOrVmqPreference = 1
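A hedged sketch for checking and setting the keyword on a single port (the 0007 subkey is a placeholder; enumerate the class key and match DriverDesc to identify the right one first):

# Placeholder subkey: substitute the subkey for the affected NIC port
$port = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007"
(Get-ItemProperty -Path $port).DriverDesc   # verify this is the intended adapter
Set-ItemProperty -Path $port -Name '*RssOrVmqPreference' -Value 1 -Type String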

If you have an issue that you have experienced in your environment not listed here let me know so I can add it to the list!

PowerShell Code to Auto Configure VMQ Base/Max Processor

ConfigureVMQ.ps1

$Teams = Get-NetLbfoTeam
$proc  = Get-WmiObject -Class win32_processor
$cores = $proc | Measure-Object -Property NumberOfCores -Sum | select -ExpandProperty Sum
$LPs   = $proc | Measure-Object -Property NumberOfLogicalProcessors -Sum | select -ExpandProperty Sum
$HT    = $LPs -ne $cores   # Hyper-Threading is on when logical processors outnumber cores
function SetVMQsettings ($NIC, $base, $max){
    # Uncomment the next line to apply the settings; by default the script only reports them
    #$NIC | Set-NetAdapterVmq -BaseProcessorNumber $base -MaxProcessors $max
    Write-Host "$($NIC.Name):: Proc:$base, Max:$max"
}
#$LPs = 4      #testing var
#$HT  = $false #testing var
foreach ($team in $Teams){
    $VmqAdapters = Get-NetAdapterVmq -Name ($team.Members)
    #Create settings
    $VMQindex = 0
    foreach ($VmqAdapter in $VmqAdapters){
        $VmqAdapterVMQs = $VmqAdapter.NumberOfReceiveQueues
        #$VmqAdapterVMQs = 2 #testing var
        if ($VMQindex -eq 0){ # first team NIC
            # Base is core 1 (core 2 with HT); max is the lesser of remaining cores or queue count
            $base = 1 + [int]$HT
            $max  = ($LPs/(1+$HT)-1), $VmqAdapterVMQs | sort | select -Index 0
            SetVMQsettings -NIC $VmqAdapter -base $base -max $max
        }
        else{ # all other NICs, excluding the first team NIC
            if ($VmqAdapterVMQs -gt ($LPs/(1+$HT))){ # queues exceed core count, so just start at base+1
                $base = 1 + [int]$HT
                $max  = ($LPs/(1+$HT)-1), $VmqAdapterVMQs | sort | select -Index 0
                SetVMQsettings -NIC $VmqAdapter -base $base -max $max
            }
            else{ # more cores than queues, so balancing across cores is possible
                # Stagger each NIC's base processor so the queue ranges overlap as little as possible
                $StepSize = [int]((($LPs/(1+$HT))-$VmqAdapterVMQs-1)/($VmqAdapters.Count-1))*$VMQindex+1
                $base = $StepSize * (1+$HT)
                $max  = ($LPs/(1+$HT)-1), $VmqAdapterVMQs | sort | select -Index 0
                SetVMQsettings -NIC $VmqAdapter -base $base -max $max
            }
        }
        $VMQindex++
    }
}

Resources

TechNet Networking Blog: Deep Dive VMQ Part 1, 2, 3

Building a Hyper-V Cluster – Configuring Networks – Part 2/5

PowerShell Commands

# New Network LBFO Team
$NICname = Get-NetAdapter | %{$_.name}
New-NetLbfoTeam -Name LBFOTeam -TeamMembers $NICname -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
# Attach new VSwitch to LBFO team
New-VMSwitch -Name HVSwitch -NetAdapterName LBFOTeam -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create vNICs on VSwitch for parent OS
# Management vNIC
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Management)" -NewName Management
#In this lab we are using one VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Management -Access -VlanId 10
New-NetIPAddress -InterfaceAlias Management -IPAddress 192.168.0.101 -PrefixLength 24 -DefaultGateway 192.168.0.1 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Management -IPAddress 192.168.0.102 -PrefixLength 24 -DefaultGateway 192.168.0.1 -Confirm:$false
Set-DnsClientServerAddress -InterfaceAlias Management -ServerAddresses 192.168.0.211, 192.168.0.212

# Cluster1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster1 -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Cluster1)" -NewName Cluster1
#In this lab we are using one VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster1 -Access -VlanId 2
New-NetIPAddress -InterfaceAlias Cluster1 -IPAddress 10.0.2.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Cluster1 -IPAddress 10.0.2.21 -PrefixLength 24 -Confirm:$false

# Cluster2 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster2 -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Cluster2)" -NewName Cluster2
#In this lab we are using one VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster2 -Access -VlanId 3
New-NetIPAddress -InterfaceAlias Cluster2 -IPAddress 10.0.3.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Cluster2 -IPAddress 10.0.3.21 -PrefixLength 24 -Confirm:$false

# iSCSI vNIC
Add-VMNetworkAdapter -ManagementOS -Name iSCSI -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (iSCSI)" -NewName iSCSI
#In this lab we are using one VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName iSCSI -Access -VlanId 1
New-NetIPAddress -InterfaceAlias iSCSI -IPAddress 10.0.1.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias iSCSI -IPAddress 10.0.1.21 -PrefixLength 24 -Confirm:$false

Cluster Network Roles

In the video we leverage PowerShell to deploy converged networking to our Hyper-V hosts. We have 2 physical network adapters to work with, but need to implement all of the network roles in the table below so that we will be able to deploy a cluster per best practices. To accomplish this we create a team and attach a virtual switch. This vSwitch is shared by the host and the VMs. The host is given 4 vNICs on the virtual switch to accommodate the various types of network traffic (Storage, Cluster1, Cluster2, Management). The failover cluster creation process will automatically detect iSCSI traffic on our storage network and set that network for no cluster access. It will also detect the default gateway on the management interface and set that network for cluster and client use. This is the network where we will create our cluster network name when the cluster is formed. The remaining two networks are non-routed and are used for internal cluster communication. Cluster communications, CSV traffic, and cluster heartbeats will use BOTH of these networks equally. One of the networks will be used for live migration traffic. In 2012R2 we have the option of using SMB3 for Live Migration to force the cluster to use both Cluster Only networks, if we prefer that to the default compression option. In the video we don’t care which of the cluster networks is preferred for live migration, so we simply name our networks Cluster1 and Cluster2.
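After the cluster is formed, these role assignments can be verified, or corrected, from PowerShell (the network names are whatever cluster creation detected):

# Role values: 0 = None (storage), 1 = Cluster Only, 3 = Cluster and Client
Get-ClusterNetwork | Format-Table Name, Role, Address
(Get-ClusterNetwork -Name "Cluster Network 1").Role = 1   # example: force a network to Cluster Only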

We break the traffic into 4 vNICs rather than just using one because this will help us ensure network traffic is efficiently utilizing the hardware. By default the management vNIC will be using VMQ. Because we created the LBFO team using Hyper-V Port, the vNICs will be balanced across the physical NICs in the team. Because the network roles are broken out into separate vNICs, we can also later apply QoS policies at the vNIC level to ensure important traffic has first access to the network.
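Because the vSwitch was created with -MinimumBandwidthMode Weight, per-vNIC weights are one way to express that QoS; the values below are illustrative assumptions, not recommendations:

Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name Cluster1 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name Cluster2 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name iSCSI -MinimumBandwidthWeight 30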

When using converged networks, the multiple vNICs provide the ability to fine-tune the quality of service for each type of traffic, while high availability is provided by the LBFO team they are created on. If we had unlimited physical adapters, we would create a team for Management and a separate team for the VM access networks. We would use two adapters configured with MPIO for our storage network. The remaining two cluster networks would each be configured on a single physical adapter, as failover clustering will automatically fail over cluster communication between cluster networks in the event of a failure. Given your number of available physical adapters, you may choose from many different possible configurations. In doing so, keep the network traffic and access requirements outlined below in mind.

Network access type: Storage
  • Cluster role: None
  • Purpose: Access storage through iSCSI or Fibre Channel (Fibre Channel does not need a network adapter).
  • Traffic requirements: High bandwidth and low latency.
  • Recommended access: Usually, dedicated and private access. Refer to your storage vendor for guidelines.

Network access type: Virtual machine access
  • Cluster role: N/A
  • Purpose: Workloads running on virtual machines usually require external network connectivity to service client requests.
  • Traffic requirements: Varies.
  • Recommended access: Public access, which could be teamed for link aggregation or to fail over the cluster.

Network access type: Management
  • Cluster role: Cluster and Client
  • Purpose: Managing the Hyper-V management operating system. This network is used by Hyper-V Manager or System Center Virtual Machine Manager (VMM).
  • Traffic requirements: Low bandwidth.
  • Recommended access: Public access, which could be teamed to fail over the cluster.

Network access type: Cluster and Cluster Shared Volumes (Cluster 1)
  • Cluster role: Cluster Only
  • Purpose: Preferred network used by the cluster for communications to maintain cluster health. Also used by Cluster Shared Volumes to send data between owner and non-owner nodes. If storage access is interrupted, this network is used to access the Cluster Shared Volumes or to maintain and back them up. The cluster should have access to more than one network for communication to ensure the cluster is highly available.
  • Traffic requirements: Usually low bandwidth and low latency. Occasionally, high bandwidth.
  • Recommended access: Private access.

Network access type: Live migration (Cluster 2)
  • Cluster role: Cluster Only
  • Purpose: Transfer virtual machine memory and state.
  • Traffic requirements: High bandwidth and low latency during migrations.
  • Recommended access: Private access.
Table adapted from Hyper-V: Live Migration Network Configuration Guide

Resources

Networking Overview for 2012/R2
NIC Teaming Overview 2012/R2
Windows PowerShell Cmdlets for Networking 2012/R2

Check out the other posts in this series!