Category Archives: Quick Start

Building a Hyper-V Cluster – Using SCVMM to Deploy Host Clusters – Part 4/6

SCVMM Deploying Clusters

In this video we demonstrate how quick and easy it is to create a cluster in SCVMM.

Prerequisites

Identify Cluster Name & IP Address To Use

  • If using DHCP, an address will be assigned automatically
  • Can source from SCVMM IP Pool

Plan LUN usage

  • Witness
  • CSVs

Steps to deploy a cluster with SCVMM

1. Verify networks are in place

2. Verify storage is in place

3. Create Cluster Wizard

  • Cluster Name (this is the name you will use for managing the cluster)
  • Select a Run As account that has administrator privileges on the hosts and the ability to create computer accounts in AD

4. Select nodes you would like to be included in the cluster

5. Set the cluster IP address from an IP pool, define an IP manually, or leverage DHCP

6. Configure storage LUN usage, then format, label, and create CSVs

7. Configure virtual switches if appropriate

8. Monitor SCVMM job to verify success

Click ‘View Script’ to see a sample script for deploying the cluster programmatically with PowerShell
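
For reference, a comparable deployment can be scripted with the VMM cmdlets. The sketch below uses names from this lab plus an assumed Run As account; confirm the exact parameters with Get-Help Install-SCVMHostCluster for your VMM build.

#Sketch only: create the cluster from hosts already under VMM management (account name is an assumption)
$runAsAccount = Get-SCRunAsAccount -Name "HostAdmin"
$nodes = @(Get-SCVMHost -ComputerName "2k12r2-node1") + @(Get-SCVMHost -ComputerName "2k12r2-node2")
Install-SCVMHostCluster -ClusterName "HVC1" -VMHost $nodes -Credential $runAsAccount -ClusterIPAddress "192.168.0.100"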

Resources

TechNet – How to Create a Hyper-V Host Cluster in VMM

Check out the other videos in this series!

Building a Hyper-V Cluster – SCVMM Configuring SMI-S and SMB3 Storage – Part 3/6

SCVMM Configuring SMI-S and SMB3 Storage

As of Windows Server 2012, Microsoft iSCSI Target Server is a server role that enables the server to function as a storage device.

With VMM in System Center 2012 R2, you can manage an iSCSI Target Server running any of several operating system versions:

• With Windows Server 2012 on the iSCSI Target Server: You must first install the necessary SMI-S provider on the iSCSI Target Server. The provider is included on the VMM media at \amd64\Setup\msi\iSCSITargetSMISProvider.msi

• Starting with Windows Server 2012 R2 on the iSCSI Target Server: You only need to install the iSCSI Target Server Role Service and the iSCSI Target Storage Provider Role Service as shown below.

The SMI-S provider follows an “embedded” provider model, where the provider is installed on the iSCSI Target Server computer. The SMI-S provider is WMI-based and manages the iSCSI Target Server by using the iSCSI Target WMI provider.
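
If you prefer PowerShell, both role services can be installed with a single command (these are the same feature names used in the iSCSI storage video later on this page):

#Install the iSCSI Target Server role service and the iSCSI Target Storage Provider (VDS/VSS) role service
Install-WindowsFeature -Name FS-iSCSITarget-Server, iSCSITarget-VSS-VDS -IncludeManagementTools
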
To install and manage the iSCSI Target Server (2012 R2) with VMM use the following steps:

1. Install the iSCSI Target Server and iSCSI Target Storage Provider as shown above on the server that will become the iSCSI target.
2. Using VMM, in the Fabric workspace under the Storage area, right-click Providers and add a storage device.
3. Choose the option “SAN and NAS devices discovered and managed by a SMI-S provider”. Click Next.
4. Choose the protocol “SMI-S WMI”, enter the name of the server with the iSCSI target, and choose a Run As account.  Click Next.
5. The server entered will be scanned, providing the options to choose which local storage devices should be available for presentation by VMM.
6. Once the provider is installed, create LUNs, allocate them to host groups, and add storage to servers as demonstrated in the video (a PowerShell sketch of the provider registration follows below).
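
The provider registration in steps 2 through 4 can also be scripted. The sketch below is only an approximation: the server name and Run As account are placeholders, and the -AddSmisWmiProvider switch is written from memory, so confirm the exact parameter set with Get-Help Add-SCStorageProvider before relying on it.

#Sketch only: register the iSCSI Target Server as an SMI-S (WMI) storage provider in VMM
$runAs = Get-SCRunAsAccount -Name "StorageAdmin"     #assumed Run As account name
Add-SCStorageProvider -AddSmisWmiProvider -Name "iSCSITarget01" -ComputerName "iscsitarget01.demo.lcl" -RunAsAccount $runAs
#Rescan the provider so VMM refreshes the discovered storage objects
Get-SCStorageProvider -Name "iSCSITarget01" | Read-SCStorageProvider
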
We can also manage SMB 3.0 shares using VMM.  Windows Server 2012 and 2012 R2 both support SMB 3.0 and can be managed from SCVMM.  To manage SMB shares with VMM follow the steps below:

1. Using VMM, in the Fabric workspace under the Storage area, right-click Providers and add a storage device.
2. Choose the option “Windows based file server”.  Click Next.
3. Enter the name of the server that you will be managing and using for SMB 3.0 sharing.
4. Complete the wizard to finish adding the provider.
5. Once the provider is configured, add shares to VMM and allocate them to Hyper-V hosts as demonstrated in the video (see the sketch below).
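
A rough PowerShell equivalent for adding the file server provider is sketched below; as with the SMI-S example above, the server name and the -AddWindowsNativeWmiProvider switch are assumptions to verify with Get-Help.

#Sketch only: add a Windows-based file server as a storage provider in VMM
$runAs = Get-SCRunAsAccount -Name "StorageAdmin"     #assumed Run As account name
Add-SCStorageProvider -AddWindowsNativeWmiProvider -Name "FileServer01" -ComputerName "fileserver01.demo.lcl" -RunAsAccount $runAs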

Check out the other videos in this series!

Building a Hyper-V Cluster – SCVMM Configuring Networks and Logical Switches – Part 2/6

Configuring Networks and Logical Switches

In part two of the video series we go over how to implement logical networking in System Center Virtual Machine Manager 2012 R2 (SCVMM). First we provide an overview of logical networking and why it is a good idea. We then talk about each of the fabric components necessary to implement logical networking. Finally, we implement logical networking in the SCVMM GUI and then show the same process with PowerShell.

Logical Networking Overview

Logical networks provide a way for administrators to represent the physical network configuration in the virtual environment. This enables many features, such as delegating access to network segments to specific user roles. It also eases the deployment of converged networking and can help ensure all of your Hyper-V hosts have an identical network configuration. If someone makes a change to the network configuration in Hyper-V Manager or Failover Cluster Manager, the host will be flagged as not compliant in SCVMM. The network configuration deployed via logical networking resides on the Hyper-V hosts and is not dependent on SCVMM to stay online. This configuration survives reboots even if SCVMM is offline.

When deploying logical networks, the management IP must be available the entire time the switches are deployed. This can be challenging when a system only has two NICs, as the management VLAN must be available as both tagged and untagged (native). Systems with more than two adapters are easier to configure, as the management interface can be deployed locally before the system is imported into SCVMM.

Some of the logical networking features can be used when importing Hyper-V hosts with an existing virtual switch. SCVMM will detect existing configurations as ‘Standard Switches’. The administrator must manually select the logical networks in the properties of the host hardware to use virtual networks.

Networking Concepts

This diagram shows how all of the fabric components in SCVMM relate to one another.
Logical Network Components

VM Network

This component allows you to assign a network segment (VLAN) to a virtual adapter. It is created under ‘VMs and Services’ rather than Fabric-Networking. One VM network will typically be associated with one network segment. This gives the network segment a friendly name so that users do not need to know subnets or VLAN IDs. It can also have permissions assigned so that only certain users can select the network segment for their virtual machines.

Logical Network

Logical networks represent a group of network segments. Logical networks may group network segments in many ways:

  • A single segment or VLAN
  • All Production segments in all sites
  • All segments in a single site

Logical Network – Network Site

Logical networks have a subcomponent called a network site. A network site can be used to associate network segments with host groups. Multiple sites can exist in a single logical network. Network sites are primarily used to represent geographies or unique areas such as a DMZ.

Logical Network – Network Site – Subnet / VLAN

Subnets and VLANs can be defined within the network site. Subnets/VLANs are used to associate one or more network segments within a site. You do not have to populate the subnet field in all cases.

IP Pool

This component is used to associate a range of IP addresses with a network segment. VMM can then assign these addresses statically to VMs or Hyper-V hosts.

Port Profile

Two types of port profiles exist, ‘Uplink’ and ‘Virtual Adapter’. Uplink port profiles are used to represent the network segments (VLANs) configured on the physical switch port to which a Hyper-V host is connected. An uplink port profile also defines the teaming and load-balancing mode for a host.

Virtual Adapter port profiles provide a way to create a collection of settings pertaining to virtual adapters. These profiles can define settings such as network optimization, security, and QoS. Virtual adapter port profiles are assigned to virtual adapters in VMs and Hyper-V hosts.

Logical Switch

The logical switch component is a vSwitch deployed by SCVMM, employing a network topology and configuration defined by the components listed above. It is not possible to import existing Hyper-V network configurations into SCVMM as logical switches. Both the LBFO team and the vSwitch must be created by SCVMM. Forcing deployment through SCVMM ensures configuration uniformity among the hosts where the switch is deployed.

A logical switch will have an association with one or more virtual adapter port profiles. It will also have at least one uplink port profile. When deploying a logical switch, one uplink port profile is selected, and this determines the teaming and load-balancing modes for the vSwitch. Logical switches are the last network fabric component deployed, as they depend on all of the other fabric components.

Example Configuration

Example Logical Network
In the video we deploy a sample configuration with two data center sites. These sites have several network segments each. The segments are grouped into three logical networks: Dev, Backup, and Prod. Dev is only in Las Vegas, while Prod is in both datacenters. Prod uses a different VLAN ID in each data center. Backup is a single stretched VLAN. Two uplink port profiles are created to describe the two possible switch port configurations for the Hyper-V hosts. In this case the switch ports are uniformly configured within a site, so one port profile is required for the Seattle datacenter and a second for the Las Vegas datacenter. These port profiles can be used to create two logical switches: Host and Virtual Machine. In our example we use separate physical adapters for the host traffic and the VM traffic.
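
To give a feel for the PowerShell side shown in the video, here is a minimal sketch that builds one logical network with a single site and an IP pool. The names and subnets are illustrative only, and the cmdlets (New-SCLogicalNetwork, New-SCSubnetVLan, New-SCLogicalNetworkDefinition, New-SCStaticIPAddressPool, New-SCVMNetwork) should be checked with Get-Help, since parameter names can differ between VMM releases.

#Sketch only: a "Prod" logical network with a Seattle site on VLAN 20 and a static IP pool (illustrative values)
$prod = New-SCLogicalNetwork -Name "Prod"
$subnetVlan = New-SCSubnetVLan -Subnet "10.10.20.0/24" -VLanID 20
$seattle = Get-SCVMHostGroup -Name "Seattle"          #assumed host group name
$prodSite = New-SCLogicalNetworkDefinition -Name "Prod-Seattle" -LogicalNetwork $prod -SubnetVLan $subnetVlan -VMHostGroup $seattle
New-SCStaticIPAddressPool -Name "Prod-Seattle-Pool" -LogicalNetworkDefinition $prodSite -Subnet "10.10.20.0/24" -IPAddressRangeStart "10.10.20.50" -IPAddressRangeEnd "10.10.20.99"
#Give the segment a friendly name users can pick for their VMs
New-SCVMNetwork -Name "Prod VMs" -LogicalNetwork $prod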

References

TechNet – Configuring Logical Networking in VMM Overview
TechNet – Configuring VM Networks in VMM Illustrated Overview
MSDN Blog – Building a teamed virtual switch for Hyper-V from SCVMM

Check out the other videos in this series!

Building a Hyper-V Cluster – SCVMM Installation and Introduction – Part 1/6

SCVMM Installation and Introduction

In this video the focus is on the installation of SCVMM and an introduction to the console. Viewers of this video will learn the installation scenarios for SCVMM, software requirements, hardware requirements, and the requirements for installing SCVMM into a virtual machine. Installation considerations will be discussed, including the use of service accounts in AD and the distributed key management store in AD. Finally, the video ends with a complete demo of the installation and a walk-through of the console.

Installation Scenarios

For small environments with fewer than 500 VMs, fewer than 20 hosts, and fewer than 5 administrators, it is possible to install VMM in an all-in-one virtual machine running the library and database locally. Larger environments will see performance benefits from splitting out the database components. For very large environments, it may be better to dedicate physical resources to both the database and the SCVMM server. It may be advantageous to split the library role onto another system if you expect to store many VMs or templates in the library. When using a VM, it is best practice to dedicate a VHDX file to the library. This will make it easier to deal with growth of the library data long term.
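
As a quick illustration of the dedicated library disk, a data VHDX can be created on the Hyper-V host and attached to the SCVMM VM; the path, size, and VM name below are placeholders.

#Sketch only: add a dedicated, dynamically expanding data disk for the VMM library
New-VHD -Path "D:\VHDs\SCVMM-Library.vhdx" -SizeBytes 200GB -Dynamic
Add-VMHardDiskDrive -VMName "SCVMM01" -Path "D:\VHDs\SCVMM-Library.vhdx"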

Another consideration is running SCVMM in a highly available configuration. If SCVMM is running inside a virtual machine on a cluster, then the service is already protected from hardware failures. If the VM experiences a software issue, such as failed software maintenance, the physical cluster will not protect against this kind of outage. To protect against guest OS issues, the SCVMM installation itself can also be clustered.

Software Requirements

Before beginning the installation of SCVMM, the following software prerequisites exist for the computer you install SCVMM on:

  • Operating system must be Windows Server 2012 or 2012 R2
  • Operating system can be Standard or Datacenter edition
  • Operating system can run Server with a GUI or Server Core
  • The system must be a member of an Active Directory domain
  • The name of the computer must not exceed 15 characters and must not contain ‘-SCVMM’
  • WinRM 3.0 (or greater) service set to start automatically (this is included in the Windows install)
  • Microsoft .NET Framework 4 or 4.5 (this is included in the Windows install)
  • Windows ADK for Windows 8 – available from the Microsoft Download Center
  • Supported version of SQL Server Standard/Enterprise (MSSQL 2008 R2 or MSSQL 2012)
  • SQL Native Client and SQL Server Command Line Utilities matching the SQL version used:
    SQL Server 2008 R2 Feature Pack (look in the installation instructions for the actual file download)
    SQL Server 2012 Feature Pack (look in the installation instructions for the actual file download)

Hardware Requirements

The following requirements exist for the system you are installing VMM on:

  • 4 cores @ 3.6 GHz (2 cores for environments with fewer than 150 hosts)
  • 8 GB RAM (install will fail below 2 GB and give a warning with less than 4 GB)
  • 10 GB free hard disk space when using a remote SQL server and library; at least 150 GB for a local library and SQL install

Can SCVMM run on a VM?

Running SCVMM in a virtual machine is fully supported and a common practice. VMM can manage the same host where the SCVMM VM is running. If this is the scenario you plan to deploy, simply be cognizant that changes you make to the cluster SCVMM is hosted on may have an impact on your ability to reach the SCVMM server. Avoid performing ‘Quick Migration’ on the VM and instead opt for Live Migration.
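
For example, if the SCVMM VM needs to move to another node, a live migration can be requested explicitly from the cluster; the VM and node names below are placeholders.

#Sketch only: live migrate the SCVMM VM instead of quick migrating it
Move-ClusterVirtualMachineRole -Name "SCVMM01" -Node "2k12r2-node2" -MigrationType Live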

If you are using a virtual machine with Dynamic Memory enabled to run SCVMM, it is recommended to set the startup memory to 4 GB. This will prevent the installer from generating a low memory warning and will ensure the VM has the baseline amount of memory for acceptable performance.
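
From the Hyper-V host this can be set along the following lines while the VM is powered off; the VM name and memory maximum are placeholders.

#Sketch only: give the SCVMM VM a 4 GB startup value with Dynamic Memory enabled
Set-VMMemory -VMName "SCVMM01" -DynamicMemoryEnabled $true -StartupBytes 4GB -MinimumBytes 2GB -MaximumBytes 8GB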

Installation Considerations

Here are a few additional details you should keep in mind when performing the SCVMM installation:

  • You must be an Administrator on the server where VMM will be installed
  • It is recommended you prepare a service account in Active Directory for SCVMM (required for HA VMM installs)
  • Prepare a DKM store in AD for DB Encryption Key Storage (Required for HA VMM Installs)

Distributed Key Management Store

By default, without DKM storage, SCVMM will store the database encryption keys in the registry. This complicates recovery if the VM should be lost and is not supported for HA installations of VMM. By enabling DKM storage during the SCVMM install, the encryption keys are stored in Active Directory. Before you begin the SCVMM installation, you should ensure you have created a container in AD DS. You can use ADSI Edit to create this container as demonstrated in the video. Ensure that the service account for VMM has the ‘Full Control’ permission on the container. Creating and assigning permissions to this container will require an account with Domain Admin privileges.
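
A minimal sketch of preparing the DKM container follows. The container name and service account are assumptions (the demo.lcl domain is from this lab); run the commands as a Domain Admin and adjust the distinguished names to your environment.

#Sketch only: create the DKM container and grant the VMM service account Full Control
Import-Module ActiveDirectory
New-ADObject -Name "VMMDKM" -Type container -Path "DC=demo,DC=lcl"
dsacls "CN=VMMDKM,DC=demo,DC=lcl" /G "DEMO\svc-scvmm:GA"     #GA = full control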

Resources

TechNet – System Requirements: VMM Management Server in System Center 2012 and in System Center 2012 SP1
TechNet – How to Install a VMM Management Server
MSDN – Installing the Windows ADK
TechNet – Virtual Machine Manager
TechNet – Configuring Distributed Key Management in VMM

Check out the other videos in this series!

Deploying and Using SCVMM – Part 0/6

In this Quick Start Series we will show you how quickly and easily you can set up your own SCVMM and Hyper-V cluster.  In this series we are using Server 2012R2, the iSCSI Target Server for shared storage, and SCVMM as a single management interface for the virtual infrastructure.  In each video we will show you how to install and configure the various components to set up a Microsoft virtualization solution leveraging SCVMM.  We will show how to perform these configuration operations in the GUI and then again in PowerShell.
To reproduce the environment in this video series you need:
  • 2 physical computers for Hyper-V hosts
  • 1 VM/Physical server for iSCSI target software (Running Windows 2012R2)
  • 1 VM/Physical server for SCVMM (Running Windows 2012R2)
  • Install any Windows Server 2012R2 SKU Core/Full
  • Network connectivity with at least 1 NIC between all the systems

Video 1 – SCVMM Installation & Introduction
Video 2 – Configuring Networks & Logical Switches
Video 3 – Configuring SMI-S  and SMB3 Storage
Video 4 – Deploy Clusters In VMM
Video 5 – SCVMM Patch Management
Video 6 – SCVMM VM Management

Building a Hyper-V Cluster – Creating Virtual Machines – Part 5/5

Creating and Managing VMs

In this video we will create highly available VMs.  First we create the virtual machines in the GUI then in PowerShell.

When creating a VM, ensure that you always check the box to store the virtual machine in a different location.  If you don’t, the VM’s configuration file and VHD files will be put in the Hyper-V default location.  This is bad because it will be hard to tell which VHDs are associated with which configuration files.  If you check the ‘Store the virtual machine in a different location’ check box, all of the VM’s components will be stored in a single folder.  This will make your management life much easier!  Also, if the VM will be part of the cluster, be sure to create and manage the VM in Failover Cluster Manager rather than Hyper-V Manager.

Store the virtual machine in a different location

PowerShell Code

#Create a new VM
New-VM -Name JasonVM -Path c:\ClusterStorage\CSV1

#Add the VM to the cluster so it becomes highly available
Add-ClusterVirtualMachineRole -VMName JasonVM

#Start the VM and live migrate it to another cluster node
Start-ClusterGroup -Name JasonVM
Move-ClusterVirtualMachineRole -Name JasonVM

#Create and remove VM Snapshot/Checkpoints
Checkpoint-VM -Name JasonVM
Get-VM -Name JasonVM| Get-VMSnapshot
Get-VM -Name JasonVM| Get-VMSnapshot| Remove-VMSnapshot

#Shut down the VM
Stop-VM -Name JasonVM

#List the Hyper-V and Failover Clustering commands
Get-Command -Module hyper-v, failoverclusters

Resources

MSDN: Virtual Machine Live Migration Overview
TechNet: Windows PowerShell: Create Hyper-V virtual machines

Check out the other posts in this series!

Building a Hyper-V Cluster – Building The Hyper-V Cluster – Part 4/5

In this video we validate our cluster node configuration and then create the cluster. Once the cluster is formed, we update the names of various cluster components to match their function. Finally we set up a CSV on the cluster.

In Server 2012R2 the cluster validation will help ensure that the nodes in the cluster are configured identically and correctly. By passing cluster validation and using hardware certified for 2012R2, we ensure our cluster will be in a supported configuration.

When we form the cluster we only need two items: the name and IP address of the cluster. The name we specify will be used to create a computer account in Active Directory. If the user running the New-Cluster command does not have rights to create computer accounts in AD, the account may be prestaged. If this is done, the account should be disabled and the user should have Full Control permission on the account.
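
A hedged sketch of prestaging the cluster name object is shown below; the container, domain, and user account are placeholders for this lab.

#Sketch only: prestage a disabled computer account for the cluster and grant the cluster creator Full Control
Import-Module ActiveDirectory
New-ADComputer -Name "HVC1" -Enabled $false -Path "CN=Computers,DC=demo,DC=lcl"
dsacls "CN=HVC1,CN=Computers,DC=demo,DC=lcl" /G "DEMO\clusteradmin:GA"     #GA = full control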

PowerShell Command

Test-Cluster -node 2k12r2-node1,2k12r2-node2
New-Cluster -Name HVC1 -node 2k12r2-node1,2k12r2-node2 -staticAddress 192.168.0.100

#Update Cluster Network Names to Match Function
(Get-ClusterNetwork| ?{$_.Address -eq "192.168.1.0"}).name = "Management"
(Get-ClusterNetwork| ?{$_.Address -eq "10.0.1.0"}).name = "iSCSI"
(Get-ClusterNetwork| ?{$_.Address -eq "10.0.2.0"}).name = "Cluster1"
(Get-ClusterNetwork| ?{$_.Address -eq "10.0.3.0"}).name = "Cluster2"

#Update Cluster Disk Names to Match Function
(Get-ClusterGroup -Name "Cluster group"| Get-ClusterResource |?{$_.ResourceType -eq "Physical Disk"}).name = "Witness"
(Get-ClusterGroup "available storage"| Get-ClusterResource).name = "CSV1"

#Configure the CSV
Get-ClusterResource -Name "CSV1"| Add-ClusterSharedVolume
Rename-Item -Path C:\ClusterStorage\Volume1 -NewName CSV1

Cluster Network Roles

In our example we did not need to change anything other than the cluster network names. This is because of the excellent work the Windows Failover Clustering team has done on the cluster creation wizard. Each cluster network is automatically configured with the correct cluster role and metric. These settings can be used to fine tune cluster network behavior, but in most cases they are best left at the default configuration.
We can use Get-ClusterNetwork to inspect the values for role and metric:
PS C:\> Get-ClusterNetwork -Cluster HVC1 | Format-Table Name, Role, Metric, AutoMetric -AutoSize

Name       Role Metric AutoMetric
----       ---- ------ ----------
Cluster1      1  30384       True
Cluster2      1  39841       True
iSCSI         0  79842       True
Management    3  79841       True

We will connect to the cluster network name using the role 3 network. The cluster networks are role 1 and will be used for cluster communications. iSCSI communication was detected on the storage network, so it was configured as a role 0 network, blocked from use by the cluster.
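
If autodetection ever gets a network wrong, the role can be adjusted directly on the cluster network object. For example, to re-mark the iSCSI network so the cluster never uses it:

#Cluster network roles: 0 = none, 1 = cluster only, 3 = cluster and client
(Get-ClusterNetwork -Name "iSCSI").Role = 0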

We will do a deep dive on cluster networks in another video.

Check out the other posts in this series!

Building a Hyper-V Cluster – iSCSI Storage – Part 3/5

Configuring iSCSI storage for a Hyper-V Cluster

In this video we use the iSCSI Target Server built into Server 2012R2 to present shared storage to our cluster nodes.

Install and Configure iSCSI Target

We must first install the FS-iSCSITarget-Server feature. Once this is installed we will create a target on our storage server. Next we will create virtual disks for the witness disk and CSV. These virtual disks will be attached to the target and presented to our cluster nodes as LUNs. Finally, we will configure the target to allow access from the IQNs of our Hyper-V host nodes.  We can discover the IQN of the Hyper-V hosts by running the command (Get-InitiatorPort).NodeAddress on the cluster nodes.

 PowerShell Commands

#Install target server
Install-WindowsFeature -Name FS-iSCSITarget-Server, iSCSITarget-VSS-VDS -IncludeManagementTools -Restart
#create target
New-IscsiServerTarget -TargetName HyperVCluster
New-IscsiVirtualDisk -Path c:\HVC1-W.vhdx -SizeBytes 1GB
New-IscsiVirtualDisk -Path c:\HVC1-CSV.vhdx -SizeBytes 50GB
Add-IscsiVirtualDiskTargetMapping -TargetName HyperVCluster -Path C:\HVC1-W.vhdx
Add-IscsiVirtualDiskTargetMapping -TargetName HyperVCluster -Path C:\HVC1-CSV.vhdx
#(Get-InitiatorPort).NodeAddress
#Allow nodes to access target LUNs
Set-IscsiServerTarget -TargetName HyperVCluster -InitiatorId @("IQN:iqn.1991-05.com.microsoft:2012r2-node1.demo.lcl","IQN:iqn.1991-05.com.microsoft:2012r2-node2.demo.lcl")

Connect Nodes to iSCSI Target

Once the target is created and configured, we need to attach the iSCSI initiator in each node to the storage. We will use MPIO to ensure the best performance and availability of storage.  When we enable the MS DSM to claim all iSCSI LUNs, we must reboot the node for the setting to take effect. MPIO is utilized by creating a persistent connection to the target for each data NIC on the target server and from all iSCSI initiator NICs on our Hyper-V servers.  Because our Hyper-V servers are using converged networking, we only have one iSCSI NIC.  In our example resiliency is provided by the LBFO team we created in the last video.

PowerShell Commands

Set-Service -Name msiscsi -StartupType Automatic
Start-Service msiscsi
#A reboot is required after the claim
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.107
$target = Get-IscsiTarget -NodeAddress *HyperVCluster*
$target| Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 192.168.1.21 -TargetPortalAddress 10.0.1.10
$target| Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 192.168.1.21 -TargetPortalAddress 10.0.1.11

Prepare the LUNs for use in the Cluster

Finally, once storage is available from both nodes, we must bring the LUNs online, initialize them, and format them so that they will be ready for import into the cluster. This is done from only one node, as cluster disks must only ever be online on one node at a time.

 PowerShell Commands

#Prep Drives from one node
$Disk = get-disk|?{($_.size -eq 1GB) -or ($_.size -eq 50GB)}
$disk|Initialize-Disk -PartitionStyle GPT
$disk|New-Partition -UseMaximumSize -AssignDriveLetter| Format-Volume -Confirm:$false

Resources

What’s New for iSCSI Target Server in Windows Server 2012 R2
Storage Team Blog – iSCSI Target Server in Windows Server 2012 R2
Storage Team Blog – iSCSI Target Storage (VDS/VSS) Provider
iSCSI Target Cmdlets in Windows PowerShell
MultiPath I/O (MPIO) Cmdlets in Windows PowerShell
Bruce Langworthy – MSFT: Managing iSCSI Initiator connections with Windows PowerShell on Windows Server 2012

Check out the other posts in this series!

Building a Hyper-V Cluster – Configuring Networks – Part 2/5

PowerShell Commands

# New Network LBFO Team
$NICname = Get-NetAdapter | %{$_.name}
New-NetLbfoTeam -Name LBFOTeam -TeamMembers $NICname -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
# Attach new VSwitch to LBFO team
New-VMSwitch -Name HVSwitch -NetAdapterName LBFOTeam -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create vNICs on VSwitch for parent OS
# Management vNIC
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Management)" -NewName Management
#In this lab we are using one VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Management -Access -VlanId 10
New-NetIPAddress -InterfaceAlias Management -IPAddress 192.168.0.101 -PrefixLength 24 -DefaultGateway 192.168.0.1 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Management -IPAddress 192.168.0.102 -PrefixLength 24 -DefaultGateway 192.168.0.1 -Confirm:$false
Set-DnsClientServerAddress -InterfaceAlias Management -ServerAddresses 192.168.0.211, 192.168.0.212

# Cluster1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster1 -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Cluster1)" -NewName Cluster1
#In this lab we are using one VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster1 -Access -VlanId 2
New-NetIPAddress -InterfaceAlias Cluster1 -IPAddress 10.0.2.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Cluster1 -IPAddress 10.0.2.21 -PrefixLength 24 -Confirm:$false

# Cluster2 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster2 -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Cluster2)" -NewName Cluster2
#In this lab we are using one VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster2 -Access -VlanId 3
New-NetIPAddress -InterfaceAlias Cluster2 -IPAddress 10.0.3.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Cluster2 -IPAddress 10.0.3.21 -PrefixLength 24 -Confirm:$false

# iSCSI vNIC
Add-VMNetworkAdapter -ManagementOS -Name iSCSI -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (iSCSI)" -NewName iSCSI
#In this lab we are using one VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName iSCSI -Access -VlanId 1
New-NetIPAddress -InterfaceAlias iSCSI -IPAddress 10.0.1.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias iSCSI -IPAddress 10.0.1.21 -PrefixLength 24 -Confirm:$false

Cluster Network Roles

In the video we leverage PowerShell to deploy converged networking to our Hyper-V hosts.  We have two physical network adapters to work with, but need to implement all of the network roles in the table below so that we can deploy a cluster per best practices.  To accomplish this we create a team and attach a virtual switch.  This vSwitch is shared by the host and the VMs.  The host is given four vNICs on the virtual switch to accommodate the various types of network traffic (Storage, Cluster1, Cluster2, Management).  The failover cluster creation process will automatically detect iSCSI traffic on our storage network and set that network for no cluster access.  It will also detect the default gateway on the management interface and set that network for cluster use and client use.  This is the network where we will create our cluster network name when the cluster is formed.

The remaining two networks are non-routed and are used for internal cluster communication.  Cluster communications, CSV traffic, and the cluster heartbeat will use BOTH of these networks equally. One of the networks will be used for live migration traffic. In 2012R2 we have the option of using SMB3 for live migration to force the cluster to use both cluster-only networks if we prefer that to the default compression option.  In the video we don’t care which of the cluster networks is preferred for live migration, so we simply name our networks Cluster1 and Cluster2.
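
The SMB live migration option mentioned above is a per-host Hyper-V setting; a quick sketch of switching to it (and back to the default compression) looks like this:

#Sketch only: use SMB for live migration so both cluster-only networks can be used via SMB Multichannel
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
#Set-VMHost -VirtualMachineMigrationPerformanceOption Compression   #2012R2 default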

We break the traffic into 4 vNICs rather than just using one because this will help us to ensure network traffic is efficiently utilizing the hardware.  By default the management vNIC will be using VMQ. Because we created the LBFO team using Hyper-V Port the vNICs will be balanced across the physical NICs in the team.  Because the networks roles are broken out into separate vNICs, we can also later apply QoS policies at the vNIC level to ensure important traffic has first access to the network.
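
For example, minimum bandwidth weights could later be applied to the host vNICs created above (the vSwitch was created with -MinimumBandwidthMode Weight); the weight values below are illustrative only.

#Sketch only: relative QoS weights for the host vNICs (example values)
Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name Cluster1 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name Cluster2 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name iSCSI -MinimumBandwidthWeight 30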

When using converged networks, the multiple vNICs provide the ability to fine tune the quality of service for each type of traffic, while high availability is provided by the LBFO team they are created on. If we had unlimited physical adapters, we would create a team for management and a separate team for VM access networks. We would use two adapters configured with MPIO for our storage network.  The remaining two cluster networks would each be configured on a single physical adapter, as failover clustering will automatically fail over cluster communication between cluster networks in the event of failures.  Given your number of available physical adapters, you may choose many different possible configurations.  In doing so, keep the network traffic and access requirements outlined below in mind.

Network access type: Storage
Cluster role: None
Purpose: Access storage through iSCSI or Fibre Channel (Fibre Channel does not need a network adapter).
Traffic requirements: High bandwidth and low latency.
Recommended access: Usually dedicated and private access. Refer to your storage vendor for guidelines.

Network access type: Virtual machine access
Cluster role: N/A
Purpose: Workloads running on virtual machines usually require external network connectivity to service client requests.
Traffic requirements: Varies.
Recommended access: Public access, which could be teamed for link aggregation or to fail over the cluster.

Network access type: Management
Cluster role: Cluster and Client
Purpose: Managing the Hyper-V management operating system. This network is used by Hyper-V Manager or System Center Virtual Machine Manager (VMM).
Traffic requirements: Low bandwidth.
Recommended access: Public access, which could be teamed to fail over the cluster.

Network access type: Cluster and Cluster Shared Volumes (Cluster 1)
Cluster role: Cluster Only
Purpose: Preferred network used by the cluster for communications to maintain cluster health. Also used by Cluster Shared Volumes to send data between owner and non-owner nodes. If storage access is interrupted, this network is used to access the Cluster Shared Volumes or to maintain and back up the Cluster Shared Volumes. The cluster should have access to more than one network for communication to ensure the cluster is highly available.
Traffic requirements: Usually low bandwidth and low latency. Occasionally, high bandwidth.
Recommended access: Private access.

Network access type: Live migration (Cluster 2)
Cluster role: Cluster Only
Purpose: Transfer virtual machine memory and state.
Traffic requirements: High bandwidth and low latency during migrations.
Recommended access: Private access.
Table adapted from Hyper-V: Live Migration Network Configuration Guide

Resources

Networking Overview for 2012/R2
NIC Teaming Overview 2012/R2
Windows PowerShell Cmdlets for Networking 2012/R2

Check out the other posts in this series!

Building a Hyper-V Cluster – Installing Roles & Features – Part 1/5

 PowerShell Commands

Install-WindowsFeature -Name Hyper-V, Multipath-IO, Failover-Clustering -IncludeManagementTools -Restart
Get-WindowsFeature|?{$_.Installed -eq $true}

Hardware Requirements

Hyper-V requires a 64-bit processor that includes the following:

  • Hardware-assisted virtualization. This is available in processors that include a virtualization option—specifically processors with Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) technology.
  • Hardware-enforced Data Execution Prevention (DEP) must be available and enabled. Specifically, you must enable Intel XD bit (execute disable bit) or AMD NX bit (no execute bit).
  • SLAT (Second Level Address Translation) is recommended for performance improvements and required for RemoteFX vGPUs.
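
A quick way to confirm these processor features on a candidate host is the built-in systeminfo tool, which lists a Hyper-V Requirements section on Windows Server 2012 and later:

#Check virtualization, DEP, and SLAT support (review the Hyper-V Requirements lines)
systeminfo.exe | Select-String -Pattern "Hyper-V Requirements" -Context 0,3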

Windows Server Editions Supporting Hyper-V

  • 2012R2 Hyper-V Server: 0 VM licenses included
  • 2012R2 Standard: 2 VM licenses included
  • 2012R2 Datacenter: Unlimited VM licenses included

Check out the other posts in this series!