Friday, February 28, 2014

Install VMware drivers offline into a Citrix PVS vDisk

I am attempting to disable interrupt coalescing for some testing we are doing with a latency-sensitive application, and I have two VMware virtual machines configured this way that work as expected.

The latency tuning guidance I followed is here:
http://www.vmware.com/files/pdf/techpaper/VMW-Tuning-Latency-Sensitive-Workloads.pdf

Essentially, we turned off interrupt coalescing on the physical NIC by doing the following:
Logging into the ESXi host with SSH
# esxcli network nic list 
(e1000e found as our NIC driver)

# esxcli system module parameters list -m e1000e
(we see InterruptThrottleRate as a parameter)
# esxcli system module parameters set -m e1000e -p "InterruptThrottleRate=0" 
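
To confirm the parameter stuck, list it again (the new value only takes effect once the driver module reloads, typically after a host reboot):
# esxcli system module parameters list -m e1000e | grep InterruptThrottleRate
(the Value column should now show 0)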

Then we modified the VMware virtual machines themselves. Through the vSphere Client, go to VM Settings > Options tab > Advanced > General > Configuration Parameters and add an entry for ethernetX.coalescingScheme with the value "disabled".

We have two NICs assigned to each of our PVS VMs. One NIC is dedicated to the provisioning traffic and one to access the rest of the network, so I had to add two lines to my configuration:
ethernet0.coalescingScheme = disabled
ethernet1.coalescingScheme = disabled

For the VMware virtual machines we just had the one line:
ethernet0.coalescingScheme = disabled
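
If you have more than a couple of VMs to modify, the same parameters can be set from PowerCLI instead of clicking through the vSphere Client. A minimal sketch, assuming PowerCLI is installed and "MyPVSTarget" stands in for your VM name (change these settings with the VM powered off):
Connect-VIServer -Server vcenter.example.com
(vcenter.example.com is a placeholder for your vCenter or ESXi host)
$vm = Get-VM -Name "MyPVSTarget"
New-AdvancedSetting -Entity $vm -Name "ethernet0.coalescingScheme" -Value "disabled" -Confirm:$false
New-AdvancedSetting -Entity $vm -Name "ethernet1.coalescingScheme" -Value "disabled" -Confirm:$false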


Upon powering up the VMware virtual machines, the per-packet latency dropped significantly and our application was much more responsive.

Unfortunately, even with identical settings on the VMware virtual machines and the Citrix PVS image, the PVS image will not disable interrupt coalescing, and our packets consistently show higher latency. We built the vDisk image a couple of years ago (~2011), and the vDisk now has outdated drivers that I suspect may be the issue. The VMware machines have a VMXNET3 driver from August 2013, while our PVS vDisk has a VMXNET3 driver from March 2011.

To test whether a newer driver would help, I did not want to reverse image the vDisk as that is such a pain in the ass. So I tried something else: I made a new maintenance version of the vDisk and then mounted it on the PVS server:
C:\Users\svc_ctxinstall>"C:\Program Files\Citrix\Provisioning Services\CVhdMount.exe" -p 1 X:\vDisks-XenApp\XenApp65Tn01.14.avhd

This mounted the vDisk as drive "D:\"
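
If the drive letter isn't obvious on your system, running mountvol with no arguments lists every volume and its mount point:
C:\Users\svc_ctxinstall>mountvol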

I then took the newer driver from the VMware virtual machine and injected it into the vDisk:

C:\Users\svc_ctxinstall>dism /image:D:\ /add-driver /driver:"C:\Program Files\VMware\VMware Tools\Drivers\vmxnet3"
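
If you wanted to refresh all of the VMware Tools drivers at once rather than just the NIC driver, dism can also recurse through the whole Drivers folder (a variation I did not need here):
C:\Users\svc_ctxinstall>dism /image:D:\ /add-driver /driver:"C:\Program Files\VMware\VMware Tools\Drivers" /recurse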

I could see my newer driver installed alongside the existing driver:
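The listing below is dism's enumeration of the third-party driver packages in the offline image:

C:\Users\svc_ctxinstall>dism /image:D:\ /get-drivers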
Published Name : oem57.inf
Original File Name : vmxnet3ndis6.inf
Inbox : No
Class Name : Net
Provider Name : VMware, Inc.
Date : 08/28/2012
Version : 1.3.11.0

Published Name : oem6.inf
Original File Name : vmmouse.inf
Inbox : No
Class Name : Mouse
Provider Name : VMware, Inc.
Date : 11/17/2009
Version : 12.4.0.6

Published Name : oem7.inf
Original File Name : vmaudio.inf
Inbox : No
Class Name : MEDIA
Provider Name : VMware
Date : 04/21/2009
Version : 5.10.0.3506

Published Name : oem9.inf
Original File Name : vmxnet3ndis6.inf
Inbox : No
Class Name : Net
Provider Name : VMware, Inc.
Date : 11/22/2011
Version : 1.2.24.0

Then I unmounted the vDisk:
C:\Users\svc_ctxinstall>"C:\Program Files\Citrix\Provisioning Services\CVhdMount.exe" -u 1

I then set the vDisk to maintenance mode, set my PVS target device to maintenance, and booted it up. When I checked Device Manager I saw that the driver version was still 1.2.24.0.


But clicking "Update Driver..." let me update our production NIC to the newer version: I chose "Search automatically" and it found the newer, injected driver. I then rebooted the VM and success!


Tuesday, February 25, 2014

Time how long a WMI filter call takes

The following commands time how long a WMI filter query takes to execute on your server or PC.

# WQL query from the WMI filter you want to test
$query = "Select * from Win32_Processor where AddressWidth = '32'"
# run the query and report the elapsed time
Measure-Command { Get-WmiObject -Query $query }
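
Measure-Command returns a TimeSpan object; if you just want a single number to compare between filters, grab TotalMilliseconds:

(Measure-Command { Get-WmiObject -Query $query }).TotalMilliseconds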