Guest NeerinckB Posted February 14, 2008 Hi, I suspect that Windows Server reports high CPU utilization in Task Manager when the real problem is disk I/O latency, not heavy CPU usage. I would like feedback from other professionals about this. I'm currently working with hundreds of Windows Servers that are virtualized using VMware, and we often get complaints about Windows Server VMs with high CPU usage that we can't explain, while we suspect the storage of responding slowly. Thanks, -- Bernard Neerinck MS MCSE HP MASE VMware Certified
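For reference, one way to compare Task Manager's CPU figure against disk latency from inside a guest is the built-in typeperf tool. This is just a sketch; the interval and sample count are arbitrary choices:

    typeperf "\Processor(_Total)\% Processor Time" ^
             "\PhysicalDisk(_Total)\Avg. Disk sec/Read" ^
             "\PhysicalDisk(_Total)\Avg. Disk sec/Write" ^
             "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
             -si 5 -sc 60

If Avg. Disk sec/Read or sec/Write sits in the tens of milliseconds while % Processor Time is pegged, that would point at the storage rather than a genuine CPU problem.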
Guest Edwin vMierlo [MVP] Posted February 14, 2008 Re: Disk latency generates High CPU I would expect it the other way around: if you have high I/O latency, you would see *lower* CPU utilization, because your application is slowed down by not being able to do I/Os quickly. In fact, I have often seen the reverse on server applications: a disk bottleneck was solved and a CPU bottleneck appeared, because the application was suddenly doing fast I/Os and was now *hammering* the CPU with more transactions. "NeerinckB" <NeerinckB@discussions.microsoft.com> wrote in message news:D77D617B-A86E-4673-AE5B-B4E977099D82@microsoft.com... [original message snipped]
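A quick way to see that effect on any Linux box, as a sketch (the file name and sizes are arbitrary): time an I/O-bound write and compare elapsed time against CPU time.

    # I/O-bound sequential write; oflag=direct bypasses the page cache
    time dd if=/dev/zero of=/tmp/io_test bs=1M count=512 oflag=direct
    rm /tmp/io_test

The user+sys figures will be a small fraction of "real"; the gap is time spent waiting on the disk, which shows up as I/O wait (%wa in top or vmstat), not as CPU charged to the process.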
Guest Mark Perry Posted June 16, 2008 Disk latency High CPU

Hi, I can confirm that this is an issue, and I have also figured out a way around it. The high CPU usage seems related to poor disk I/O, as the guest is wasting cycles waiting for data. If you run up perfmon and check the stats under heavy utilisation, you will probably see the Avg. Disk Queue Length flatlining at the top of the graph as the queues go off the chart; the CPU and memory paging follow suit.

Doing some performance testing, I could not get a sustained write to disk over 10 MB/s inside the guest, while the underlying OS could write to disk at near 300 MB/s. Web portal boxes tend to be fine like this, but any server running database apps, e.g. Exchange or SQL, tends to be horrific under load. I tested under both VMware Server and ESX and experienced the same issue.

I therefore concluded that the preallocated virtual vmdk disk was the cause, as on Linux it resembles a network block device, very similar to the Linux loopback device. I then created a loopback device to test performance, which was twice as fast as a vmdk block device. I also tested VirtualBox, which gave similar results to the loopback, so I loosely concluded it was using the same technology. Not sure why VMware's version of a loopback device is half the speed of Linux's own loopback device, though.

Since these were still horrific speeds, I decided to migrate my existing VMs to raw physical disks. ESX does not support this, and if you look at the ESX speed-test white papers, the ESX iSCSI and Fibre Channel speeds are very good, not least because those disks are raw.

So I built a shiny new server. The only problem I encountered was that VMware's raw disk support does not include Linux LVM support (probably not an issue if you're running a M$ host on basic disks; not sure about dynamic disks or a flat ext partition; don't use XFS if you want snapshotting, it hangs), so VMware wouldn't see my raw partitions on LVM. No problem: just get the vmgbd (vmware generic block device) patch, create soft links from /dev/sdx -> /dev/VG-Vol-Group/LV-Logical-Volume, and point VMware at those. While setting up, use the root account, or you will waste time getting access to the partitions.

Once I had the raw partitions set up, I created loopback devices from my preallocated vmdks (this only works with preallocated disks) using the losetup command, and then used dd to dump the data from the vmdk to the raw disk. The raw disks must be the same size or bigger than the original disk, unless you really want to mess about. In the case of a larger physical disk, you can boot your OS, make sure it works, then shut down and run ntfsresize against it to expand the NTFS filesystem to fill the partition. Once done, you have near-native disk speeds. Rough sketches of the commands are at the end of this post.

Sorry if this isn't clear, but I went through a lot of pain to get this far...
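For the throughput test, a plain sequential write is enough to show the gap. This is just one way to do it (the path and sizes are examples); run it inside the guest and then on the host:

    # sequential write test; oflag=direct bypasses the page cache so
    # the figure reflects the disk (or vmdk), not RAM
    dd if=/dev/zero of=/tmp/write_test bs=1M count=1024 oflag=direct
    rm /tmp/write_test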
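For the LVM workaround, the soft links are ordinary symlinks; the device and volume names here are placeholders for your own:

    # run as root; point a /dev/sdX-style name at the logical volume
    ln -s /dev/VG-Vol-Group/LV-Logical-Volume /dev/sdx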
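And the migration step itself, assuming a preallocated disk whose flat extent is raw data (the paths and device names are examples; the target disk must be at least as big as the vmdk):

    # expose the preallocated vmdk's flat extent as a block device
    losetup /dev/loop0 /vm/guest/guest-flat.vmdk

    # dump the virtual disk onto the raw disk, then detach
    dd if=/dev/loop0 of=/dev/sdb bs=1M
    losetup -d /dev/loop0

    # if the target is larger: boot once to confirm it works, shut
    # down, then grow NTFS to fill the partition
    ntfsresize /dev/sdb1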