kvm.vger.kernel.org archive mirror
* Workload spikes on KVM host when doing IO on a guest...
@ 2012-05-20  0:55 Erik Brakkee
  2012-05-20 13:54 ` Avi Kivity
  0 siblings, 1 reply; 4+ messages in thread
From: Erik Brakkee @ 2012-05-20  0:55 UTC (permalink / raw)
  To: kvm

Hi,


I am seeing high workload spikes of approx. 15 when I do IO inside a KVM 
guest, for instance:

   dd if=/dev/zero bs=1G count=1 of=hog

When I execute a similar command on the host to write a file on the same 
physical disk, the workload only goes to about 3.
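By a similar command I mean writing the same amount of data to a file on a 
filesystem that lives on the same physical disk, roughly (the target path 
here is just an example):

   dd if=/dev/zero bs=1G count=1 of=/srv/test/hog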

I am using virtio on the guest with cache mode none. Also, I am using 
the noop IO scheduler on the guest and the deadline IO scheduler on the 
host.
The guest is allocated a logical volume from the host.
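For completeness, this is roughly how I check and set the schedulers (the 
device names below are just examples; substitute the actual guest and host 
disks):

   # the active scheduler is shown in [brackets]
   cat /sys/block/vda/queue/scheduler            # guest disk
   echo noop > /sys/block/vda/queue/scheduler
   cat /sys/block/sda/queue/scheduler            # host disk
   echo deadline > /sys/block/sda/queue/scheduler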

When I execute the dd command on the guest it finishes almost 
instantaneously, but when I execute it on the host I have to wait 
approx. 10 seconds. Specifically, on the guest I see a transfer speed 
of approx. 600 MB/s and on the host I get 75.9 MB/s. The figure for the 
host is the more reliable one, as it is close to what the hard disks 
can handle (WD enterprise-class SATA hard disks).

What appears to be happening is that somehow all IO from the guest is 
handed off to the host immediately, just as if write-back caching were used.

When I look at the output of 'virsh dumpxml <vmname>' I get this as part 
of the output, which indicates that cache="none" is actually used:

<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/bootdisks/sparrow'/>
<target dev='sda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>

The host is an openSUSE 11.3 system (Linux falcon 2.6.34.10-0.6-default 
#1 SMP 2011-12-13 18:27:38 +0100 x86_64 x86_64 x86_64 GNU/Linux).
The kvm version is:

falcon:~ # rpm -qa | grep kvm
kvm-0.12.5-1.8.1.x86_64

Is this a known issue in this version of KVM, and should I simply 
upgrade (or replace the host with a CentOS 6.2 system)? Or is there a 
simple configuration change that can fix this?

Cheers
   Erik


* Re: Workload spikes on KVM host when doing IO on a guest...
  2012-05-20  0:55 Workload spikes on KVM host when doing IO on a guest Erik Brakkee
@ 2012-05-20 13:54 ` Avi Kivity
       [not found]   ` <4FB923BB.9070306@brakkee.org>
  0 siblings, 1 reply; 4+ messages in thread
From: Avi Kivity @ 2012-05-20 13:54 UTC (permalink / raw)
  To: Erik Brakkee; +Cc: kvm

On 05/20/2012 03:55 AM, Erik Brakkee wrote:
> Hi,
>
>
> I am seeing high workload spikes of approx. 15 when I do IO inside a
> KVM guest, for instance
>
>   dd if=/dev/zero bs=1G count=1 of=hog
>
> When I execute a similar command on the host to write a file on the
> same physical disk, the workload only goes to about 3.

This is not surprising.  Each I/O request executes in a thread.

>
> I am using virtio on the guest with cache mode none. Also, I am using
> the noop IO scheduler on the guest and the deadline IO scheduler on
> the host.
> The guest is allocated a logical volume from the host.
>

With logical volumes, you can use -drive ...,aio=native to avoid the
threads.  The load will disappear.
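For example, with the device path from your domain XML (the rest of the
command line omitted):

  qemu-kvm ... -drive file=/dev/bootdisks/sparrow,if=virtio,format=raw,cache=none,aio=native ...

In libvirt this corresponds to adding io='native' to the <driver> element
of the disk.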

> When I execute the dd command on the guest, it finishes almost
> instantaneously but when I execute it on the host I have to wait for
> approx 10 seconds. Specifically,
> on the guest I see a transfer speed of approx. 600 MB/s and on the
> host I get 75.9MB/s. The figure for the host is most reliable as this
> is close to what the hard disks can handle (WD enterprise class SATA
> hard disks).

try dd oflag=direct to force the data to disk.  No idea why the host
doesn't finish instantaneously.
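Concretely, something like this (same file name as in your test):

  dd if=/dev/zero bs=1G count=1 of=hog oflag=direct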

>
> What appears to be happening is that somehow it forwards all IO from
> the guest immediately to the host, just as if write back caching was
> used.

Write back caching is indeed used, since you did not specify oflag=direct.

> Is this some known issue in this version of KVM and should I simply
> upgrade (or replace the host with a centos 6.2 system). Or is there a
> simple configuration that can fix this?

Nothing is broken, so it doesn't need fixing.  The high load is not an
indication of anything.

-- 
error compiling committee.c: too many arguments to function



* Re: Workload spikes on KVM host when doing IO on a guest...
       [not found]       ` <4FB929DE.8000500@brakkee.org>
@ 2012-05-20 17:49         ` Avi Kivity
  2013-04-03 20:19           ` Erik Brakkee
  0 siblings, 1 reply; 4+ messages in thread
From: Avi Kivity @ 2012-05-20 17:49 UTC (permalink / raw)
  To: Erik Brakkee; +Cc: KVM list

On 05/20/2012 08:29 PM, Erik Brakkee wrote:
> Avi Kivity wrote:
>> On 05/20/2012 08:02 PM, Erik Brakkee wrote:
>>> [...]
>>> Thanks for this information. Unfortunately, io="native" in domain.xml
>>> is not supported by opensuse 11.3. It is supported in 12.1 so it
>>> appears that the version of KVM I have on the server is too old. I
>>> tried it on a system running the newer version and indeed, as you say
>>> the load disappears completely when using io="native".
>>>
>>> I am going to update the host now (probably to centos 6.2) to get rid
>>> of this problem.
>> To be clear: it's not a problem.  It's completely normal, and doesn't
>> affect anything.
> The only problem with it is that it leads to high workload spikes,
> which is normally a reason to have a good look at what is going on. In
> this case, the newer version of KVM should help eliminate these
> spikes, so that the next time I see a spike in the workload I know
> that I have to look into something.

Problem is, it doesn't mean anything important.  It's the count of
running threads plus the count of threads uninterruptibly waiting on a
mutex.  It's absolutely meaningless.
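Roughly, you can see what feeds that number yourself:

  cat /proc/loadavg
  ps -eo stat,comm | awk '$1 ~ /^R/ || $1 ~ /^D/' | sort | uniq -c

The first letter of the stat column is R for running and D for
uninterruptible sleep; those are the tasks the load average counts.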

> I noticed the issue after I started monitoring the server and all VMs
> using zabbix (www.zabbix.com) and made a graph showing the workload of
> the hosts and that of all guests. See below. Falcon is the host and
> sparrow is a continuous integration server which is creating an
> updated RPM repository and writing a lot of files.
>
>
>
> Still the whole area of workload is a bit confusing to me. Is the
> effect of native IO simply that some of the IO work is not being
> counted anymore as part of the workload because the work is no longer
> done in user space?

No, it no longer holds a mutex. Yet it does exactly the same thing. 
That's an indication that the counter is meaningless.

(If the counter doesn't drop on an idle machine, that usually indicates
trouble; but that's not the case here.)

-- 
error compiling committee.c: too many arguments to function



* Re: Workload spikes on KVM host when doing IO on a guest...
  2012-05-20 17:49         ` Avi Kivity
@ 2013-04-03 20:19           ` Erik Brakkee
  0 siblings, 0 replies; 4+ messages in thread
From: Erik Brakkee @ 2013-04-03 20:19 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list

Avi Kivity wrote:
> On 05/20/2012 08:29 PM, Erik Brakkee wrote:
>> Avi Kivity wrote:
>>> On 05/20/2012 08:02 PM, Erik Brakkee wrote:
>>>> [...]
>>>> Thanks for this information. Unfortunately, io="native" in domain.xml
>>>> is not supported by opensuse 11.3. It is supported in 12.1 so it
>>>> appears that the version of KVM I have on the server is too old. I
>>>> tried it on a system running the newer version and indeed, as you say
>>>> the load disappears completely when using io="native".
>>>>
>>>> I am going to update the host now (probably to centos 6.2) to get rid
>>>> of this problem.
>>> To be clear: it's not a problem.  It's completely normal, and doesn't
>>> affect anything.
>> The only problem with it is that it leads to high workload spikes,
>> which is normally a reason to have a good look at what is going on. In
>> this case, the newer version of KVM should help eliminate these
>> spikes, so that the next time I see a spike in the workload I know
>> that I have to look into something.
In the meantime I have migrated the host machine to CentOS 6.2 and 
later 6.3. There I could use the aio=native option together with 
cache=none and I saw a significant drop in the workload while doing a 
lot of IO on the guest.
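For reference, the disk driver line in the domain XML then looked roughly 
like this:

    [root@falcon ~]# virsh dumpxml sparrow | grep 'driver name'
        <driver name='qemu' type='raw' cache='none' io='native'/>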

However, after upgrading to CentOS 6.4, I now see huge workload spikes 
again, even though the virtual machine configuration is the same. Also, I 
am seeing high swap usage on the host machine. It almost looks as if the 
aio=native option is being ignored, or could it be a different problem?

Here is one of the processes:

qemu      3026  4.6 17.9 4751536 4419208 ?     Sl   Mar27 136:21 /usr/libexec/qemu-kvm
    -name sparrow -S -M rhel6.2.0 -enable-kvm -m 4096
    -smp 2,sockets=2,cores=1,threads=1 -uuid 1389eb3f-8f26-686e-93c4-21267a66ec52
    -nodefconfig -nodefaults
    -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/sparrow.monitor,server,nowait
    -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
    -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
    -drive file=/dev/bootdisks/sparrow,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
    -drive file=/dev/sparrow/disk,if=none,id=drive-virtio-disk1,format=raw,cache=none,aio=native
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk1,id=virtio-disk1
    -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=26
    -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:01:01:03,bus=pci.0,addr=0x5
    -netdev tap,fd=27,id=hostnet1,vhost=on,vhostfd=28
    -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:01:01:04,bus=pci.0,addr=0x7
    -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
    -vnc 127.0.0.1:2 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6

I can also see that the swap space is exclusively used by qemu-kvm:

    [root@falcon bin]# ./examineswap
    ...
    Overall swap used: 1264924 kB
    ========================================
    kB      pid     name
    ========================================
    575092  3026    qemu-kvm
    391468  2882    qemu-kvm
    298364  2953    qemu-kvm


(The examineswap script itself is included at the end of this mail.)
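As a quick cross-check on a single process (pid 3026 from the listing 
above), summing the Swap entries in its smaps directly should give the 
same figure, since that is exactly what the script does:

    [root@falcon bin]# awk '/^Swap:/ { sum += $2 } END { print sum " kB" }' /proc/3026/smaps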

This snapshot of the swap usage was taken shortly after enabling swap; 
swap usage keeps growing until the entire swap space fills up, even 
though there is still a lot of memory free:

    [root@falcon bin]# free
                 total       used       free     shared    buffers     cached
    Mem:      24662972   24460920     202052          0   10904048      40712
    -/+ buffers/cache:   13516160   11146812
    Swap:      2097144    1508516     588628

Also, I have set the swappiness to 0 but it does not help:

    [root@falcon bin]# sysctl vm.swappiness
    vm.swappiness = 0
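For reference, I set it at runtime and made it persistent roughly like this:

    [root@falcon bin]# sysctl -w vm.swappiness=0
    [root@falcon bin]# echo 'vm.swappiness = 0' >> /etc/sysctl.conf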


Details on the OS and KVM:

    [root@falcon ~]# uname -a
    Linux falcon.fritz.box 2.6.32-358.2.1.el6.x86_64 #1 SMP Wed Mar 13
    00:26:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
    [root@falcon ~]# cat /etc/issue
    CentOS release 6.4 (Final)
    Kernel \r on an \m


    [root@falcon ~]# rpm -qa | grep kvm
    qemu-kvm-0.12.1.2-2.355.0.1.el6.centos.2.x86_64



I am also getting a workload on the host of approximately 20 now, whereas 
the workload on the VM doing the IO is just 6. What I am doing at this 
time is creating a backup over iSCSI, with a target on the physical host 
and an initiator on the VM. The VM has created snapshot logical volumes 
and is copying data to the iSCSI target. More specifically, I am using 
this script that I developed myself: http://wamblee.org/snapshot.html
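In outline the backup does something like this (the volume and device names 
below are made up; the real steps are in the linked script):

    # snapshot the live logical volume on the VM
    lvcreate --snapshot --size 5G --name backup_snap /dev/vg_guest/data
    # copy the snapshot to the iSCSI disk exported by the host (here /dev/sdc)
    dd if=/dev/vg_guest/backup_snap of=/dev/sdc bs=1M
    # drop the snapshot afterwards
    lvremove -f /dev/vg_guest/backup_snap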

Do you have any idea what this could be?

Cheers
   Erik

PS. The examineswap script I downloaded:
#!/bin/bash

     # find-out-what-is-using-your-swap.sh
     # -- Get current swap usage for all running processes
     # --
     # -- rev.0.3, 2012-09-03, Jan Smid          - alignment and indentation, sorting
     # -- rev.0.2, 2012-08-09, Mikko Rantalainen - pipe the output to "sort -nk3" to get sorted output
     # -- rev.0.1, 2011-05-27, Erik Ljungstrom   - initial version


SCRIPT_NAME=`basename $0`;
SORT="kb";                 # {pid|kB|name} as first parameter, [default: kb]
[ "$1" != "" ] && { SORT="$1"; }

[ ! -x `which mktemp` ] && { echo "ERROR: mktemp is not available!"; exit; }
MKTEMP=`which mktemp`;
TMP=`${MKTEMP} -d`;
[ ! -d "${TMP}" ] && { echo "ERROR: unable to create temp dir!"; exit; }

 >${TMP}/${SCRIPT_NAME}.pid;
 >${TMP}/${SCRIPT_NAME}.kb;
 >${TMP}/${SCRIPT_NAME}.name;

SUM=0;
OVERALL=0;
     echo "${OVERALL}" > ${TMP}/${SCRIPT_NAME}.overal;

for DIR in `find /proc/ -maxdepth 1 -type d -regex "^/proc/[0-9]+"`;
do
     PID=`echo $DIR | cut -d / -f 3`
     PROGNAME=`ps -p $PID -o comm --no-headers`

     for SWAP in `grep Swap $DIR/smaps 2>/dev/null| awk '{ print $2 }'`
     do
         let SUM=$SUM+$SWAP
     done

     if (( $SUM > 0 ));
     then
         echo -n ".";
         echo -e "${PID}\t${SUM}\t${PROGNAME}" >> ${TMP}/${SCRIPT_NAME}.pid;
         echo -e "${SUM}\t${PID}\t${PROGNAME}" >> ${TMP}/${SCRIPT_NAME}.kb;
         echo -e "${PROGNAME}\t${SUM}\t${PID}" >> ${TMP}/${SCRIPT_NAME}.name;
     fi
     let OVERALL=$OVERALL+$SUM
     SUM=0
done
echo "${OVERALL}" > ${TMP}/${SCRIPT_NAME}.overal;
echo;
echo "Overall swap used: ${OVERALL} kB";
echo "========================================";
case "${SORT}" in
     name )
         echo -e "name\tkB\tpid";
         echo "========================================";
         cat ${TMP}/${SCRIPT_NAME}.name|sort -r;
         ;;

     kb )
         echo -e "kB\tpid\tname";
         echo "========================================";
         cat ${TMP}/${SCRIPT_NAME}.kb|sort -rh;
         ;;

     pid | * )
         echo -e "pid\tkB\tname";
         echo "========================================";
         cat ${TMP}/${SCRIPT_NAME}.pid|sort -rh;
         ;;
esac
rm -fR "${TMP}/";
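It accepts an optional sort key (pid, kb or name); for example, to sort the 
listing by process name instead of by swap size:

    [root@falcon bin]# ./examineswap name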



