From: Martin Mailand
Date: Wed, 07 Mar 2012 11:39:58 +0100
To: Stefan Hajnoczi
Cc: reeted@shiftmail.org, qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: Re: [Qemu-devel] virtio-blk performance regression and qemu-kvm
Message-ID: <4F573AFE.40204@tuxadero.com>
References: <20120210143639.GA17883@gmail.com> <4F54E620.8060400@tuxadero.com> <4F54ED84.7030601@tuxadero.com>

On 06.03.2012 13:59, Stefan Hajnoczi wrote:
> Yes, the reason why that would be interesting is because it allows us
> to put the performance gain with master+"performance" into
> perspective. We could see how much of a change we get.
>
> Does the CPU governor also affect the result when you benchmark with
> real disks instead of ramdisk? I can see how the governor would
> affect ramdisk, but would expect real disk I/O to be impacted much
> less.

Hi,
here are my results.

I tested with:

fio -name iops -rw=read -size=1G -iodepth 1 -filename /dev/vdb \
    -ioengine libaio -direct 1 -bs 4k

The qemu command was:

qemu-system-x86_64 --enable-kvm -m 512 -boot c \
    -drive file=/home/martin/vmware/bisect_kvm/hda.img,cache=none,if=virtio \
    -drive file=/dev/ram0,cache=none,if=virtio \
    -drive file=/dev/sda2,cache=none,if=virtio

Host kernel: 3.3.0+rc4
Guest kernel: 3.0.0-16-generic (Ubuntu kernel)

On the host I use a raw partition (sda2) for the disk test; in qemu I write
with fio to /dev/vdc, so there is no filesystem involved.

The host disk can do at most ~13K iops; in qemu I get at most ~6.5K iops,
which is roughly 50% overhead. All the tests were done with 4k reads, so I
think we are mostly latency bound.

-martin

log:

** v0.14.1 ondemand **
ram   bw=61038KB/s iops=15259   bw=66190KB/s iops=16547
disk  bw=18105KB/s iops=4526    bw=17625KB/s iops=4406

** v0.14.1 performance **
ram   bw=72356KB/s iops=18088   bw=72390KB/s iops=18097
disk  bw=27886KB/s iops=6971    bw=27915KB/s iops=6978

** master ondemand **
ram   bw=24833KB/s iops=6208    bw=27275KB/s iops=6818
disk  bw=14980KB/s iops=3745    bw=14881KB/s iops=3720

** master performance **
ram   bw=64318KB/s iops=16079   bw=63523KB/s iops=15880
disk  bw=27043KB/s iops=6760    bw=27211KB/s iops=6802

Host disk test (SanDisk SSD U100)

host disk ondemand     bw=48823KB/s iops=12205   bw=49086KB/s iops=12271
host disk performance  bw=55156KB/s iops=13789   bw=54980KB/s iops=13744
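
For completeness, a minimal sketch of how the two governor settings and the two
fio runs above could be reproduced; the sysfs governor path and the long-option
spelling of the fio flags are assumptions on my side, the actual commands used
for the numbers are the ones quoted earlier in this mail:

    # on the host: switch all CPUs between the two governors tested above
    # (sysfs path assumed; same loop with "ondemand" for the other runs)
    for c in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$c"
    done

    # inside the guest: ramdisk run (/dev/vdb) and raw-partition run (/dev/vdc)
    fio --name=iops --rw=read --size=1G --iodepth=1 --ioengine=libaio \
        --direct=1 --bs=4k --filename=/dev/vdb
    fio --name=iops --rw=read --size=1G --iodepth=1 --ioengine=libaio \
        --direct=1 --bs=4k --filename=/dev/vdc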