From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:44648) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1S5HkR-0001YK-Bl for qemu-devel@nongnu.org; Wed, 07 Mar 2012 09:21:48 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1S5HkK-0000TS-SV for qemu-devel@nongnu.org; Wed, 07 Mar 2012 09:21:46 -0500
Received: from blade4.isti.cnr.it ([194.119.192.20]:2311) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1S5HkK-0000TI-LU for qemu-devel@nongnu.org; Wed, 07 Mar 2012 09:21:40 -0500
Received: from [192.168.7.52] ([155.253.6.254]) by mx.isti.cnr.it (PMDF V6.5-x6 #31988) with ESMTPSA id <01OCU7KYDTX8KVORKI@mx.isti.cnr.it> for qemu-devel@nongnu.org; Wed, 07 Mar 2012 15:21:22 +0100 (MET)
Date: Wed, 07 Mar 2012 15:21:48 +0100
From: Reeted
In-reply-to: 
Message-id: <4F576EFC.4040205@shiftmail.org>
MIME-version: 1.0
Content-type: text/plain; format=flowed; charset=ISO-8859-1
Content-transfer-encoding: 7bit
References: <20120210143639.GA17883@gmail.com> <4F54E620.8060400@tuxadero.com> <4F54ED84.7030601@tuxadero.com> <4F568AAC.6060206@shiftmail.org>
Subject: Re: [Qemu-devel] virtio-blk performance regression and qemu-kvm
List-Id: 
To: Stefan Hajnoczi
Cc: Martin Mailand , Dongsu Park , kvm@vger.kernel.org, qemu-devel@nongnu.org

On 03/07/12 09:04, Stefan Hajnoczi wrote:
> On Tue, Mar 6, 2012 at 10:07 PM, Reeted wrote:
>> On 03/06/12 13:59, Stefan Hajnoczi wrote:
>>> BTW, I'll take the opportunity to say that 15.8 or 20.3 k IOPS are
>>> very low figures compared to what I'd instinctively expect from a
>>> paravirtualized block driver.
>>> There are now PCIe SSD cards that do 240 k IOPS (e.g. "OCZ RevoDrive
>>> 3 x2 max iops") which is 12-15 times higher, for something that has
>>> to go through a real driver and a real PCI-express bus, and can't
>>> use zero-copy techniques.
>>> The IOPS we can give to a VM is currently less than half that of a
>>> single SATA SSD drive (60 k IOPS or so, these days).
>>> That's why I consider this topic of virtio-blk performance very
>>> important. I hope there can be improvements in this sector...
>
> It depends on the benchmark configuration. virtio-blk is capable of
> doing 100,000s of iops, I've seen results. My guess is that you can
> do >100,000 read iops with virtio-blk on a good machine and stock
> qemu-kvm.

It's very difficult to configure, then. I also did benchmarks in the
past, and I can confirm Martin's and Dongsu's findings of about 15 k
IOPS with:

qemu-kvm 0.14.1, Intel Westmere CPU, virtio-blk (kernel 2.6.38 on the
guest, 3.0 on the host), fio, 4k random *reads* from the host page cache
(the backend LVM device was fully in cache on the host), writeback cache
setting, caches dropped on the guest prior to the benchmark (and
insufficient guest memory to cache a significant portion of the device).

If you can teach us how to reach 100 k IOPS, I think everyone would be
grateful :-)
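For reproducibility, here is a sketch of the kind of setup I mean. The
device paths, memory size, SMP count and queue depth below are
illustrative placeholders, not my exact configuration:

```
# Host: qemu-kvm with a virtio-blk disk backed by an LVM volume,
# cache=writeback so guest reads can be served from the host page cache.
qemu-kvm -m 1024 -smp 2 \
    -drive file=/dev/vg0/guestdisk,if=virtio,cache=writeback

# Host: warm the backing volume into the page cache before the run.
dd if=/dev/vg0/guestdisk of=/dev/null bs=1M

# Guest: drop the page cache, then run 4k random reads against the
# virtio disk (assumed here to appear as /dev/vdb in the guest).
echo 3 > /proc/sys/vm/drop_caches
fio --name=randread --filename=/dev/vdb --rw=randread --bs=4k \
    --ioengine=libaio --iodepth=32 --runtime=60 --time_based
```

With the guest cache dropped and too little guest memory to re-cache the
device, each read really has to cross virtio-blk and gets satisfied from
the host page cache.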