From: Ross Becker
Subject: Re: [Qemu-devel] virtio-blk performance regression and qemu-kvm
Date: Thu, 08 Mar 2012 15:56:03 -0800

I just joined in order to chime in here - I'm seeing the exact same thing as Reeted. I've got a machine with a storage subsystem capable of 400k IOPS, and when I hand the storage up to VMs, each VM seems to top out at around 15-20k IOPS.

I've managed to reach 115k IOPS by creating 8 VMs, pinning their vCPUs to spread them across physical cores, and running IO in all of them simultaneously, but I'm unable to get a single VM past 20k IOPS.

I'm using qemu-kvm 0.12.1.2, as distributed in RHEL 6.2. The hardware is a Dell R910 chassis with 4 Intel E7 processors. I'm presenting LVM logical volume block devices directly to the VMs as disks: format raw, virtio driver, write caching none, IO mode native. Each VM has 4 vCPUs. I'm using fio for my testing.

The interesting thing is that throughput is actually fantastic - I'm able to push 6.3 GB/sec using 256k blocks - but IOPS at a 4k block size are poor.

I am happy to provide any config details, or try any tests suggested. The disk definition, fio job, and vCPU pinning I'm using are pasted below for reference.

--Ross
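
The libvirt disk definition for each guest looks roughly like the following; the LV path and target device name are illustrative, but the raw/virtio/cache=none/io=native settings are the ones described above:

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native'/>
    <source dev='/dev/vg0/lv_guest1'/>
    <target dev='vdb' bus='virtio'/>
  </disk>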
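
The 4k test is a fio job along these lines; the exact queue depth and job count here are approximate, and /dev/vdb stands in for the virtio disk inside the guest:

  [global]
  ioengine=libaio
  direct=1
  rw=randread
  bs=4k
  iodepth=32
  numjobs=4
  runtime=60
  group_reporting

  [virtio-disk]
  filename=/dev/vdb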
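
The pinning for the 8-VM run was done per-guest with libvirt's cputune stanza, along these lines (CPU numbers illustrative - each guest's 4 vCPUs were mapped onto a distinct set of physical cores):

  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>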