From: Paolo Bonzini
Subject: Re: virtio-blk performance regression and qemu-kvm
Date: Wed, 07 Mar 2012 12:21:45 +0100
Message-ID: <4F5744C9.8010308@redhat.com>
In-Reply-To: <4F573AFE.40204@tuxadero.com>
To: Martin Mailand
Cc: Stefan Hajnoczi, reeted@shiftmail.org, qemu-devel@nongnu.org, kvm@vger.kernel.org

On 07/03/2012 11:39, Martin Mailand wrote:
> The host disk can do at most 13K iops; in qemu I get at most 6.5K iops,
> so that's roughly 50% overhead. All the tests were with 4k reads, so I
> think we are mostly latency bound.

For latency tests, running without ioeventfd could give slightly better
results (-global virtio-blk-pci.ioeventfd=off).

Paolo
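[Editor's note: a minimal sketch of how the suggested -global property might be applied to a guest's command line. The image path, memory size, and drive options are placeholders, not taken from this thread; only the virtio-blk-pci.ioeventfd=off property comes from the message above.]

```shell
# Hypothetical qemu-kvm invocation; disk.img, -m, and -smp are placeholder values.
qemu-kvm -m 2048 -smp 2 \
    -drive file=disk.img,if=virtio,cache=none,aio=native \
    -global virtio-blk-pci.ioeventfd=off
```

With ioeventfd disabled, virtqueue notifications are handled synchronously in the vCPU thread instead of being handed off to an iothread, which can shave a little latency off small-block workloads at the cost of holding up the vCPU during I/O submission.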