From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anthony Liguori
Subject: Re: KVM performance vs. Xen
Date: Thu, 30 Apr 2009 08:45:08 -0500
Message-ID: <49F9AB64.20506@codemonkey.ws>
References: <49F8672E.5080507@linux.vnet.ibm.com> <49F967AE.4040905@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Andrew Theurer, kvm-devel
To: Avi Kivity
Return-path:
Received: from qw-out-2122.google.com ([74.125.92.25]:39597 "EHLO qw-out-2122.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1762098AbZD3NpN (ORCPT); Thu, 30 Apr 2009 09:45:13 -0400
Received: by qw-out-2122.google.com with SMTP id 5so1541821qwd.37 for; Thu, 30 Apr 2009 06:45:12 -0700 (PDT)
In-Reply-To: <49F967AE.4040905@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

Avi Kivity wrote:
>>
>> 1) I'm seeing about 2.3% in scheduler functions [that I recognize].
>> Does that seem a bit excessive?
>
> Yes, it is. If there is a lot of I/O, this might be due to the thread
> pool used for I/O.

This is why I wrote the linux-aio patch. It only reduced CPU consumption
by about 2%, although I'm not sure whether that's absolute or relative.
Andrew?

>> 2) cpu_physical_memory_rw due to not using preadv/pwritev?
>
> I think both virtio-net and virtio-blk use memcpy().

With the latest linux-2.6 and a development snapshot of glibc, virtio-blk
no longer uses memcpy(), but virtio-net still does on the receive path
(though not on transmit).

Regards,

Anthony Liguori