From: Anthony Liguori
Date: Tue, 07 Feb 2012 10:19:56 -0600
Subject: Re: [Qemu-devel] [RFC] Next gen kvm api
To: Avi Kivity
Cc: Rob Earhart, linux-kernel, KVM list, qemu-devel

On 02/07/2012 10:02 AM, Avi Kivity wrote:
> On 02/07/2012 05:17 PM, Anthony Liguori wrote:
>> On 02/07/2012 06:03 AM, Avi Kivity wrote:
>>> On 02/06/2012 09:11 PM, Anthony Liguori wrote:
>>>>
>>>> I'm not so sure.  ioeventfds and a future mmio-over-socketpair have
>>>> to put the kthread to sleep while it waits for the other end to
>>>> process it.  This is effectively equivalent to a heavy weight exit.
>>>> The difference in cost is dropping to userspace, which is really
>>>> negligible these days (< 100 cycles).
>>>
>>> On what machine did you measure these wonderful numbers?
>>
>> A syscall is what I mean by "dropping to userspace", not the cost of
>> a heavy weight exit.
>
> Ah.  But then ioeventfd has that as well, unless the other end is in
> the kernel too.

Yes, that was my point exactly :-)

ioeventfd/mmio-over-socketpair to a different thread is not faster than
a synchronous KVM_RUN + writing to an eventfd in userspace, modulo a
couple of cheap syscalls.  The exception is when the other end is in
the kernel and there are magic optimizations (as there are today with
ioeventfd).  See the sketch at the end of this mail.

Regards,

Anthony Liguori

>
>> I think a heavy weight exit is still around a few thousand cycles.
>>
>> Any Nehalem-class or better processor should have a syscall cost of
>> around that, unless I'm wildly mistaken.
>
> That's what I remember too.
>
>>> But I agree a heavyweight exit is probably faster than a double
>>> context switch on a remote core.
>>
>> I meant: if you already need to take a heavyweight exit (and you do,
>> to schedule something else on the core), then the only additional
>> cost is taking a syscall return to userspace *first* before
>> scheduling another process.  That overhead is pretty low.
>
> Yeah.
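
For concreteness, here is a minimal sketch of the two paths I'm
comparing.  This is my own illustration, not code from the tree:
vm_fd/vcpu_fd setup and error handling are omitted, and DOORBELL_ADDR
is a made-up guest-physical address.

#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Path 1: ioeventfd.  A guest write to DOORBELL_ADDR signals the
 * eventfd directly from the kernel (no exit to userspace); a worker
 * thread sleeping in read() on the eventfd is woken, possibly on a
 * remote core. */
static int register_doorbell(int vm_fd, uint64_t doorbell_addr)
{
    int efd = eventfd(0, EFD_CLOEXEC);
    struct kvm_ioeventfd ioev = {
        .addr = doorbell_addr,
        .len  = 4,
        .fd   = efd,
        /* no DATAMATCH flag: any 4-byte write to addr fires the fd */
    };
    ioctl(vm_fd, KVM_IOEVENTFD, &ioev);
    return efd;
}

/* Path 2: synchronous exit.  KVM_RUN returns KVM_EXIT_MMIO to the
 * vcpu thread, which kicks the worker by writing the eventfd itself.
 * The extra cost over path 1 is the return to userspace plus one
 * write() syscall -- the "couple of cheap syscalls" above. */
static void vcpu_loop(int vcpu_fd, struct kvm_run *run, int efd)
{
    for (;;) {
        ioctl(vcpu_fd, KVM_RUN, 0);
        if (run->exit_reason == KVM_EXIT_MMIO) {
            uint64_t one = 1;
            write(efd, &one, sizeof(one));  /* cheap kick from userspace */
        }
    }
}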