From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4F314F87.60807@codemonkey.ws>
Date: Tue, 07 Feb 2012 10:21:27 -0600
From: Anthony Liguori
In-Reply-To: <4F314EEE.8080401@siemens.com>
Subject: Re: [Qemu-devel] [RFC] Next gen kvm api
To: Jan Kiszka
Cc: linux-kernel, Rob Earhart, Avi Kivity, KVM list, qemu-devel

On 02/07/2012 10:18 AM, Jan Kiszka wrote:
> On 2012-02-07 17:02, Avi Kivity wrote:
>> On 02/07/2012 05:17 PM, Anthony Liguori wrote:
>>> On 02/07/2012 06:03 AM, Avi Kivity wrote:
>>>> On 02/06/2012 09:11 PM, Anthony Liguori wrote:
>>>>>
>>>>> I'm not so sure.  ioeventfds and a future mmio-over-socketpair have
>>>>> to put the kthread to sleep while it waits for the other end to
>>>>> process it.  This is effectively equivalent to a heavy weight exit.
>>>>> The difference in cost is dropping to userspace, which is really
>>>>> negligible these days (< 100 cycles).
>>>>
>>>> On what machine did you measure these wonderful numbers?
>>>
>>> A syscall is what I mean by "dropping to userspace", not the cost of a
>>> heavy weight exit.
>>
>> Ah.  But then ioeventfd has that as well, unless the other end is in
>> the kernel too.
>>
>>> I think a heavy weight exit is still around a few thousand cycles.
>>>
>>> Any Nehalem-class or better processor should have a syscall cost of
>>> around that, unless I'm wildly mistaken.
>>
>> That's what I remember too.
>>
>>>> But I agree a heavyweight exit is probably faster than a double
>>>> context switch on a remote core.
>>>
>>> I meant, if you already need to take a heavyweight exit (and you do,
>>> to schedule something else on the core), then the only additional cost
>>> is taking a syscall return to userspace *first* before scheduling
>>> another process.  That overhead is pretty low.
>>
>> Yeah.
>
> Isn't there another level in between just scheduling and full syscall
> return if the user return notifier has some real work to do?

Depends on whether you're scheduling a kthread or a userspace process, no?

If you're eventually going to end up in userspace, you have to do the
full heavyweight exit.  If you're scheduling to a kthread, it's better
to do the type of trickery that ioeventfd does and just turn it into a
function call.

Regards,

Anthony Liguori