From: Avi Kivity
Date: Thu, 02 Dec 2010 11:17:52 +0200
Message-ID: <4CF76440.30500@redhat.com>
In-Reply-To: <1291230582.32004.1927.camel@laptop>
Subject: [Qemu-devel] Re: [PATCH] qemu-kvm: response to SIGUSR1 to start/stop a VCPU (v2)
To: Peter Zijlstra
Cc: kvm@vger.kernel.org, Mike Galbraith, vatsa@linux.vnet.ibm.com, qemu-devel@nongnu.org, Chris Wright, Anthony Liguori

On 12/01/2010 09:09 PM, Peter Zijlstra wrote:
> > We are dealing with just one task here (the task that is yielding).
> > After recording how much timeslice we are "giving up" in
> > current->donate_time (donate_time is perhaps not the right name to
> > use), we adjust the yielding task's vruntime as per existing logic
> > (for example, to make it go to the back of the runqueue). When the
> > yielding task gets to run again, the lock is hopefully available for
> > it to grab, and we let it run longer than the default sched_slice()
> > to compensate for the time it gave up previously to other threads in
> > the same runqueue. This ensures that yielding upon lock contention
> > does not leak bandwidth in favor of other guests. Again, I don't
> > know how much of a fairness issue this is in practice, so unless we
> > see some numbers I'd prefer sticking to plain yield() upon lock
> > contention (for unmodified guests, that is).
>
> No, that won't work.  Once you've given up time you cannot add it back
> without destroying fairness.
>
> You can limit the unfairness by limiting the amount of feedback, but I
> really dislike such 'yield' semantics.

Agreed.  What I'd like to see in directed yield is donating exactly the
amount of vruntime that's needed to make the target thread run.  The
donating thread won't get its vruntime back, unless the other thread
hits contention itself and does a directed yield back.

So even if your lock is ping-ponged around, the guest doesn't lose
vruntime compared to other processes on the host.

-- 
error compiling committee.c: too many arguments to function