From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rik van Riel
Subject: Re: [RFC -v5 PATCH 2/4] sched: Add yield_to(task, preempt) functionality.
Date: Fri, 14 Jan 2011 13:29:52 -0500
Message-ID: <4D309620.60507@redhat.com>
References: <20110114030209.53765a0a@annuminas.surriel.com> <20110114030357.03c3060a@annuminas.surriel.com> <20110114174741.GB28632@linux.vnet.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Avi Kivity, Peter Zijlstra, Mike Galbraith, Chris Wright, ttracy@redhat.com, dshaks@redhat.com
To: vatsa@linux.vnet.ibm.com
Return-path:
In-Reply-To: <20110114174741.GB28632@linux.vnet.ibm.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On 01/14/2011 12:47 PM, Srivatsa Vaddagiri wrote:
> If I recall correctly, one of the motivations for yield_to_task (rather
> than a simple yield) was to avoid leaking bandwidth to other guests,
> i.e. we don't want the remaining timeslice of the spinning vcpu to be
> given away to other guests, but rather to donate it to another
> (lock-holding) vcpu and thus retain the bandwidth allocated to the guest.

No, that was not the motivation. The motivation was to try to get the
lock holder to run soon, so it can release the lock. What you describe
is merely one of the mechanisms considered for meeting that objective.

> I am not sure whether we are meeting that objective via this patch, as
> the lock-spinning vcpu would simply yield after setting the next buddy
> to the preferred vcpu on the target pcpu, thereby leaking some amount
> of bandwidth on the pcpu where it is spinning.

Have you read the patch?

--
All rights reversed