From mboxrd@z Thu Jan  1 00:00:00 1970
From: Rik van Riel
Subject: Re: [PATCH] qemu-kvm: response to SIGUSR1 to start/stop a VCPU (v2)
Date: Wed, 01 Dec 2010 14:24:20 -0500
Message-ID: <4CF6A0E4.1050108@redhat.com>
References: <1290530963-3448-1-git-send-email-aliguori@us.ibm.com>
 <4CECCA39.4060702@redhat.com> <4CED1A23.9030607@linux.vnet.ibm.com>
 <4CED1FD3.1000801@redhat.com> <20101201123742.GA3780@linux.vnet.ibm.com>
 <4CF6460C.5070604@redhat.com> <20101201161221.GA8073@linux.vnet.ibm.com>
 <1291220718.32004.1696.camel@laptop> <20101201171758.GA8514@sequoia.sous-sol.org>
 <1291224176.32004.1763.camel@laptop> <4CF6854C.4020500@redhat.com>
 <1291230476.32004.1922.camel@laptop>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Chris Wright, vatsa@linux.vnet.ibm.com, Avi Kivity, Anthony Liguori,
 qemu-devel@nongnu.org, kvm@vger.kernel.org, Ingo Molnar, Mike Galbraith
To: Peter Zijlstra
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:44586 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1755642Ab0LATYx
 (ORCPT ); Wed, 1 Dec 2010 14:24:53 -0500
In-Reply-To: <1291230476.32004.1922.camel@laptop>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
> On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
>> On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
>> The pause loop exiting & directed yield patches I am working on
>> preserve inter-vcpu fairness by round robining among the vcpus
>> inside one KVM guest.
>
> I don't necessarily think that's enough.
>
> Suppose you've got 4 vcpus, one is holding a lock and 3 are spinning.
> They'll end up all three donating some time to the 4th.
>
> The only way to make that fair again is if due to future contention the
> 4th cpu donates an equal amount of time back to the resp. cpus it got
> time from.  Guest lock patterns and host scheduling don't provide this
> guarantee.

You have no guarantees when running virtualized: guest CPU time can be
taken away by another guest just as easily as by another VCPU.

Even if we equalized the amount of CPU time each VCPU gets over some
time interval, that is no guarantee they get useful work done, or that
the time is divided fairly among the _user processes_ running inside
the guest.

A VCPU could be running something lock-happy when it temporarily gives
up the CPU, and get the extra CPU time back while running something
userspace-intensive.  In between, it may well have switched to another
task (allowing that task to get more CPU time).

I'm not convinced the kind of fairness you suggest is possible or
useful.

-- 
All rights reversed
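
[Editor's illustration, not part of the thread: a minimal user-space sketch
of the round-robin directed yield idea quoted above, where a vcpu that takes
a pause-loop exit hands its timeslice to the next runnable vcpu after the one
boosted last time.  All names here (struct guest, pick_yield_target,
last_boosted) are hypothetical, not the actual KVM patch code.]

#include <stdio.h>

#define NR_VCPUS 4

struct vcpu {
	int id;
	int runnable;		/* 1 if the vcpu has work to do */
};

struct guest {
	struct vcpu vcpus[NR_VCPUS];
	int last_boosted;	/* index of the vcpu boosted last time */
};

/*
 * Pick a yield target for 'spinner', starting the search just past the
 * vcpu that received the previous boost.  Rotating the starting point
 * is what spreads the donated time round-robin across the guest's vcpus.
 */
static struct vcpu *pick_yield_target(struct guest *g, int spinner)
{
	int i;

	for (i = 1; i <= NR_VCPUS; i++) {
		int idx = (g->last_boosted + i) % NR_VCPUS;

		if (idx == spinner || !g->vcpus[idx].runnable)
			continue;

		g->last_boosted = idx;
		return &g->vcpus[idx];
	}
	return NULL;		/* nobody else to yield to */
}

int main(void)
{
	struct guest g = {
		.vcpus = { {0, 1}, {1, 1}, {2, 1}, {3, 1} },
		.last_boosted = 0,
	};
	int spin;

	/* vcpus 1, 2 and 3 spin on a lock held by vcpu 0 */
	for (spin = 1; spin <= 3; spin++) {
		struct vcpu *target = pick_yield_target(&g, spin);

		if (target)
			printf("vcpu %d yields to vcpu %d\n", spin, target->id);
	}
	return 0;
}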