From: Peter Zijlstra
Date: Wed, 01 Dec 2010 20:35:36 +0100
Subject: [Qemu-devel] Re: [PATCH] qemu-kvm: response to SIGUSR1 to start/stop a VCPU (v2)
To: Rik van Riel
Cc: kvm@vger.kernel.org, Mike Galbraith, qemu-devel@nongnu.org, vatsa@linux.vnet.ibm.com, Chris Wright, Anthony Liguori, Avi Kivity

On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
> On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
> >> On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
>
> >> The pause loop exiting & directed yield patches I am working on
> >> preserve inter-vcpu fairness by round robining among the vcpus
> >> inside one KVM guest.
> >
> > I don't necessarily think that's enough.
> >
> > Suppose you've got 4 vcpus, one is holding a lock and 3 are spinning.
> > They'll end up all three donating some time to the 4th.
> >
> > The only way to make that fair again is if due to future contention the
> > 4th cpu donates an equal amount of time back to the resp. cpus it got
> > time from. Guest lock patterns and host scheduling don't provide this
> > guarantee.
>
> You have no guarantees when running virtualized, guest
> CPU time could be taken away by another guest just as
> easily as by another VCPU.
>
> Even if we equalized the amount of CPU time each VCPU
> ends up getting across some time interval, that is no
> guarantee they get useful work done, or that the time
> gets fairly divided to _user processes_ running inside
> the guest.

Right, and Jeremy was working on making the guest load-balancer aware of
that, so user-space should still get fairly scheduled (of course, that's
assuming you run a Linux guest with that logic in).

> The VCPU could be running something lock-happy when
> it temporarily gives up the CPU, and get extra CPU time
> back when running something userspace intensive.
>
> In-between, it may well have scheduled to another task
> (allowing it to get more CPU time).
>
> I'm not convinced the kind of fairness you suggest is
> possible or useful.
Well, physical cpus get equal service, but yeah, time lost to contention
could probably be considered equivalent to non-equal service in the vcpu
case. Anyway, don't take it as a critique per se; your approach sounds
like the sanest proposal yet.
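
To make the donation concern above concrete, here is a rough sketch of the
sort of per-vcpu accounting that would have to balance out for the 4-vcpu
example to end up fair. Everything in it is hypothetical and purely
illustrative; none of these names exist in KVM or the scheduler.

/* Hypothetical per-vcpu "donation ledger"; illustrative only. */

#define NR_VCPUS 4

struct vcpu_fair {
	long donated_to[NR_VCPUS];	/* time this vcpu gave to each peer */
};

static struct vcpu_fair vcpus[NR_VCPUS];

/*
 * Called when @src yields @delta of its timeslice to @dst, e.g. a
 * spinner handing time to the lock holder after a pause-loop exit.
 */
static void account_donation(int src, int dst, long delta)
{
	vcpus[src].donated_to[dst] += delta;
	vcpus[dst].donated_to[src] -= delta;
}

/*
 * Fairness in the sense discussed above would require every entry to
 * drift back toward zero over time; guest lock patterns and host
 * scheduling give no such guarantee.
 */
static int is_fair(int vcpu)
{
	int i;

	for (i = 0; i < NR_VCPUS; i++)
		if (vcpus[vcpu].donated_to[i] != 0)
			return 0;
	return 1;
}

Unless every entry in such a ledger eventually returns to zero, the three
spinners end up permanently subsidising the lock holder, which is exactly
the imbalance being argued about here.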