From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4CF6A540.9050608@redhat.com>
Date: Wed, 01 Dec 2010 14:42:56 -0500
From: Rik van Riel
Subject: [Qemu-devel] Re: [PATCH] qemu-kvm: response to SIGUSR1 to start/stop a VCPU (v2)
To: Peter Zijlstra
Cc: kvm@vger.kernel.org, Mike Galbraith, qemu-devel@nongnu.org, vatsa@linux.vnet.ibm.com, Chris Wright, Anthony Liguori, Avi Kivity
In-Reply-To: <1291232136.32004.1964.camel@laptop>

On 12/01/2010 02:35 PM, Peter Zijlstra wrote:
> On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
>> Even if we equalized the amount of CPU time each VCPU
>> ends up getting across some time interval, that is no
>> guarantee they get useful work done, or that the time
>> gets fairly divided to _user processes_ running inside
>> the guest.
>
> Right, and Jeremy was working on making the guest load-balancer aware of
> that so the user-space should get fairly scheduled on service (of
> course, that's assuming you run a linux guest with that logic in).

At that point, you might not need the host side balancing any
more, since the guest can move around processes internally
(if needed).

-- 
All rights reversed