From: "Jason J. Herne" <jjherne@linux.vnet.ibm.com>
Date: Wed, 15 Jul 2015 08:40:54 -0400
Subject: Re: [Qemu-devel] [PATCH v4 1/5] cpu: Provide vcpu throttling interface
To: Paolo Bonzini, afaerber@suse.de, amit.shah@redhat.com, dgilbert@redhat.com, borntraeger@de.ibm.com, quintela@redhat.com, qemu-devel@nongnu.org
Reply-To: jjherne@linux.vnet.ibm.com
Message-ID: <55A654D6.5000906@linux.vnet.ibm.com>
In-Reply-To: <55A3D5D8.7070902@redhat.com>
References: <1435855010-30882-1-git-send-email-jjherne@linux.vnet.ibm.com> <1435855010-30882-2-git-send-email-jjherne@linux.vnet.ibm.com> <55956A2E.4020806@redhat.com> <55A3CEAF.6030504@linux.vnet.ibm.com> <55A3D5D8.7070902@redhat.com>

On 07/13/2015 11:14 AM, Paolo Bonzini wrote:
>
> On 13/07/2015 16:43, Jason J. Herne wrote:
>>>>
>>>> +    CPU_FOREACH(cpu) {
>>>> +        async_run_on_cpu(cpu, cpu_throttle_thread, NULL);
>>>> +    }
>>>> +
>>>> +    timer_mod(throttle_timer, qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL_RT) +
>>>> +                              CPU_THROTTLE_TIMESLICE);
>>>> +}
>>>
>>> This could cause callbacks to pile up I think. David, do you have any
>>> idea how to fix it?
>>
>> I'm not sure how callbacks can pile up here. If the vcpus are running
>> then their threads will execute the callbacks. If they are not running
>> then the use of QEMU_CLOCK_VIRTUAL_RT will prevent the callbacks from
>> stacking because the timer is not running, right?
>
> Couldn't the iothread starve the VCPUs? They need to take the iothread
> lock in order to process the callbacks.
>

Yes, I can see the possibility here. I'm not sure what to do about it
though. Maybe this is wishful thinking :) But if the iothread lock cannot
be acquired then the cpu cannot run, thereby preventing the guest from
changing a ton of pages. This will have the effect of indirectly
throttling the guest, which will allow us to advance to the non-live phase
of migration rather quickly. And again, if we are starving on the iothread
lock then the guest vcpus are not executing and QEMU_CLOCK_VIRTUAL_RT is
not ticking, right?
This will also limit the number of stacked callbacks to a very low number.
Unless I'm missing something?

--
-- Jason J. Herne (jjherne@linux.vnet.ibm.com)
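
(Not part of the patch under discussion, just a sketch of one way the
pile-up could be bounded explicitly: a per-vCPU "already queued" flag, so
the timer never queues a second callback for a vCPU whose previous one has
not run yet. The standalone C model below uses hypothetical names,
throttle_pending, throttle_work() and throttle_timer_cb(), as stand-ins for
a CPUState field, cpu_throttle_thread() and the real timer callback; it is
not QEMU code.)

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_VCPUS 2

typedef struct {
    int index;
    atomic_bool throttle_pending;   /* callback queued but not yet run */
} VCpu;

static VCpu vcpus[NR_VCPUS];

/* Stand-in for cpu_throttle_thread(): would run in the vCPU thread. */
static void throttle_work(VCpu *cpu)
{
    /* ... sleep for part of the timeslice here ... */
    atomic_store(&cpu->throttle_pending, false);
    printf("vcpu %d throttled\n", cpu->index);
}

/* Stand-in for the timer callback fired every CPU_THROTTLE_TIMESLICE. */
static void throttle_timer_cb(void)
{
    for (int i = 0; i < NR_VCPUS; i++) {
        /* Queue at most one outstanding callback per vCPU: if the flag
         * was already set, the previous callback has not run, so skip. */
        if (!atomic_exchange(&vcpus[i].throttle_pending, true)) {
            /* In QEMU this is where async_run_on_cpu() would be called;
             * in this model the work simply runs inline. */
            throttle_work(&vcpus[i]);
        }
    }
    /* In QEMU: timer_mod(throttle_timer, now + CPU_THROTTLE_TIMESLICE); */
}

int main(void)
{
    for (int i = 0; i < NR_VCPUS; i++) {
        vcpus[i].index = i;
        atomic_init(&vcpus[i].throttle_pending, false);
    }
    /* Fire the "timer" a few times; no vCPU ever accumulates more than
     * one outstanding callback, even if its thread were starved. */
    for (int t = 0; t < 3; t++) {
        throttle_timer_cb();
    }
    return 0;
}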