From: Paolo Bonzini
Date: Wed, 9 Sep 2015 14:11:52 +0200
Subject: Re: [Qemu-devel] [PATCH v7 1/5] cpu: Provide vcpu throttling interface
To: quintela@redhat.com
Cc: qemu-devel@nongnu.org, dgilbert@redhat.com, borntraeger@de.ibm.com,
 "Jason J. Herne", amit.shah@redhat.com, afaerber@suse.de
Message-ID: <55F02208.5070205@redhat.com>
In-Reply-To: <87a8sv6emo.fsf@neno.neno>

On 09/09/2015 13:01, Juan Quintela wrote:
> Paolo Bonzini wrote:
>> On 09/09/2015 12:41, Juan Quintela wrote:
>>>>> +    qemu_mutex_unlock_iothread();
>>>>> +    atomic_set(&cpu->throttle_thread_scheduled, 0);
>>>>> +    g_usleep(sleeptime_ns / 1000); /* Convert ns to us for usleep call */
>>>>> +    qemu_mutex_lock_iothread();
>>>
>>> Why is this thread safe?
>>>
>>> qemu_mutex_lock_iothread() is protecting (at least) cpu_work_first on
>>> each cpu.  How can we be sure that _nothing_ will change that while we
>>> are waiting?
>>
>> You only have to be sure that the queued work list remains consistent;
>> not that nothing changes.
>
> But nothing else is protected by the iothread?

Not at this point.  Notice how qemu_kvm_wait_io_event calls
qemu_cond_wait just before qemu_wait_io_event_common (which in turn is
what calls flush_queued_work).  So you can be quite sure that
qemu_wait_io_event_common runs at a point where there's nothing hidden
that relies on the iothread mutex.

Paolo
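
For context, the hunk quoted at the top sits inside the throttle work
item posted in the series.  A minimal sketch of that pattern, assuming
the shape of 2015-era cpus.c (the helper names match the discussion,
but CPU_THROTTLE_TIMESLICE_NS and the sleep-time computation here are
illustrative, not the exact patch):

    static void cpu_throttle_thread(void *opaque)
    {
        CPUState *cpu = opaque;
        double pct, throttle_ratio;
        long sleeptime_ns;

        if (!cpu_throttle_get_percentage()) {
            return;
        }

        /* Sleep pct/(1-pct) of a timeslice so the vCPU runs (1-pct)
         * of the time overall. */
        pct = (double)cpu_throttle_get_percentage() / 100;
        throttle_ratio = pct / (1 - pct);
        sleeptime_ns = (long)(throttle_ratio * CPU_THROTTLE_TIMESLICE_NS);

        /* Drop the big lock: holding it across g_usleep() would stall
         * the monitor and the other vCPU threads for the whole
         * throttling interval. */
        qemu_mutex_unlock_iothread();
        atomic_set(&cpu->throttle_thread_scheduled, 0);
        g_usleep(sleeptime_ns / 1000); /* Convert ns to us for usleep call */
        qemu_mutex_lock_iothread();
    }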
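
And the ordering in the vCPU thread that the reply relies on: roughly
how qemu_kvm_wait_io_event looked in cpus.c at the time (simplified;
details such as signal handling are omitted):

    static void qemu_kvm_wait_io_event(CPUState *cpu)
    {
        while (cpu_thread_is_idle(cpu)) {
            /* qemu_cond_wait() atomically releases the iothread mutex
             * while sleeping and reacquires it before returning, so
             * the mutex is already routinely dropped right here. */
            qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
        }
        qemu_wait_io_event_common(cpu); /* runs flush_queued_work(cpu) */
    }

Since flush_queued_work() already runs immediately after a point where
the mutex was released and retaken, a work item like the throttling
sleep above may itself drop and retake the mutex without breaking any
invariant, as long as it leaves the queued-work list consistent.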