From: Paolo Bonzini
Date: Fri, 26 Jun 2015 18:23:36 +0200
Message-ID: <558D7C88.9010509@redhat.com>
In-Reply-To: <558D7937.4090602@greensocs.com>
References: <1435330053-18733-1-git-send-email-fred.konrad@greensocs.com> <1435330053-18733-14-git-send-email-fred.konrad@greensocs.com> <558D7155.7040101@redhat.com> <558D7937.4090602@greensocs.com>
Subject: Re: [Qemu-devel] [RFC PATCH V6 13/18] cpu: introduce async_run_safe_work_on_cpu.
To: Frederic Konrad, qemu-devel@nongnu.org, mttcg@greensocs.com
Cc: peter.maydell@linaro.org, a.spyridakis@virtualopensystems.com, mark.burton@greensocs.com, agraf@suse.de, alistair.francis@xilinx.com, guillaume.delbergue@greensocs.com, alex.bennee@linaro.org

On 26/06/2015 18:09, Frederic Konrad wrote:
>>>
>>> +void async_run_safe_work_on_cpu(CPUState *cpu, void (*func)(void *data),
>>> +                                void *data)
>>> +{
>> Do you need a mutex to protect this data structure? I would use one
>> even if not strictly necessary, to avoid introducing new BQL-protected
>> structures.
>
> For the moment it's called by tb_invalidate and tb_flush_safe; the second
> lacks a tb_lock/unlock, which should be added. I don't need another mutex
> unless this is used elsewhere?

In any case, the locking policy should be documented.

At which point you realize that protecting a CPU's
queued_safe_work_{first,next} fields with the tb_lock is a bit weird. :)

I would add a mutex inside CPUState, and then later we could also use it
for regular run_on_cpu/async_run_on_cpu.

Paolo
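
[Editor's sketch of the per-CPUState mutex idea discussed above, not code
from the series: the work_mutex field, the queued_safe_work_last pointer
and the exact qemu_work_item layout are assumptions for illustration only.]

    /* Assumed additions to CPUState for this sketch:
     *   QemuMutex work_mutex;                       // protects the lists below
     *   struct qemu_work_item *queued_safe_work_first;
     *   struct qemu_work_item *queued_safe_work_last;
     */
    struct qemu_work_item {
        struct qemu_work_item *next;
        void (*func)(void *data);
        void *data;
        bool safe;
    };

    void async_run_safe_work_on_cpu(CPUState *cpu, void (*func)(void *data),
                                    void *data)
    {
        struct qemu_work_item *wi = g_malloc0(sizeof(*wi));

        wi->func = func;
        wi->data = data;
        wi->safe = true;

        /* The per-CPU mutex only guards this CPU's work list, so callers
         * such as tb_flush_safe() need not hold tb_lock (or the BQL) here.
         */
        qemu_mutex_lock(&cpu->work_mutex);
        if (cpu->queued_safe_work_first == NULL) {
            cpu->queued_safe_work_first = wi;
        } else {
            cpu->queued_safe_work_last->next = wi;
        }
        cpu->queued_safe_work_last = wi;
        wi->next = NULL;
        qemu_mutex_unlock(&cpu->work_mutex);

        qemu_cpu_kick(cpu);
    }

The same mutex could later be taken in run_on_cpu/async_run_on_cpu and in
the CPU loop when draining the queue, which is the reuse Paolo mentions.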