From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alex Bennée
Date: Mon, 20 Jul 2015 17:20:25 +0100
Subject: Re: [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part.
Message-ID: <87io9eu7g6.fsf@linaro.org>
In-Reply-To: <1437144337-21442-1-git-send-email-fred.konrad@greensocs.com>
References: <1437144337-21442-1-git-send-email-fred.konrad@greensocs.com>
To: fred.konrad@greensocs.com
Cc: mttcg@listserver.greensocs.com, mark.burton@greensocs.com,
	qemu-devel@nongnu.org, a.rigo@virtualopensystems.com,
	guillaume.delbergue@greensocs.com, pbonzini@redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

fred.konrad@greensocs.com writes:

> From: KONRAD Frederic
>
> This is the async_safe_work introduction bit of the Multithread TCG work.
> Rebased on current upstream (6169b60285fe1ff730d840a49527e721bfb30899).
>
> (Currently untested as I need to rebase MTTCG first.)

Wouldn't it make sense for this to be rebased onto the current rc
independently of MTTCG, and then have those patches based on top of this
series? (See other mail.)

> It can be cloned here:
> http://git.greensocs.com/fkonrad/mttcg.git branch async_work_v3

I'm not seeing this at the moment; can you re-push, please?
> The first patch introduces a mutex to protect the existing queued_work_*
> CPUState members against multiple (concurrent) accesses.
>
> The second patch introduces tcg_exec_flag, which is 1 when we are inside
> cpu_exec(), -1 when we must not enter cpu execution, and 0 when we are
> allowed to do so. This is required because safe work needs to be sure that
> all vCPUs are outside cpu_exec().
>
> The last patch introduces async_safe_work. It allows adding work which
> will be done asynchronously, but only when all vCPUs are outside
> cpu_exec(). The TCG thread will wait until no vCPU has any pending safe
> work before re-entering cpu_exec().
>
> Changes V2 -> V3:
>   * Atomically check that we are not in the execution loop, to fix a race
>     condition which might happen.
> Changes V1 -> V2:
>   * Release the lock while running the callback, for both async and safe
>     work.
>
> KONRAD Frederic (3):
>   cpus: protect queued_work_* with work_mutex.
>   cpus: add tcg_exec_flag.
>   cpus: introduce async_run_safe_work_on_cpu.
>
>  cpu-exec.c        |  10 ++++
>  cpus.c            | 160 ++++++++++++++++++++++++++++++++++++++++--------------
>  include/qom/cpu.h |  57 +++++++++++++++++++
>  qom/cpu.c         |  20 +++++++
>  4 files changed, 207 insertions(+), 40 deletions(-)

--
Alex Bennée