From: Paolo Bonzini
To: Alex Bennée, fred.konrad@greensocs.com
Cc: mttcg@greensocs.com, guillaume.delbergue@greensocs.com,
    mark.burton@greensocs.com, qemu-devel@nongnu.org,
    a.rigo@virtualopensystems.com
Subject: Re: [Qemu-devel] [RFC PATCH V2 3/3] cpus: introduce async_run_safe_work_on_cpu.
Date: Tue, 14 Jul 2015 00:56:20 +0200
Message-ID: <55A44214.9040004@redhat.com>
In-Reply-To: <87r3ocjagd.fsf@linaro.org>
References: <1436544486-31169-1-git-send-email-fred.konrad@greensocs.com>
 <1436544486-31169-4-git-send-email-fred.konrad@greensocs.com>
 <87r3ocjagd.fsf@linaro.org>

On 13/07/2015 18:20, Alex Bennée wrote:
>> +static void qemu_cpu_kick_thread(CPUState *cpu)
>> +{
>> +#ifndef _WIN32
>> +    int err;
>> +
>> +    err = pthread_kill(cpu->thread->thread, SIG_IPI);
>> +    if (err) {
>> +        fprintf(stderr, "qemu:%s: %s", __func__, strerror(err));
>> +        exit(1);
>> +    }
>> +#else /* _WIN32 */
>> +    if (!qemu_cpu_is_self(cpu)) {
>> +        CONTEXT tcgContext;
>> +
>> +        if (SuspendThread(cpu->hThread) == (DWORD)-1) {
>> +            fprintf(stderr, "qemu:%s: GetLastError:%lu\n", __func__,
>> +                    GetLastError());
>> +            exit(1);
>> +        }
>> +
>> +        /* On multi-core systems, we are not sure that the thread is actually
>> +         * suspended until we can get the context.
>> +         */
>> +        tcgContext.ContextFlags = CONTEXT_CONTROL;
>> +        while (GetThreadContext(cpu->hThread, &tcgContext) != 0) {
>> +            continue;
>> +        }
>> +
>> +        cpu_signal(0);
>> +
>> +        if (ResumeThread(cpu->hThread) == (DWORD)-1) {
>> +            fprintf(stderr, "qemu:%s: GetLastError:%lu\n", __func__,
>> +                    GetLastError());
>> +            exit(1);
>> +        }
>> +    }
>> +#endif
>
> I'm going to go out on a limb and guess these sort of implementation
> specifics should be in the posix/win32 utility files, that is unless
> Glib abstracts enough of it for us.

As you found later, this part of the patch is just moving code around.
However, getting rid of this is ultimately the reason why I'm
interested in MTTCG. :)

>> +}
>> +
>>  void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
>>  {
>>      struct qemu_work_item wi;
>> @@ -894,6 +933,76 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
>>      qemu_cpu_kick(cpu);
>>  }
>>
>> +void async_run_safe_work_on_cpu(CPUState *cpu, void (*func)(void *data),
>> +                                void *data)
>> +{
>> +    struct qemu_work_item *wi;
>> +
>> +    wi = g_malloc0(sizeof(struct qemu_work_item));
>> +    wi->func = func;
>> +    wi->data = data;
>
> Is there anything that prevents the user calling the function with the
> same payload for multiple CPUs?

Why would we want to prevent it? The data could be read-only, or it
could be known to outlive the CPUs.
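For instance, something like this would be fine, because the payload is
immutable and outlives every work item (hypothetical caller;
flush_params and do_flush are made-up names, CPU_FOREACH is the usual
vCPU iteration macro, async_run_safe_work_on_cpu is the function this
patch adds):

    struct flush_params { int level; };

    /* Shared, read-only payload: the callback neither writes nor
     * frees it, so queuing the same pointer on every CPU is safe. */
    static const struct flush_params params = { .level = 1 };

    static void do_flush(void *data)
    {
        const struct flush_params *p = data;
        /* ... use p->level, without modifying *p ... */
    }

    static void flush_all_cpus(void)
    {
        CPUState *cpu;

        CPU_FOREACH(cpu) {
            /* The same pointer is queued on every CPU. */
            async_run_safe_work_on_cpu(cpu, do_flush, (void *)&params);
        }
    }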
>> +    wi->free = true;
>> +
>> +    qemu_mutex_lock(&cpu->work_mutex);
>> +    if (cpu->queued_safe_work_first == NULL) {
>> +        cpu->queued_safe_work_first = wi;
>> +    } else {
>> +        cpu->queued_safe_work_last->next = wi;
>> +    }
>> +    cpu->queued_safe_work_last = wi;
>
> I'm surprised we haven't added some helpers to the qemu_work_queue API
> for all this identical boilerplate but whatever...

Yes, it should be using QSIMPLEQ; a rough sketch is appended below.
That can be done later.

It is indeed reasonable, but it should be used with great care.

Paolo
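P.S. For concreteness, the QSIMPLEQ conversion could look roughly like
this (untested sketch; the "node" and "queued_safe_work" names are made
up, the macros are the ones from include/qemu/queue.h):

    /* In struct qemu_work_item, replacing the hand-rolled next pointer: */
    QSIMPLEQ_ENTRY(qemu_work_item) node;

    /* In struct CPUState, replacing the first/last pointer pair: */
    QSIMPLEQ_HEAD(, qemu_work_item) queued_safe_work;

    /* Initialized once, e.g. when the CPU object is created: */
    QSIMPLEQ_INIT(&cpu->queued_safe_work);

    /* Enqueue side, under cpu->work_mutex: */
    qemu_mutex_lock(&cpu->work_mutex);
    QSIMPLEQ_INSERT_TAIL(&cpu->queued_safe_work, wi, node);
    qemu_mutex_unlock(&cpu->work_mutex);

    /* Drain side (simplified: real code would drop the lock around
     * the call to wi->func): */
    while (!QSIMPLEQ_EMPTY(&cpu->queued_safe_work)) {
        wi = QSIMPLEQ_FIRST(&cpu->queued_safe_work);
        QSIMPLEQ_REMOVE_HEAD(&cpu->queued_safe_work, node);
        wi->func(wi->data);
        if (wi->free) {
            g_free(wi);
        }
    }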