From: Paolo Bonzini
Date: Tue, 03 Feb 2015 17:33:55 +0100
Subject: Re: [Qemu-devel] [RFC 02/10] use a different translation block list for each cpu.
Message-ID: <54D0F873.2070705@redhat.com>
In-Reply-To: <54D0F482.9020709@twiddle.net>
To: Richard Henderson, fred.konrad@greensocs.com, qemu-devel@nongnu.org, mttcg@greensocs.com
Cc: peter.maydell@linaro.org, mark.burton@greensocs.com, agraf@suse.de, jan.kiszka@siemens.com

On 03/02/2015 17:17, Richard Henderson wrote:
>> > @@ -759,7 +760,9 @@ static void page_flush_tb_1(int level, void **lp)
>> >          PageDesc *pd = *lp;
>> >
>> >          for (i = 0; i < V_L2_SIZE; ++i) {
>> > -            pd[i].first_tb = NULL;
>> > +            for (j = 0; j < MAX_CPUS; j++) {
>> > +                pd[i].first_tb[j] = NULL;
>> > +            }
>> >              invalidate_page_bitmap(pd + i);
>> >          }
>> >      } else {
>
> Surely you've got to do some locking somewhere in order to be able to
> modify another thread's cpu tb list.

But that's probably not even necessary.  page_flush_tb_1 is called from
tb_flush, which in turn is only called in very special circumstances.

It should be possible to have something like the kernel's "stop_machine"
that does the following:

1) schedule a callback on all TCG CPU threads

2) wait for all CPUs to have reached that callback

3) do tb_flush on all CPUs, knowing that none of them holds any lock

4) release all TCG CPU threads

With one TCG thread, just use qemu_bh_new (hidden behind a suitable API,
of course!).  Once you have multiple TCG CPU threads, loop on all CPUs
with the same run_on_cpu function that KVM uses.
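Something along these lines, as a standalone pthread sketch of the
rendezvous (untested; fake_tb_flush and NR_TCG_THREADS are made-up names,
and in QEMU the callback would of course be scheduled with run_on_cpu on
the existing vCPU threads rather than by spawning new ones):

/* Sketch of the "stop_machine"-style rendezvous described above,
 * using plain pthreads instead of QEMU's event loop. */
#include <pthread.h>
#include <stdio.h>

#define NR_TCG_THREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int stopped;     /* number of threads parked in the callback */
static int flush_done;

/* Stand-in for tb_flush: safe because every TCG thread is parked. */
static void fake_tb_flush(void)
{
    printf("flushing TBs while all %d threads are stopped\n",
           NR_TCG_THREADS);
}

/* Steps 1+2: the callback each TCG thread runs, then waits in. */
static void *tcg_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    if (++stopped == NR_TCG_THREADS) {
        /* Step 3: the last thread to check in does the flush; by
         * construction no TCG thread is executing translated code
         * or holding an execution lock at this point.  */
        fake_tb_flush();
        flush_done = 1;
        /* Step 4: release all TCG threads. */
        pthread_cond_broadcast(&cond);
    } else {
        while (!flush_done) {
            pthread_cond_wait(&cond, &lock);
        }
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t threads[NR_TCG_THREADS];
    int i;

    for (i = 0; i < NR_TCG_THREADS; i++) {
        pthread_create(&threads[i], NULL, tcg_thread, NULL);
    }
    for (i = 0; i < NR_TCG_THREADS; i++) {
        pthread_join(threads[i], NULL);
    }
    return 0;
}

The point is only the shape of the synchronization: nobody flushes until
everybody has checked in, so no per-CPU list can be touched concurrently
and no locking of another thread's tb list is needed.

Paolo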