Message-ID: <54D0F482.9020709@twiddle.net>
Date: Tue, 03 Feb 2015 08:17:06 -0800
From: Richard Henderson
Subject: Re: [Qemu-devel] [RFC 02/10] use a different translation block list for each cpu.
To: fred.konrad@greensocs.com, qemu-devel@nongnu.org, mttcg@greensocs.com
Cc: peter.maydell@linaro.org, pbonzini@redhat.com, mark.burton@greensocs.com, agraf@suse.de, jan.kiszka@siemens.com
In-Reply-To: <1421428797-23697-3-git-send-email-fred.konrad@greensocs.com>
References: <1421428797-23697-1-git-send-email-fred.konrad@greensocs.com> <1421428797-23697-3-git-send-email-fred.konrad@greensocs.com>

On 01/16/2015 09:19 AM, fred.konrad@greensocs.com wrote:
> @@ -759,7 +760,9 @@ static void page_flush_tb_1(int level, void **lp)
>          PageDesc *pd = *lp;
>
>          for (i = 0; i < V_L2_SIZE; ++i) {
> -            pd[i].first_tb = NULL;
> +            for (j = 0; j < MAX_CPUS; j++) {
> +                pd[i].first_tb[j] = NULL;
> +            }
>              invalidate_page_bitmap(pd + i);
>          }
>      } else {

Surely you've got to do some locking somewhere in order to be able to
modify another thread's cpu tb list.
I realize that we do have to solve this problem for x86, but for most other
targets we ought, in principle, to be able to avoid it. That simply requires
that we not treat icache flushes as nops. When the kernel has modified a
page, it will also have notified the other cpus, like so:

	if (smp_call_function(ipi_flush_icache_page, mm, 1)) {

We ought to be able to leverage this to avoid some locking at the qemu level.

r~
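[Editor's note: the idea sketched in this reply — routing a guest icache flush to each vcpu so that each one invalidates only its *own* per-page TB list, instead of taking a lock on another thread's list — could look roughly like the following. This is a minimal illustrative sketch, not QEMU code: `PageDesc`, `first_tb`, `MAX_CPUS`, and the two helper functions are simplified stand-ins modeled on the patch under review.]

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CPUS 4

/* Simplified stand-in for QEMU's PageDesc, per the patch under review:
 * one TB list head per cpu instead of a single shared one. */
typedef struct TranslationBlock TranslationBlock;
typedef struct {
    TranslationBlock *first_tb[MAX_CPUS];
} PageDesc;

/* Hypothetical helper: a cpu clears only its OWN entry, so no lock is
 * needed to protect another thread's list. */
static void tb_flush_page_for_cpu(PageDesc *pd, int cpu_index)
{
    pd->first_tb[cpu_index] = NULL;   /* touches this cpu's list only */
}

/* Hypothetical dispatch: a guest icache flush (the analogue of the
 * kernel's ipi_flush_icache_page IPI above) is delivered to every cpu,
 * and each acts purely on its local state. */
static void icache_flush_page(PageDesc *pd)
{
    for (int cpu = 0; cpu < MAX_CPUS; cpu++) {
        tb_flush_page_for_cpu(pd, cpu);
    }
}
```

In a real multi-threaded TCG each `tb_flush_page_for_cpu` call would run on the target vcpu thread (e.g. via a queued work item) rather than in a plain loop; the point is only that per-cpu ownership of the list turns cross-thread mutation into local mutation.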