Date: Mon, 26 Jun 2023 13:42:32 -0700
From: Sean Christopherson
To: Jim Mattson
Cc: Mingwei Zhang, Paolo Bonzini, "H. Peter Anvin", kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon
Subject: Re: [PATCH] KVM: x86/mmu: Remove KVM MMU write lock when accessing indirect_shadow_pages
References: <20230605004334.1930091-1-mizhang@google.com>
List-ID: kvm@vger.kernel.org

On Mon, Jun 26, 2023, Jim Mattson wrote:
> On Thu, Jun 15, 2023 at 4:58 PM Mingwei Zhang wrote:
> >
> > On Tue, Jun 6, 2023 at 5:28 PM Sean Christopherson wrote:
> > >
> > > On Tue, Jun 06, 2023, Mingwei Zhang wrote:
> > > > > > Hmm. I agree with both points above, but below, the change seems too
> > > > > > heavyweight. smp_mb() is an mfence(), i.e., it serializes all
> > > > > > loads/stores before the instruction. Doing that for every shadow page
> > > > > > creation and destruction seems like a lot.
> > > > >
> > > > > No, the smp_*b() variants are just compiler barriers on x86.
> > > >
> > > > hmm, it is a "lock addl" now for smp_mb(). Check this: 450cbdd0125c
> > > > ("locking/x86: Use LOCK ADD for smp_mb() instead of MFENCE")
> > > >
> > > > So this means smp_mb() is not a free lunch and we need to be a little
> > > > bit careful.
> > >
> > > Oh, those sneaky macros. x86 #defines __smp_mb(), not the outer helper. I'll
> > > take a closer look before posting to see if there's a way to avoid the runtime
> > > barrier.
> >
> > Checked again, I think using smp_wmb() and smp_rmb() should be fine as
> > those are just compiler barriers. We don't need a full barrier here.
>
> That seems adequate.

Strictly speaking, no, because neither FNAME(fetch) nor kvm_mmu_pte_write() is a
pure reader or writer. FNAME(fetch) reads guest memory (guest PTEs) and writes
indirect_shadow_pages. kvm_mmu_pte_write() writes guest memory (guest PTEs) and
reads indirect_shadow_pages (it later writes indirect_shadow_pages too, but that
write isn't relevant to the ordering we care about here).