Date: Tue, 9 Aug 2022 14:49:54 +0000
From: Sean Christopherson
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Yan Zhao, Mingwei Zhang, Ben Gardon
Subject: Re: [PATCH v3 8/8] KVM: x86/mmu: explicitly check nx_hugepage in disallowed_hugepage_adjust()
Message-ID:
References: <20220805230513.148869-1-seanjc@google.com> <20220805230513.148869-9-seanjc@google.com> <36634375-e7ee-e28e-20dd-9ab1ebdd8040@redhat.com>
In-Reply-To: <36634375-e7ee-e28e-20dd-9ab1ebdd8040@redhat.com>

On Tue, Aug 09, 2022, Paolo Bonzini wrote:
> On 8/6/22 01:05, Sean Christopherson wrote:
> >  	    !is_large_pte(spte)) {
> > +		u64 page_mask;
> > +
> > +		/*
> > +		 * Ensure nx_huge_page_disallowed is read after checking for a
> > +		 * present shadow page.  A different vCPU may be concurrently
> > +		 * installing the shadow page if mmu_lock is held for read.
> > +		 * Pairs with the smp_wmb() in kvm_tdp_mmu_map().
> > +		 */
> > +		smp_rmb();
> > +
> > +		if (!spte_to_child_sp(spte)->nx_huge_page_disallowed)
> > +			return;
> > +
> 
> I wonder if the barrier shouldn't be simply in to_shadow_page(), i.e. always
> assume in the TDP MMU code that sp->xyz is read after the SPTE that points
> to that struct kvm_mmu_page.

If we can get away with it, I'd prefer to rely on the READ_ONCE() in
kvm_tdp_mmu_read_spte() and the required ordering of:

  READ_ONCE() => PRESENT => spte_to_child_sp()
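
For illustration, here is a minimal standalone C11 sketch of the
publish/consume ordering being discussed: the installer fully initializes
the shadow page (including nx_huge_page_disallowed) before making the SPTE
"present", and the consumer checks for a present SPTE before dereferencing
the child pointer.  The types and names below (shadow_page, spte,
installer, adjuster) are simplified stand-ins for this sketch, not the
actual KVM definitions, and the C11 fences are only loose analogues of
smp_wmb()/smp_rmb().

    /* Userspace analogue of the smp_wmb()/smp_rmb() pairing above. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct shadow_page {
            bool nx_huge_page_disallowed;
    };

    static struct shadow_page child;
    static _Atomic(struct shadow_page *) spte;   /* "present" when non-NULL */

    /*
     * Writer side: initialize the shadow page, then publish the SPTE.
     * The release fence plays the role of the smp_wmb() in kvm_tdp_mmu_map().
     */
    static void installer(void)
    {
            child.nx_huge_page_disallowed = true;
            atomic_thread_fence(memory_order_release);
            atomic_store_explicit(&spte, &child, memory_order_relaxed);
    }

    /*
     * Reader side: read the SPTE, check that it is present, then read the
     * flag.  The acquire fence plays the role of the smp_rmb() in the patch.
     */
    static void adjuster(void)
    {
            struct shadow_page *sp =
                    atomic_load_explicit(&spte, memory_order_relaxed);

            if (!sp)                        /* not present yet */
                    return;

            atomic_thread_fence(memory_order_acquire);

            if (!sp->nx_huge_page_disallowed)
                    return;

            printf("huge page disallowed for this mapping\n");
    }

    int main(void)
    {
            /* Single-threaded demo; in the kernel these run on different vCPUs. */
            installer();
            adjuster();
            return 0;
    }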