Date: Tue, 12 Apr 2022 19:39:07 +0000
From: Sean Christopherson
To: Ben Gardon
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Paolo Bonzini,
	Peter Xu, David Matlack, Jim Mattson, David Dunn, Jing Zhang,
	Junaid Shahid
Subject: Re: [PATCH v2 8/9] KVM: x86/mmu: Make kvm_is_mmio_pfn usable outside of spte.c
References: <20220321224358.1305530-1-bgardon@google.com>
 <20220321224358.1305530-9-bgardon@google.com>
In-Reply-To: <20220321224358.1305530-9-bgardon@google.com>

On Mon, Mar 21, 2022, Ben Gardon wrote:
> Export kvm_is_mmio_pfn from spte.c. It will be used in a subsequent
> commit for in-place lpage promotion when disabling dirty logging.

Rather than force the promotion path to call kvm_is_mmio_pfn(), what about:

  a. Truly exporting the helper, i.e. EXPORT_SYMBOL_GPL
  b. Moving this patch earlier in the series, before "KVM: x86/mmu: Factor
     out part of vmx_get_mt_mask which does not depend on vcpu"
  c.
In the same patch, dropping the "is_mmio" param from
     kvm_x86_ops.get_mt_mask() and having vmx_get_mt_mask() call it
     directly

That way the call to kvm_is_mmio_pfn() is avoided when running on AMD hosts
(ignoring the shadow_me_mask thing, which I have a separate tweak for).  The
worst case scenario for a lookup is actually quite expensive, e.g. retpoline
and a spinlock.

> Signed-off-by: Ben Gardon
> ---
>  arch/x86/kvm/mmu/spte.c | 2 +-
>  arch/x86/kvm/mmu/spte.h | 1 +
>  2 files changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 45e9c0c3932e..8e9b827c4ed5 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -69,7 +69,7 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
>  	return spte;
>  }
>
> -static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
> +bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
>  {
>  	if (pfn_valid(pfn))
>  		return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn)) &&
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index cee02fe63429..e058a85e6c66 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -443,4 +443,5 @@ u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, kvm_pfn_t new_pfn);
>
>  void kvm_mmu_reset_all_pte_masks(void);
>
> +bool kvm_is_mmio_pfn(kvm_pfn_t pfn);
>  #endif
> --
> 2.35.1.894.gb6a874cedc-goog
>