From: David Stevens
To: Marc Zyngier, Paolo Bonzini
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Will Deacon,
	Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v4 2/4] KVM: x86/mmu: use gfn_to_pfn_page
Date: Wed, 29 Sep 2021 13:29:06 +0900
Message-Id: <20210929042908.1313874-3-stevensd@google.com>
In-Reply-To: <20210929042908.1313874-1-stevensd@google.com>
References: <20210929042908.1313874-1-stevensd@google.com>

From: David Stevens

Convert uses of the deprecated gfn_to_pfn functions to the new
gfn_to_pfn_page functions.

Signed-off-by: David Stevens
---
 arch/x86/kvm/mmu/mmu.c          | 43 ++++++++++++++++++++-------------
 arch/x86/kvm/mmu/mmu_internal.h |  3 ++-
 arch/x86/kvm/mmu/paging_tmpl.h  | 23 +++++++++++-------
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++---
 arch/x86/kvm/mmu/tdp_mmu.h      |  4 +--
 arch/x86/kvm/x86.c              |  6 +++--
 6 files changed, 51 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2d7e61122af8..5a1adcc9cfbc 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2783,8 +2783,9 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	return ret;
 }
 
-static kvm_pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
-					 bool no_dirty_log)
+static kvm_pfn_t pte_prefetch_gfn_to_pfn_page(struct kvm_vcpu *vcpu,
+					      gfn_t gfn, bool no_dirty_log,
+					      struct page **page)
 {
 	struct kvm_memory_slot *slot;
 
@@ -2792,7 +2793,7 @@ static kvm_pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 	if (!slot)
 		return KVM_PFN_ERR_FAULT;
 
-	return gfn_to_pfn_memslot_atomic(slot, gfn);
+	return gfn_to_pfn_page_memslot_atomic(slot, gfn, page);
 }
 
 static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
@@ -2923,7 +2924,8 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 
 int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 			    int max_level, kvm_pfn_t *pfnp,
-			    bool huge_page_disallowed, int *req_level)
+			    struct page *page, bool huge_page_disallowed,
+			    int *req_level)
 {
 	struct kvm_memory_slot *slot;
 	kvm_pfn_t pfn = *pfnp;
@@ -2935,6 +2937,9 @@ int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	if (unlikely(max_level == PG_LEVEL_4K))
 		return PG_LEVEL_4K;
 
+	if (!page)
+		return PG_LEVEL_4K;
+
 	if (is_error_noslot_pfn(pfn) || kvm_is_reserved_pfn(pfn))
 		return PG_LEVEL_4K;
 
@@ -2984,7 +2989,8 @@ void disallowed_hugepage_adjust(u64 spte, gfn_t gfn, int cur_level,
 }
 
 static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
-			int map_writable, int max_level, kvm_pfn_t pfn,
+			int map_writable, int max_level,
+			kvm_pfn_t pfn, struct page *page,
 			bool prefault, bool is_tdp)
 {
 	bool nx_huge_page_workaround_enabled = is_nx_huge_page_enabled();
@@ -2997,7 +3003,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	gfn_t base_gfn = gfn;
 
-	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn,
+	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn, page,
 					huge_page_disallowed, &req_level);
 
 	trace_kvm_mmu_spte_requested(gpa, level, pfn);
@@ -3901,8 +3907,9 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 }
 
 static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
-			    gpa_t cr2_or_gpa, kvm_pfn_t *pfn, hva_t *hva,
-			    bool write, bool *writable, int *r)
+			    gpa_t cr2_or_gpa, kvm_pfn_t *pfn,
+			    hva_t *hva, bool write, bool *writable,
+			    struct page **page, int *r)
 {
 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	bool async;
@@ -3936,8 +3943,8 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 	}
 
 	async = false;
-	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async,
-				    write, writable, hva);
+	*pfn = __gfn_to_pfn_page_memslot(slot, gfn, false, &async,
+					 write, writable, hva, page);
 	if (!async)
 		return false; /* *pfn has correct page already */
 
@@ -3951,8 +3958,8 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 			goto out_retry;
 	}
 
-	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL,
-				    write, writable, hva);
+	*pfn = __gfn_to_pfn_page_memslot(slot, gfn, false, NULL,
+					 write, writable, hva, page);
 
 out_retry:
 	*r = RET_PF_RETRY;
@@ -3969,6 +3976,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	unsigned long mmu_seq;
 	kvm_pfn_t pfn;
+	struct page *page;
 	hva_t hva;
 	int r;
 
@@ -3987,7 +3995,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	smp_rmb();
 
 	if (kvm_faultin_pfn(vcpu, prefault, gfn, gpa, &pfn, &hva,
-			 write, &map_writable, &r))
+			 write, &map_writable, &page, &r))
 		return r;
 
 	if (handle_abnormal_pfn(vcpu, is_tdp ? 0 : gpa, gfn, pfn, ACC_ALL, &r))
@@ -4008,17 +4016,18 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 
 	if (is_tdp_mmu_fault)
 		r = kvm_tdp_mmu_map(vcpu, gpa, error_code, map_writable, max_level,
-				    pfn, prefault);
+				    pfn, page, prefault);
 	else
-		r = __direct_map(vcpu, gpa, error_code, map_writable, max_level, pfn,
-				 prefault, is_tdp);
+		r = __direct_map(vcpu, gpa, error_code, map_writable, max_level,
+				 pfn, page, prefault, is_tdp);
 
 out_unlock:
 	if (is_tdp_mmu_fault)
 		read_unlock(&vcpu->kvm->mmu_lock);
 	else
 		write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(pfn);
+	if (page)
+		put_page(page);
 	return r;
 }
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index bf2bdbf333c2..08762a2d8d31 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -165,7 +165,8 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      kvm_pfn_t pfn, int max_level);
 int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 			    int max_level, kvm_pfn_t *pfnp,
-			    bool huge_page_disallowed, int *req_level);
+			    struct page *page, bool huge_page_disallowed,
+			    int *req_level);
 void disallowed_hugepage_adjust(u64 spte, gfn_t gfn, int cur_level,
 				kvm_pfn_t *pfnp, int *goal_levelp);
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 7d03e9b7ccfa..4a61b96fcdef 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -564,6 +564,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	unsigned pte_access;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
+	struct page *page;
 
 	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
 		return false;
@@ -573,8 +574,8 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	gfn = gpte_to_gfn(gpte);
 	pte_access = sp->role.access & FNAME(gpte_access)(gpte);
 	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
-	pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
-			no_dirty_log && (pte_access & ACC_WRITE_MASK));
+	pfn = pte_prefetch_gfn_to_pfn_page(vcpu, gfn,
+			no_dirty_log && (pte_access & ACC_WRITE_MASK), &page);
 	if (is_error_pfn(pfn))
 		return false;
 
@@ -585,7 +586,8 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	mmu_set_spte(vcpu, spte, pte_access, false, PG_LEVEL_4K, gfn, pfn,
 		     true, true);
 
-	kvm_release_pfn_clean(pfn);
+	if (page)
+		put_page(page);
 	return true;
 }
 
@@ -665,8 +667,8 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
  */
 static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			struct guest_walker *gw, u32 error_code,
-			int max_level, kvm_pfn_t pfn, bool map_writable,
-			bool prefault)
+			int max_level, kvm_pfn_t pfn, struct page *page,
+			bool map_writable, bool prefault)
 {
 	bool nx_huge_page_workaround_enabled = is_nx_huge_page_enabled();
 	bool write_fault = error_code & PFERR_WRITE_MASK;
@@ -723,7 +725,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 	}
 
 	level = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn,
-					huge_page_disallowed, &req_level);
+					page, huge_page_disallowed,
+					&req_level);
 
 	trace_kvm_mmu_spte_requested(addr, gw->level, pfn);
 
@@ -830,6 +833,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	struct guest_walker walker;
 	int r;
 	kvm_pfn_t pfn;
+	struct page *page;
 	hva_t hva;
 	unsigned long mmu_seq;
 	bool map_writable, is_self_change_mapping;
@@ -882,7 +886,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	smp_rmb();
 
 	if (kvm_faultin_pfn(vcpu, prefault, walker.gfn, addr, &pfn, &hva,
-			 write_fault, &map_writable, &r))
+			 write_fault, &map_writable, &page, &r))
 		return r;
 
 	if (handle_abnormal_pfn(vcpu, addr, walker.gfn, pfn, walker.pte_access, &r))
@@ -916,13 +920,14 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	r = make_mmu_pages_available(vcpu);
 	if (r)
 		goto out_unlock;
-	r = FNAME(fetch)(vcpu, addr, &walker, error_code, max_level, pfn,
+	r = FNAME(fetch)(vcpu, addr, &walker, error_code, max_level, pfn, page,
 			 map_writable, prefault);
 	kvm_mmu_audit(vcpu, AUDIT_POST_PAGE_FAULT);
 
 out_unlock:
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(pfn);
+	if (page)
+		put_page(page);
 	return r;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 64ccfc1fa553..dec735081c5e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -987,8 +987,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
  * page tables and SPTEs to translate the faulting guest physical address.
  */
 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
-		    int map_writable, int max_level, kvm_pfn_t pfn,
-		    bool prefault)
+		    int map_writable, int max_level,
+		    kvm_pfn_t pfn, struct page *page, bool prefault)
 {
 	bool nx_huge_page_workaround_enabled = is_nx_huge_page_enabled();
 	bool write = error_code & PFERR_WRITE_MASK;
@@ -1004,7 +1004,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	int level;
 	int req_level;
 
-	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn,
+	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn, page,
 					huge_page_disallowed, &req_level);
 
 	trace_kvm_mmu_spte_requested(gpa, level, pfn);
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 358f447d4012..75526618601a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -49,8 +49,8 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
 void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm);
 
 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
-		    int map_writable, int max_level, kvm_pfn_t pfn,
-		    bool prefault);
+		    int map_writable, int max_level,
+		    kvm_pfn_t pfn, struct page *page, bool prefault);
 
 bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
 				 bool flush);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 28ef14155726..6e9b9af9c687 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7527,6 +7527,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 {
 	gpa_t gpa = cr2_or_gpa;
 	kvm_pfn_t pfn;
+	struct page *page;
 
 	if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF))
 		return false;
@@ -7556,7 +7557,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	 * retry instruction -> write #PF -> emulation fail -> retry
 	 * instruction -> ...
 	 */
-	pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa));
+	pfn = gfn_to_pfn_page(vcpu->kvm, gpa_to_gfn(gpa), &page);
 
 	/*
 	 * If the instruction failed on the error pfn, it can not be fixed,
@@ -7565,7 +7566,8 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	if (is_error_noslot_pfn(pfn))
 		return false;
 
-	kvm_release_pfn_clean(pfn);
+	if (page)
+		put_page(page);
 
 	/* The instructions are well-emulated on direct mmu. */
 	if (vcpu->arch.mmu->direct_map) {
-- 
2.33.0.685.g46640cef36-goog
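
For reviewers skimming the hunks, the caller-side contract being converted
to is the same everywhere: the gfn_to_pfn_page variants return the pfn as
before, but additionally report the backing struct page through an out
parameter, and the caller drops the reference with put_page() only when a
page was actually returned. A minimal sketch of the before/after pattern,
assuming only the signatures visible in the diff above — the
resolve_gfn_example functions are hypothetical stand-ins, not part of this
series:

/*
 * Sketch of the conversion pattern this patch applies throughout the
 * x86 MMU. resolve_gfn_example_*() are hypothetical helpers; error
 * handling is reduced to the essentials.
 */

/* Before: pfn-only lookup, paired with kvm_release_pfn_clean(). */
static void resolve_gfn_example_old(struct kvm *kvm, gfn_t gfn)
{
	kvm_pfn_t pfn = gfn_to_pfn(kvm, gfn);

	if (is_error_noslot_pfn(pfn))
		return;
	/* ... use pfn, e.g. install an SPTE ... */
	kvm_release_pfn_clean(pfn);
}

/* After: the lookup also reports the backing struct page, if any. */
static void resolve_gfn_example_new(struct kvm *kvm, gfn_t gfn)
{
	struct page *page;
	kvm_pfn_t pfn = gfn_to_pfn_page(kvm, gfn, &page);

	if (is_error_noslot_pfn(pfn))
		return;
	/* ... use pfn, e.g. install an SPTE ... */

	/*
	 * A NULL page means the pfn is not backed by a refcounted
	 * struct page — the same condition under which the new
	 * kvm_mmu_hugepage_adjust() above falls back to PG_LEVEL_4K —
	 * so there is no reference to release.
	 */
	if (page)
		put_page(page);
}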