From: David Stevens
To: Sean Christopherson
Cc: Yu Zhang, Isaku Yamahata, Marc Zyngier, Michael Ellerman, Peter Xu,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v8 4/8] KVM: x86/mmu: Migrate to __kvm_follow_pfn
Date: Thu, 24 Aug 2023 17:04:04 +0900
Message-ID: <20230824080408.2933205-5-stevensd@google.com>
In-Reply-To: <20230824080408.2933205-1-stevensd@google.com>
References: <20230824080408.2933205-1-stevensd@google.com>

From: David Stevens

Migrate from __gfn_to_pfn_memslot() to __kvm_follow_pfn(). Most arguments
map directly to the new API. The largest change is replacing the async
in/out parameter with the FOLL_NOWAIT flag and the KVM_PFN_ERR_NEEDS_IO
return value.

Signed-off-by: David Stevens
---
 arch/x86/kvm/mmu/mmu.c | 41 +++++++++++++++++++++++++++++++----------
 1 file changed, 31 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ec169f5c7dce..dabae67f198b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4296,7 +4296,12 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
-	bool async;
+	struct kvm_follow_pfn foll = {
+		.slot = slot,
+		.gfn = fault->gfn,
+		.flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
+		.try_map_writable = true,
+	};
 
 	/*
 	 * Retry the page fault if the gfn hit a memslot that is being deleted
@@ -4325,12 +4330,20 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		return RET_PF_EMULATE;
 	}
 
-	async = false;
-	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
-					  fault->write, &fault->map_writable,
-					  &fault->hva);
-	if (!async)
-		return RET_PF_CONTINUE; /* *pfn has correct page already */
+	foll.flags |= FOLL_NOWAIT;
+	fault->pfn = __kvm_follow_pfn(&foll);
+
+	if (!is_error_noslot_pfn(fault->pfn))
+		goto success;
+
+	/*
+	 * If __kvm_follow_pfn() failed because I/O is needed to fault in the
+	 * page, then either set up an asynchronous #PF to do the I/O, or if
+	 * doing an async #PF isn't possible, retry __kvm_follow_pfn() with
+	 * I/O allowed. All other failures are fatal, i.e. retrying won't help.
+	 */
+	if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
+		return RET_PF_CONTINUE;
 
 	if (!fault->prefetch && kvm_can_do_async_pf(vcpu)) {
 		trace_kvm_try_async_get_page(fault->addr, fault->gfn);
@@ -4348,9 +4361,17 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * to wait for IO. Note, gup always bails if it is unable to quickly
 	 * get a page and a fatal signal, i.e. SIGKILL, is pending.
 	 */
-	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, true, NULL,
-					  fault->write, &fault->map_writable,
-					  &fault->hva);
+	foll.flags |= FOLL_INTERRUPTIBLE;
+	foll.flags &= ~FOLL_NOWAIT;
+	fault->pfn = __kvm_follow_pfn(&foll);
+
+	if (!is_error_noslot_pfn(fault->pfn))
+		goto success;
+
+	return RET_PF_CONTINUE;
+success:
+	fault->hva = foll.hva;
+	fault->map_writable = foll.writable;
 
 	return RET_PF_CONTINUE;
 }
-- 
2.42.0.rc1.204.g551eb34607-goog
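
[Editorial note, not part of the patch] The hunks above reduce to a two-phase
lookup. The sketch below is an illustration only: the function name
faultin_pfn_sketch is made up, the kvm_follow_pfn struct and the
__kvm_follow_pfn()/KVM_PFN_ERR_NEEDS_IO semantics are defined by earlier
patches in this series, and the async #PF branch is elided.

/*
 * Sketch only: condensed control flow of __kvm_faultin_pfn() after this
 * patch. Field and flag names are taken from the diff above.
 */
static int faultin_pfn_sketch(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
	struct kvm_follow_pfn foll = {
		.slot = fault->slot,
		.gfn = fault->gfn,
		.flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
		.try_map_writable = true,
	};

	/* First attempt: do not wait for I/O to fault the page in. */
	foll.flags |= FOLL_NOWAIT;
	fault->pfn = __kvm_follow_pfn(&foll);
	if (!is_error_noslot_pfn(fault->pfn))
		goto success;

	/* Only KVM_PFN_ERR_NEEDS_IO is worth retrying; other errors are fatal. */
	if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
		return RET_PF_CONTINUE;

	/* (async #PF setup elided; see the second hunk for the real logic) */

	/* Retry allowing I/O, but let fatal signals interrupt gup. */
	foll.flags &= ~FOLL_NOWAIT;
	foll.flags |= FOLL_INTERRUPTIBLE;
	fault->pfn = __kvm_follow_pfn(&foll);
	if (is_error_noslot_pfn(fault->pfn))
		return RET_PF_CONTINUE;

success:
	fault->hva = foll.hva;
	fault->map_writable = foll.writable;
	return RET_PF_CONTINUE;
}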