Date: Wed, 31 Jul 2024 10:11:14 +0200
From: Andrew Jones
To: Sean Christopherson
Cc: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    David Matlack, David Stevens
Subject: Re: [PATCH v12 58/84] KVM: RISC-V: Use kvm_faultin_pfn() when mapping pfns into the guest
Message-ID: <20240731-a5f8928d385945f049e5f96e@orel>
References: <20240726235234.228822-1-seanjc@google.com>
 <20240726235234.228822-59-seanjc@google.com>
In-Reply-To: <20240726235234.228822-59-seanjc@google.com>

On Fri, Jul 26, 2024 at 04:52:07PM GMT, Sean Christopherson wrote:
> Convert RISC-V to __kvm_faultin_pfn()+kvm_release_faultin_page(), which
> are new APIs to consolidate arch code and provide consistent behavior
> across all KVM architectures.
>
> Signed-off-by: Sean Christopherson
> ---
>  arch/riscv/kvm/mmu.c | 11 ++++-------
>  1 file changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 806f68e70642..f73d6a79a78c 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -601,6 +601,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>  	bool logging = (memslot->dirty_bitmap &&
>  			!(memslot->flags & KVM_MEM_READONLY)) ? true : false;
>  	unsigned long vma_pagesize, mmu_seq;
> +	struct page *page;
>
>  	/* We need minimum second+third level pages */
>  	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
> @@ -631,7 +632,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>
>  	/*
>  	 * Read mmu_invalidate_seq so that KVM can detect if the results of
> -	 * vma_lookup() or gfn_to_pfn_prot() become stale priort to acquiring
> +	 * vma_lookup() or __kvm_faultin_pfn() become stale priort to acquiring
                                                            ^ while here could fix this typo

>  	 * kvm->mmu_lock.
>  	 *
>  	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
> @@ -647,7 +648,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>  		return -EFAULT;
>  	}
>
> -	hfn = gfn_to_pfn_prot(kvm, gfn, is_write, &writable);
> +	hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
>  	if (hfn == KVM_PFN_ERR_HWPOISON) {
>  		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
>  				vma_pageshift, current);
> @@ -681,11 +682,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>  		kvm_err("Failed to map in G-stage\n");
>
>  out_unlock:
> -	if ((!ret || ret == -EEXIST) && writable)
> -		kvm_set_pfn_dirty(hfn);
> -	else
> -		kvm_release_pfn_clean(hfn);
> -
> +	kvm_release_faultin_page(kvm, page, ret && ret != -EEXIST, writable);
>  	spin_unlock(&kvm->mmu_lock);
>  	return ret;
>  }
> --
> 2.46.0.rc1.232.g9752f9e123-goog
>

Reviewed-by: Andrew Jones
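
As a side note on the out_unlock conversion, for anyone else reviewing: the
"unused" argument now passed to kvm_release_faultin_page(), "ret && ret !=
-EEXIST", looks like the exact negation of the old "mark the pfn dirty"
condition, "(!ret || ret == -EEXIST)", with writability still carried by the
last argument, so the release behavior should be unchanged as far as I can
tell. A quick standalone check of that boolean equivalence (plain userspace
C, not kernel code; the ret values are just illustrative samples, nothing
KVM-specific):

#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
	/* Representative return values only; the equivalence is purely boolean. */
	const int samples[] = { 0, -EEXIST, -EFAULT, -ENOMEM, -EAGAIN };

	for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		int ret = samples[i];

		/* Old code: dirty the pfn on success or -EEXIST (when writable). */
		bool old_dirty_path = (!ret || ret == -EEXIST);

		/* New code: the "unused" argument passed to kvm_release_faultin_page(). */
		bool new_unused_arg = (ret && ret != -EEXIST);

		assert(old_dirty_path == !new_unused_arg);
		printf("ret = %6d: old dirty-path = %d, new unused arg = %d\n",
		       ret, old_dirty_path, new_unused_arg);
	}

	printf("The two conditions are exact negations for all sampled values.\n");
	return 0;
}

So successful and already-mapped (-EEXIST) writable faults should still end
up with the page marked dirty before release, matching the old
kvm_set_pfn_dirty()/kvm_release_pfn_clean() split.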