* [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
@ 2025-06-11  9:51 zhouquan
  2025-06-11 11:29 ` Andrew Jones
  2025-07-17 12:03 ` Anup Patel
  0 siblings, 2 replies; 9+ messages in thread
From: zhouquan @ 2025-06-11  9:51 UTC (permalink / raw)
  To: anup, ajones, atishp, paul.walmsley, palmer
  Cc: linux-kernel, linux-riscv, kvm, kvm-riscv, Quan Zhou

From: Quan Zhou <zhouquan@iscas.ac.cn>

The caller has already passed in the memslot, yet `kvm_riscv_gstage_map`
looks it up again in two places, via `kvm_faultin_pfn` and
`mark_page_dirty`. Replace them with `__kvm_faultin_pfn` and
`mark_page_dirty_in_slot` to avoid the redundant memslot lookups.

Signed-off-by: Quan Zhou <zhouquan@iscas.ac.cn>
---
 arch/riscv/kvm/mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 1087ea74567b..f9059dac3ba3 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -648,7 +648,8 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 		return -EFAULT;
 	}
 
-	hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
+	hfn = __kvm_faultin_pfn(memslot, gfn, is_write ? FOLL_WRITE : 0,
+				&writable, &page);
 	if (hfn == KVM_PFN_ERR_HWPOISON) {
 		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
 				vma_pageshift, current);
@@ -670,7 +671,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 		goto out_unlock;
 
 	if (writable) {
-		mark_page_dirty(kvm, gfn);
+		mark_page_dirty_in_slot(kvm, memslot, gfn);
 		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
 				      vma_pagesize, false, true);
 	} else {
-- 
2.34.1



* Re: [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
  2025-06-11  9:51 [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map() zhouquan
@ 2025-06-11 11:29 ` Andrew Jones
  2025-06-11 16:17   ` Sean Christopherson
  2025-07-17 12:03 ` Anup Patel
  1 sibling, 1 reply; 9+ messages in thread
From: Andrew Jones @ 2025-06-11 11:29 UTC (permalink / raw)
  To: zhouquan
  Cc: anup, atishp, paul.walmsley, palmer, linux-kernel, linux-riscv,
	kvm, kvm-riscv

On Wed, Jun 11, 2025 at 05:51:40PM +0800, zhouquan@iscas.ac.cn wrote:
> From: Quan Zhou <zhouquan@iscas.ac.cn>
> 
> The caller has already passed in the memslot, and there are
> two instances `{kvm_faultin_pfn/mark_page_dirty}` of retrieving
> the memslot again in `kvm_riscv_gstage_map`, we can replace them
> with `{__kvm_faultin_pfn/mark_page_dirty_in_slot}`.
> 
> Signed-off-by: Quan Zhou <zhouquan@iscas.ac.cn>
> ---
>  arch/riscv/kvm/mmu.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 1087ea74567b..f9059dac3ba3 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -648,7 +648,8 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>  		return -EFAULT;
>  	}
>  
> -	hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
> +	hfn = __kvm_faultin_pfn(memslot, gfn, is_write ? FOLL_WRITE : 0,
> +				&writable, &page);

I think introducing another function with the following diff would be
better than duplicating the is_write to foll translation.

Thanks,
drew

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 291d49b9bf05..6c80ad5c7e89 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1288,12 +1288,20 @@ kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn,
                            unsigned int foll, bool *writable,
                            struct page **refcounted_page);

+static inline kvm_pfn_t kvm_faultin_pfn_in_slot(const struct kvm_memory_slot *slot,
+                                               gfn_t gfn,
+                                               bool write, bool *writable,
+                                               struct page **refcounted_page)
+{
+       return __kvm_faultin_pfn(slot, gfn, write ? FOLL_WRITE : 0, writable, refcounted_page);
+}
+
 static inline kvm_pfn_t kvm_faultin_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
                                        bool write, bool *writable,
                                        struct page **refcounted_page)
 {
-       return __kvm_faultin_pfn(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn,
-                                write ? FOLL_WRITE : 0, writable, refcounted_page);
+       return kvm_faultin_pfn_in_slot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn,
+                                      write, writable, refcounted_page);
 }

>  	if (hfn == KVM_PFN_ERR_HWPOISON) {
>  		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
>  				vma_pageshift, current);
> @@ -670,7 +671,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>  		goto out_unlock;
>  
>  	if (writable) {
> -		mark_page_dirty(kvm, gfn);
> +		mark_page_dirty_in_slot(kvm, memslot, gfn);
>  		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
>  				      vma_pagesize, false, true);
>  	} else {
> -- 
> 2.34.1
> 


* Re: [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
  2025-06-11 11:29 ` Andrew Jones
@ 2025-06-11 16:17   ` Sean Christopherson
  2025-06-12  9:42     ` Andrew Jones
  0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2025-06-11 16:17 UTC (permalink / raw)
  To: Andrew Jones
  Cc: zhouquan, anup, atishp, paul.walmsley, palmer, linux-kernel,
	linux-riscv, kvm, kvm-riscv

On Wed, Jun 11, 2025, Andrew Jones wrote:
> On Wed, Jun 11, 2025 at 05:51:40PM +0800, zhouquan@iscas.ac.cn wrote:
> > From: Quan Zhou <zhouquan@iscas.ac.cn>
> > 
> > The caller has already passed in the memslot, and there are
> > two instances `{kvm_faultin_pfn/mark_page_dirty}` of retrieving
> > the memslot again in `kvm_riscv_gstage_map`, we can replace them
> > with `{__kvm_faultin_pfn/mark_page_dirty_in_slot}`.
> > 
> > Signed-off-by: Quan Zhou <zhouquan@iscas.ac.cn>
> > ---
> >  arch/riscv/kvm/mmu.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> > index 1087ea74567b..f9059dac3ba3 100644
> > --- a/arch/riscv/kvm/mmu.c
> > +++ b/arch/riscv/kvm/mmu.c
> > @@ -648,7 +648,8 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
> >  		return -EFAULT;
> >  	}
> >  
> > -	hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
> > +	hfn = __kvm_faultin_pfn(memslot, gfn, is_write ? FOLL_WRITE : 0,
> > +				&writable, &page);
> 
> I think introducing another function with the following diff would be
> better than duplicating the is_write to foll translation.

NAK, I don't want an explosion of wrapper APIs (especially with boolean params).

I 100% agree that it's mildly annoying to force arch code to convert "write"
to FOLL_WRITE, but that's a symptom of KVM not providing a common structure for
passing page fault information.

What I want to get to is a set of APIs that look something like the below (very off
the cuff), not add more wrappers and put KVM back in a situation where there are
a bajillion ways to do the same basic thing.

struct kvm_page_fault {
	const gpa_t addr;
	const bool exec;
	const bool write;
	const bool present;

	gfn_t gfn;

	/* The memslot containing gfn. May be NULL. */
	struct kvm_memory_slot *slot;

	/* Outputs */
	unsigned long mmu_seq;
	kvm_pfn_t pfn;
	struct page *refcounted_page;
	bool map_writable;
};

kvm_pfn_t __kvm_faultin_pfn(struct kvm_page_fault *fault, unsigned int flags)
{
	struct kvm_follow_pfn kfp = {
		.slot = fault->slot,
		.gfn = fault->gfn,
		.flags = flags | (fault->write ? FOLL_WRITE : 0),
		.map_writable = &fault->map_writable,
		.refcounted_page = &fault->refcounted_page,
	};

	fault->map_writable = false;
	fault->refcounted_page = NULL;

	return kvm_follow_pfn(&kfp);
}
EXPORT_SYMBOL_GPL(__kvm_faultin_pfn);

kvm_pfn_t kvm_faultin_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, bool write,
			  bool *writable, struct page **refcounted_page)
{
	struct kvm_follow_pfn kfp = {
		.slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn),
		.gfn = gfn,
		.flags = write ? FOLL_WRITE : 0,
		.map_writable = writable,
		.refcounted_page = refcounted_page,
	};

	if (WARN_ON_ONCE(!writable || !refcounted_page))
		return KVM_PFN_ERR_FAULT;

	*writable = false;
	*refcounted_page = NULL;

	return kvm_follow_pfn(&kfp);
}
EXPORT_SYMBOL_GPL(kvm_faultin_pfn);


To get things started, I proposed moving "struct kvm_page_fault" to common code
so that it can be shared by x86 and arm64 as part of the KVM userfault series [*].
But I'd be more than happy to accelerate the standardization of "struct kvm_page_fault"
if we want to get there sooner than later.

[*] https://lore.kernel.org/all/aBqlkz1bqhu-9toV@google.com

In the meantime, RISC-V can start preparing for that future, and clean up its
code in the process.

E.g. "fault_addr" should be "gpa_t", not "unsigned long".  If 32-bit RISC-V
is strictly limited to 32-bit _physical_ addresses in the *architecture*, then
gpa_t should probably be tweaked accordingly.

And vma_pageshift should be "unsigned int", not "short".

Looks like y'all also have a bug where an -EEXIST will be returned to userspace,
and will generate what's probably a spurious kvm_err() message.

E.g. in the short term:

---
 arch/riscv/include/asm/kvm_host.h |  5 ++--
 arch/riscv/kvm/mmu.c              | 49 +++++++++++++++++++++----------
 arch/riscv/kvm/vcpu_exit.c        | 40 +------------------------
 3 files changed, 36 insertions(+), 58 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 85cfebc32e4c..84c5db715ba5 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -361,9 +361,8 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 			     bool writable, bool in_atomic);
 void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
 			      unsigned long size);
-int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
-			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write);
+int kvm_riscv_gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				struct kvm_cpu_trap *trap);
 int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
 void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
 void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 1087ea74567b..3b0afc1c0832 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -586,22 +586,37 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	return pte_young(ptep_get(ptep));
 }
 
-int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
-			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write)
+int kvm_riscv_gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				struct kvm_cpu_trap *trap)
 {
-	int ret;
-	kvm_pfn_t hfn;
-	bool writable;
-	short vma_pageshift;
+
+	struct kvm_mmu_memory_cache *pcache = &vcpu->arch.mmu_page_cache;
+	gpa_t gpa = (trap->htval << 2) | (trap->stval & 0x3);
 	gfn_t gfn = gpa >> PAGE_SHIFT;
-	struct vm_area_struct *vma;
+	struct kvm_memory_slot *memslot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+	bool logging = memslot->dirty_bitmap && !(memslot->flags & KVM_MEM_READONLY);
+	bool write = trap->scause == EXC_STORE_GUEST_PAGE_FAULT;
+	bool read =  trap->scause == EXC_LOAD_GUEST_PAGE_FAULT;
+	unsigned int flags = write ? FOLL_WRITE : 0;
+	unsigned long hva, vma_pagesize, mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *pcache = &vcpu->arch.mmu_page_cache;
-	bool logging = (memslot->dirty_bitmap &&
-			!(memslot->flags & KVM_MEM_READONLY)) ? true : false;
-	unsigned long vma_pagesize, mmu_seq;
+	unsigned int vma_pageshift;
+	struct vm_area_struct *vma;
 	struct page *page;
+	kvm_pfn_t hfn;
+	bool writable;
+	int ret;
+
+	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
+	if (kvm_is_error_hva(hva) || (write && !writable)) {
+		if (read)
+			return kvm_riscv_vcpu_mmio_load(vcpu, run, gpa,
+							trap->htinst);
+		if (write)
+			return kvm_riscv_vcpu_mmio_store(vcpu, run, gpa,
+							 trap->htinst);
+		return -EOPNOTSUPP;
+	}
 
 	/* We need minimum second+third level pages */
 	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
@@ -648,7 +663,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 		return -EFAULT;
 	}
 
-	hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
+	hfn = __kvm_faultin_pfn(memslot, gfn, flags, &writable, &page);
 	if (hfn == KVM_PFN_ERR_HWPOISON) {
 		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
 				vma_pageshift, current);
@@ -661,7 +676,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	 * If logging is active then we allow writable pages only
 	 * for write faults.
 	 */
-	if (logging && !is_write)
+	if (logging && !write)
 		writable = false;
 
 	spin_lock(&kvm->mmu_lock);
@@ -677,14 +692,16 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
 				      vma_pagesize, true, true);
 	}
+	if (ret == -EEXIST)
+		ret = 0;
 
 	if (ret)
 		kvm_err("Failed to map in G-stage\n");
 
 out_unlock:
-	kvm_release_faultin_page(kvm, page, ret && ret != -EEXIST, writable);
+	kvm_release_faultin_page(kvm, page, ret, writable);
 	spin_unlock(&kvm->mmu_lock);
-	return ret;
+	return ret ? ret : 1;
 }
 
 int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm)
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 6e0c18412795..6f07077068f6 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -10,44 +10,6 @@
 #include <asm/csr.h>
 #include <asm/insn-def.h>
 
-static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
-			     struct kvm_cpu_trap *trap)
-{
-	struct kvm_memory_slot *memslot;
-	unsigned long hva, fault_addr;
-	bool writable;
-	gfn_t gfn;
-	int ret;
-
-	fault_addr = (trap->htval << 2) | (trap->stval & 0x3);
-	gfn = fault_addr >> PAGE_SHIFT;
-	memslot = gfn_to_memslot(vcpu->kvm, gfn);
-	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
-
-	if (kvm_is_error_hva(hva) ||
-	    (trap->scause == EXC_STORE_GUEST_PAGE_FAULT && !writable)) {
-		switch (trap->scause) {
-		case EXC_LOAD_GUEST_PAGE_FAULT:
-			return kvm_riscv_vcpu_mmio_load(vcpu, run,
-							fault_addr,
-							trap->htinst);
-		case EXC_STORE_GUEST_PAGE_FAULT:
-			return kvm_riscv_vcpu_mmio_store(vcpu, run,
-							 fault_addr,
-							 trap->htinst);
-		default:
-			return -EOPNOTSUPP;
-		};
-	}
-
-	ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva,
-		(trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false);
-	if (ret < 0)
-		return ret;
-
-	return 1;
-}
-
 /**
  * kvm_riscv_vcpu_unpriv_read -- Read machine word from Guest memory
  *
@@ -229,7 +191,7 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	case EXC_LOAD_GUEST_PAGE_FAULT:
 	case EXC_STORE_GUEST_PAGE_FAULT:
 		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
-			ret = gstage_page_fault(vcpu, run, trap);
+			ret = kvm_riscv_gstage_page_fault(vcpu, run, trap);
 		break;
 	case EXC_SUPERVISOR_SYSCALL:
 		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)

base-commit: 19272b37aa4f83ca52bdf9c16d5d81bdd1354494
-- 


* Re: [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
  2025-06-11 16:17   ` Sean Christopherson
@ 2025-06-12  9:42     ` Andrew Jones
  2025-06-13 22:29       ` Sean Christopherson
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Jones @ 2025-06-12  9:42 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: zhouquan, anup, atishp, paul.walmsley, palmer, linux-kernel,
	linux-riscv, kvm, kvm-riscv

On Wed, Jun 11, 2025 at 09:17:36AM -0700, Sean Christopherson wrote:
> On Wed, Jun 11, 2025, Andrew Jones wrote:
> > On Wed, Jun 11, 2025 at 05:51:40PM +0800, zhouquan@iscas.ac.cn wrote:
> > > From: Quan Zhou <zhouquan@iscas.ac.cn>
> > > 
> > > The caller has already passed in the memslot, and there are
> > > two instances `{kvm_faultin_pfn/mark_page_dirty}` of retrieving
> > > the memslot again in `kvm_riscv_gstage_map`, we can replace them
> > > with `{__kvm_faultin_pfn/mark_page_dirty_in_slot}`.
> > > 
> > > Signed-off-by: Quan Zhou <zhouquan@iscas.ac.cn>
> > > ---
> > >  arch/riscv/kvm/mmu.c | 5 +++--
> > >  1 file changed, 3 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> > > index 1087ea74567b..f9059dac3ba3 100644
> > > --- a/arch/riscv/kvm/mmu.c
> > > +++ b/arch/riscv/kvm/mmu.c
> > > @@ -648,7 +648,8 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
> > >  		return -EFAULT;
> > >  	}
> > >  
> > > -	hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
> > > +	hfn = __kvm_faultin_pfn(memslot, gfn, is_write ? FOLL_WRITE : 0,
> > > +				&writable, &page);
> > 
> > I think introducing another function with the following diff would be
> > better than duplicating the is_write to foll translation.
> 
> NAK, I don't want an explosion of wrapper APIs (especially with boolean params).
> 
> I 100% agree that it's mildly annoying to force arch code to do convert "write"
> to FOLL_WRITE, but that's a symptom of KVM not providing a common structure for
> passing page fault information.
> 
> What I want to get to is a set of APIs that look something the below (very off
> the cuff), not add more wrappers and put KVM back in a situation where there are
> a bajillion ways to do the same basic thing.
> 
> struct kvm_page_fault {
> 	const gpa_t addr;
> 	const bool exec;
> 	const bool write;
> 	const bool present;
> 
> 	gfn_t gfn;
> 
> 	/* The memslot containing gfn. May be NULL. */
> 	struct kvm_memory_slot *slot;
> 
> 	/* Outputs */
> 	unsigned long mmu_seq;
> 	kvm_pfn_t pfn;
> 	struct page *refcounted_page;
> 	bool map_writable;
> };
> 
> kvm_pfn_t __kvm_faultin_pfn(struct kvm_page_fault *fault, unsigned int flags)
> {
> 	struct kvm_follow_pfn kfp = {
> 		.slot = fault->slot,
> 		.gfn = fault->gfn,
> 		.flags = flags | fault->write ? FOLL_WRITE : 0,
> 		.map_writable = &fault->writable,
> 		.refcounted_page = &fault->refcounted_page,
> 	};
> 
> 	fault->writable = false;
> 	fault->refcounted_page = NULL;
> 
> 	return kvm_follow_pfn(&kfp);
> }
> EXPORT_SYMBOL_GPL(__kvm_faultin_pfn);
> 
> kvm_pfn_t kvm_faultin_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, bool write,
> 			  bool *writable, struct page **refcounted_page)
> {
> 	struct kvm_follow_pfn kfp = {
> 		.slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn),,
> 		.gfn = gfn,
> 		.flags = write ? FOLL_WRITE : 0,
> 		.map_writable = writable,
> 		.refcounted_page = refcounted_page,
> 	};
> 
> 	if (WARN_ON_ONCE(!writable || !refcounted_page))
> 		return KVM_PFN_ERR_FAULT;
> 
> 	*writable = false;
> 	*refcounted_page = NULL;
> 
> 	return kvm_follow_pfn(&kfp);
> }
> EXPORT_SYMBOL_GPL(__kvm_faultin_pfn);
> 
> 
> To get things started, I proposed moving "struct kvm_page_fault" to common code
> so that it can be shared by x86 and arm64 as part of the KVM userfault series.
> But I'd be more than happy to acclerate the standardization of "struct kvm_page_fault"
> if we want to get there sooner than later.
> 
> [*] https://lore.kernel.org/all/aBqlkz1bqhu-9toV@google.com
> 
> In the meantime, RISC-V can start preparing for that future, and clean up its
> code in the process.
> 
> E.g. "fault_addr" should be "gpa_t", not "unsigned long".  If 32-bit RISC-V
> is strictly limited to 32-bit _physical_ addresses in the *architecture*, then
> gpa_t should probably be tweaked accordingly.

32-bit riscv supports 34-bit physical addresses, so fault_addr should
indeed be gpa_t.

> 
> And vma_pageshift should be "unsigned int", not "short".

Yes, particularly because huge_page_shift() returns an unsigned int, which may
be assigned to vma_pageshift.

> 
> Looks like y'all also have a bug where an -EEXIST will be returned to userspace,
> and will generate what's probably a spurious kvm_err() message.

On 32-bit riscv, due to losing the upper bits of the physical address? Or
is there yet another thing to fix?

> 
> E.g. in the short term:

The diff looks good to me; should I test and post it for you?

Thanks,
drew

> 
> ---
>  arch/riscv/include/asm/kvm_host.h |  5 ++--
>  arch/riscv/kvm/mmu.c              | 49 +++++++++++++++++++++----------
>  arch/riscv/kvm/vcpu_exit.c        | 40 +------------------------
>  3 files changed, 36 insertions(+), 58 deletions(-)
> 
> diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> index 85cfebc32e4c..84c5db715ba5 100644
> --- a/arch/riscv/include/asm/kvm_host.h
> +++ b/arch/riscv/include/asm/kvm_host.h
> @@ -361,9 +361,8 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
>  			     bool writable, bool in_atomic);
>  void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
>  			      unsigned long size);
> -int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
> -			 struct kvm_memory_slot *memslot,
> -			 gpa_t gpa, unsigned long hva, bool is_write);
> +int kvm_riscv_gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
> +				struct kvm_cpu_trap *trap);
>  int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
>  void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
>  void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 1087ea74567b..3b0afc1c0832 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -586,22 +586,37 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  	return pte_young(ptep_get(ptep));
>  }
>  
> -int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
> -			 struct kvm_memory_slot *memslot,
> -			 gpa_t gpa, unsigned long hva, bool is_write)
> +int kvm_riscv_gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
> +				struct kvm_cpu_trap *trap)
>  {
> -	int ret;
> -	kvm_pfn_t hfn;
> -	bool writable;
> -	short vma_pageshift;
> +
> +	struct kvm_mmu_memory_cache *pcache = &vcpu->arch.mmu_page_cache;
> +	gpa_t gpa = (trap->htval << 2) | (trap->stval & 0x3);
>  	gfn_t gfn = gpa >> PAGE_SHIFT;
> -	struct vm_area_struct *vma;
> +	struct kvm_memory_slot *memslot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
> +	bool logging = memslot->dirty_bitmap && !(memslot->flags & KVM_MEM_READONLY);
> +	bool write = trap->scause == EXC_STORE_GUEST_PAGE_FAULT;
> +	bool read =  trap->scause == EXC_LOAD_GUEST_PAGE_FAULT;
> +	unsigned int flags = write ? FOLL_WRITE : 0;
> +	unsigned long hva, vma_pagesize, mmu_seq;
>  	struct kvm *kvm = vcpu->kvm;
> -	struct kvm_mmu_memory_cache *pcache = &vcpu->arch.mmu_page_cache;
> -	bool logging = (memslot->dirty_bitmap &&
> -			!(memslot->flags & KVM_MEM_READONLY)) ? true : false;
> -	unsigned long vma_pagesize, mmu_seq;
> +	unsigned int vma_pageshift;
> +	struct vm_area_struct *vma;
>  	struct page *page;
> +	kvm_pfn_t hfn;
> +	bool writable;
> +	int ret;
> +
> +	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
> +	if (kvm_is_error_hva(hva) || (write && !writable)) {
> +		if (read)
> +			return kvm_riscv_vcpu_mmio_load(vcpu, run, gpa,
> +							trap->htinst);
> +		if (write)
> +			return kvm_riscv_vcpu_mmio_store(vcpu, run, gpa,
> +							 trap->htinst);
> +		return -EOPNOTSUPP;
> +	}
>  
>  	/* We need minimum second+third level pages */
>  	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
> @@ -648,7 +663,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>  		return -EFAULT;
>  	}
>  
> -	hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
> +	hfn = __kvm_faultin_pfn(memslot, gfn, flags, &writable, &page);
>  	if (hfn == KVM_PFN_ERR_HWPOISON) {
>  		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
>  				vma_pageshift, current);
> @@ -661,7 +676,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>  	 * If logging is active then we allow writable pages only
>  	 * for write faults.
>  	 */
> -	if (logging && !is_write)
> +	if (logging && !write)
>  		writable = false;
>  
>  	spin_lock(&kvm->mmu_lock);
> @@ -677,14 +692,16 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>  		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
>  				      vma_pagesize, true, true);
>  	}
> +	if (ret == -EEXIST)
> +		ret = 0;
>  
>  	if (ret)
>  		kvm_err("Failed to map in G-stage\n");
>  
>  out_unlock:
> -	kvm_release_faultin_page(kvm, page, ret && ret != -EEXIST, writable);
> +	kvm_release_faultin_page(kvm, page, ret, writable);
>  	spin_unlock(&kvm->mmu_lock);
> -	return ret;
> +	return ret ? ret : 1;
>  }
>  
>  int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm)
> diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
> index 6e0c18412795..6f07077068f6 100644
> --- a/arch/riscv/kvm/vcpu_exit.c
> +++ b/arch/riscv/kvm/vcpu_exit.c
> @@ -10,44 +10,6 @@
>  #include <asm/csr.h>
>  #include <asm/insn-def.h>
>  
> -static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
> -			     struct kvm_cpu_trap *trap)
> -{
> -	struct kvm_memory_slot *memslot;
> -	unsigned long hva, fault_addr;
> -	bool writable;
> -	gfn_t gfn;
> -	int ret;
> -
> -	fault_addr = (trap->htval << 2) | (trap->stval & 0x3);
> -	gfn = fault_addr >> PAGE_SHIFT;
> -	memslot = gfn_to_memslot(vcpu->kvm, gfn);
> -	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
> -
> -	if (kvm_is_error_hva(hva) ||
> -	    (trap->scause == EXC_STORE_GUEST_PAGE_FAULT && !writable)) {
> -		switch (trap->scause) {
> -		case EXC_LOAD_GUEST_PAGE_FAULT:
> -			return kvm_riscv_vcpu_mmio_load(vcpu, run,
> -							fault_addr,
> -							trap->htinst);
> -		case EXC_STORE_GUEST_PAGE_FAULT:
> -			return kvm_riscv_vcpu_mmio_store(vcpu, run,
> -							 fault_addr,
> -							 trap->htinst);
> -		default:
> -			return -EOPNOTSUPP;
> -		};
> -	}
> -
> -	ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva,
> -		(trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false);
> -	if (ret < 0)
> -		return ret;
> -
> -	return 1;
> -}
> -
>  /**
>   * kvm_riscv_vcpu_unpriv_read -- Read machine word from Guest memory
>   *
> @@ -229,7 +191,7 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  	case EXC_LOAD_GUEST_PAGE_FAULT:
>  	case EXC_STORE_GUEST_PAGE_FAULT:
>  		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
> -			ret = gstage_page_fault(vcpu, run, trap);
> +			ret = kvm_riscv_gstage_page_fault(vcpu, run, trap);
>  		break;
>  	case EXC_SUPERVISOR_SYSCALL:
>  		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
> 
> base-commit: 19272b37aa4f83ca52bdf9c16d5d81bdd1354494
> -- 


* Re: [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
  2025-06-12  9:42     ` Andrew Jones
@ 2025-06-13 22:29       ` Sean Christopherson
  2025-06-15 16:27         ` Anup Patel
  0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2025-06-13 22:29 UTC (permalink / raw)
  To: Andrew Jones
  Cc: zhouquan, anup, atishp, paul.walmsley, palmer, linux-kernel,
	linux-riscv, kvm, kvm-riscv

On Thu, Jun 12, 2025, Andrew Jones wrote:
> On Wed, Jun 11, 2025 at 09:17:36AM -0700, Sean Christopherson wrote:
> > Looks like y'all also have a bug where an -EEXIST will be returned to userspace,
> > and will generate what's probably a spurious kvm_err() message.
> 
> On 32-bit riscv, due to losing the upper bits of the physical address? Or
> is there yet another thing to fix?

Another bug, I think.  gstage_set_pte() returns -EEXIST if a PTE exists, and I
_assume_ that's supposed to be benign?  But this code returns it blindly:

	if (writable) {
		mark_page_dirty(kvm, gfn);
		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
				      vma_pagesize, false, true);
	} else {
		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
				      vma_pagesize, true, true);
	}

	if (ret)
		kvm_err("Failed to map in G-stage\n");

out_unlock:
	kvm_release_faultin_page(kvm, page, ret && ret != -EEXIST, writable);
	spin_unlock(&kvm->mmu_lock);
	return ret;

and gstage_page_fault() forwards negative return codes:

	ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva,
		(trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false);
	if (ret < 0)
		return ret;

and so eventually -EEXIST will propagate to userspace.

I haven't looked too closely at the RISC-V MMU, but I would be surprised if
encountering what ends up being a spurious fault is completely impossible.

> The diff looks good to me, should I test and post it for you?

If you test it, I'll happily write changelogs and post patches.


* Re: [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
  2025-06-13 22:29       ` Sean Christopherson
@ 2025-06-15 16:27         ` Anup Patel
  2025-06-17 14:36           ` Sean Christopherson
  0 siblings, 1 reply; 9+ messages in thread
From: Anup Patel @ 2025-06-15 16:27 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Andrew Jones, zhouquan, anup, atishp, paul.walmsley, palmer,
	linux-kernel, linux-riscv, kvm, kvm-riscv

On Sat, Jun 14, 2025 at 3:59 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Jun 12, 2025, Andrew Jones wrote:
> > On Wed, Jun 11, 2025 at 09:17:36AM -0700, Sean Christopherson wrote:
> > > Looks like y'all also have a bug where an -EEXIST will be returned to userspace,
> > > and will generate what's probably a spurious kvm_err() message.
> >
> > On 32-bit riscv, due to losing the upper bits of the physical address? Or
> > is there yet another thing to fix?
>
> Another bug, I think.  gstage_set_pte() returns -EEXIST if a PTE exists, and I
> _assume_ that's supposed to be benign?  But this code returns it blindly:

gstage_set_pte() returns -EEXIST only when it expects a non-leaf PTE at a
particular level but finds a leaf PTE there; otherwise it returns 0, including
when the leaf PTE is already at the expected level. This allows
gstage_set_pte() to work when an existing PTE is being modified. I think the
change below is not needed unless I am totally missing something.
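
For illustration, a toy sketch of that walk semantic (the struct and helper
below are made up for the example; this is not the real arch/riscv/kvm/mmu.c
code):

struct toy_pte {
	bool valid;
	bool leaf;
	struct toy_pte *next_table;	/* assume tables are pre-allocated */
};

/* Descend from the root level toward the requested level. */
static int toy_gstage_set_pte(struct toy_pte *ptep, int level, int wanted_level)
{
	while (level > wanted_level) {
		/* A leaf *above* the requested level is the only -EEXIST case. */
		if (ptep->valid && ptep->leaf)
			return -EEXIST;
		ptep = ptep->next_table;
		level--;
	}

	/* A leaf already present at the requested level is simply rewritten. */
	ptep->valid = true;
	ptep->leaf = true;
	return 0;
}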

>
>         if (writable) {
>                 mark_page_dirty(kvm, gfn);
>                 ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
>                                       vma_pagesize, false, true);
>         } else {
>                 ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
>                                       vma_pagesize, true, true);
>         }
>
>         if (ret)
>                 kvm_err("Failed to map in G-stage\n");
>
> out_unlock:
>         kvm_release_faultin_page(kvm, page, ret && ret != -EEXIST, writable);
>         spin_unlock(&kvm->mmu_lock);
>         return ret;
>
> and gstage_page_fault() forwards negative return codes:
>
>         ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva,
>                 (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false);
>         if (ret < 0)
>                 return ret;
>
> and so eventually -EEXIST will propagate to userspace.
>
> I haven't looked too closely at the RISC-V MMU, but I would be surprised if
> encountering what ends up being a spurious fault is completely impossible.
>
> > The diff looks good to me, should I test and post it for you?
>
> If you test it, I'll happily write changelogs and post patches.
>

Regards,
Anup


* Re: [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
  2025-06-15 16:27         ` Anup Patel
@ 2025-06-17 14:36           ` Sean Christopherson
  2025-06-19  7:04             ` Anup Patel
  0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2025-06-17 14:36 UTC (permalink / raw)
  To: Anup Patel
  Cc: Andrew Jones, zhouquan, anup, atishp, paul.walmsley, palmer,
	linux-kernel, linux-riscv, kvm, kvm-riscv

On Sun, Jun 15, 2025, Anup Patel wrote:
> On Sat, Jun 14, 2025 at 3:59 AM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Thu, Jun 12, 2025, Andrew Jones wrote:
> > > On Wed, Jun 11, 2025 at 09:17:36AM -0700, Sean Christopherson wrote:
> > > > Looks like y'all also have a bug where an -EEXIST will be returned to userspace,
> > > > and will generate what's probably a spurious kvm_err() message.
> > >
> > > On 32-bit riscv, due to losing the upper bits of the physical address? Or
> > > is there yet another thing to fix?
> >
> > Another bug, I think.  gstage_set_pte() returns -EEXIST if a PTE exists, and I
> > _assume_ that's supposed to be benign?  But this code returns it blindly:
> 
> gstage_set_pte() returns -EEXIST only when it was expecting a non-leaf
> PTE at a particular level but got a leaf PTE 

Right, but isn't returning -EEXIST all the way to userspace undesirable behavior?

E.g. in this sequence, KVM will return -EEXIST and incorrectly terminate the VM
(assuming the VMM doesn't miraculously recover somehow):

 1. Back the VM with HugeTLBFS
 2. Fault-in memory, i.e. create hugepage mappings
 3. Enable KVM_MEM_LOG_DIRTY_PAGES
 4. Write-protection fault, kvm_riscv_gstage_map() tries to create a writable
    non-huge mapping.
 5. gstage_set_pte() encounters the huge leaf PTE before reaching the target
    level, and returns -EEXIST.

AFAICT, gstage_wp_memory_region() doesn't split/shatter/demote hugepages; it
simply clears _PAGE_WRITE.

It's entirely possible I'm missing something that makes the above scenario
impossible in practice, but at this point I'm genuinely curious :-)


* Re: [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
  2025-06-17 14:36           ` Sean Christopherson
@ 2025-06-19  7:04             ` Anup Patel
  0 siblings, 0 replies; 9+ messages in thread
From: Anup Patel @ 2025-06-19  7:04 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Anup Patel, Andrew Jones, zhouquan, atishp, paul.walmsley, palmer,
	linux-kernel, linux-riscv, kvm, kvm-riscv

On Tue, Jun 17, 2025 at 8:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Sun, Jun 15, 2025, Anup Patel wrote:
> > On Sat, Jun 14, 2025 at 3:59 AM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > On Thu, Jun 12, 2025, Andrew Jones wrote:
> > > > On Wed, Jun 11, 2025 at 09:17:36AM -0700, Sean Christopherson wrote:
> > > > > Looks like y'all also have a bug where an -EEXIST will be returned to userspace,
> > > > > and will generate what's probably a spurious kvm_err() message.
> > > >
> > > > On 32-bit riscv, due to losing the upper bits of the physical address? Or
> > > > is there yet another thing to fix?
> > >
> > > Another bug, I think.  gstage_set_pte() returns -EEXIST if a PTE exists, and I
> > > _assume_ that's supposed to be benign?  But this code returns it blindly:
> >
> > gstage_set_pte() returns -EEXIST only when it was expecting a non-leaf
> > PTE at a particular level but got a leaf PTE
>
> Right, but isn't returning -EEXIST all the way to userspace undesirable behavior?
>
> E.g. in this sequence, KVM will return -EEXIST and incorrectly terminate the VM
> (assuming the VMM doesn't miraculously recover somehow):
>
>  1. Back the VM with HugeTLBFS
>  2. Fault-in memory, i.e. create hugepage mappings
>  3. Enable KVM_MEM_LOG_DIRTY_PAGES
>  4. Write-protection fault, kvm_riscv_gstage_map() tries to create a writable
>     non-huge mapping.
>  5. gstage_set_pte() encounters the huge leaf PTE before reaching the target
>     level, and returns -EEXIST.

gstage_set_pte() does not fail in any of the above cases because the
desired page table level of the PTE is passed to gstage_set_pte() as a
parameter. The -EEXIST failure only happens when gstage_set_pte() sees an
existing leaf PTE at a level above the desired page table level, which can
only occur if there is some BUG in KVM g-stage programming.

>
> AFAICT, gstage_wp_memory_region() doesn't split/shatter/demote hugepages, it
> simply clears _PAGE_WRITE.
>
> It's entirely possible I'm missing something that makes the above scenario
> impossible in practice, but at this point I'm genuinely curious :-)

The -EEXIST failure in gstage_set_pte() is very unlikely to happen, but I see
your point about unnecessarily exiting to user space, since user space has
nothing to do with this failure.

I think it's better to WARN() and return 0 instead of returning -EEXIST.
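
Roughly, such a change might look like the sketch below (untested; the
surrounding walk loop and the gstage_pte_leaf() helper are sketched here as
assumptions, not copied from the source):

	while (current_level != level) {
		if (gstage_pte_leaf(ptep)) {
			/*
			 * A leaf above the requested level would be a KVM
			 * bug, not something userspace can act on, so warn
			 * and report success instead of -EEXIST.
			 */
			WARN(1, "gstage: unexpected leaf PTE above requested level\n");
			return 0;
		}
		...
	}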

Regards,
Anup


* Re: [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
  2025-06-11  9:51 [PATCH] RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map() zhouquan
  2025-06-11 11:29 ` Andrew Jones
@ 2025-07-17 12:03 ` Anup Patel
  1 sibling, 0 replies; 9+ messages in thread
From: Anup Patel @ 2025-07-17 12:03 UTC (permalink / raw)
  To: zhouquan
  Cc: ajones, atishp, paul.walmsley, palmer, linux-kernel, linux-riscv,
	kvm, kvm-riscv

On Wed, Jun 11, 2025 at 3:30 PM <zhouquan@iscas.ac.cn> wrote:
>
> From: Quan Zhou <zhouquan@iscas.ac.cn>
>
> The caller has already passed in the memslot, and there are
> two instances `{kvm_faultin_pfn/mark_page_dirty}` of retrieving
> the memslot again in `kvm_riscv_gstage_map`, we can replace them
> with `{__kvm_faultin_pfn/mark_page_dirty_in_slot}`.
>
> Signed-off-by: Quan Zhou <zhouquan@iscas.ac.cn>

LGTM.

Reviewed-by: Anup Patel <anup@brainfault.org>

Queued this patch for Linux-6.17

Thanks,
Anup

> ---
>  arch/riscv/kvm/mmu.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 1087ea74567b..f9059dac3ba3 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -648,7 +648,8 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>                 return -EFAULT;
>         }
>
> -       hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
> +       hfn = __kvm_faultin_pfn(memslot, gfn, is_write ? FOLL_WRITE : 0,
> +                               &writable, &page);
>         if (hfn == KVM_PFN_ERR_HWPOISON) {
>                 send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
>                                 vma_pageshift, current);
> @@ -670,7 +671,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>                 goto out_unlock;
>
>         if (writable) {
> -               mark_page_dirty(kvm, gfn);
> +               mark_page_dirty_in_slot(kvm, memslot, gfn);
>                 ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
>                                       vma_pagesize, false, true);
>         } else {
> --
> 2.34.1
>
