From: Atish Patra
Date: Sat, 14 Jun 2025 12:12:48 -0700
Subject: Re: [PATCH v2 12/12] RISC-V: KVM: Pass VMID as parameter to kvm_riscv_hfence_xyz() APIs
To: Anup Patel
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel,
 kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Message-ID: <63b76a34-7475-4a3c-b86d-c355ff928091@linux.dev>
In-Reply-To: <20250613065743.737102-13-apatel@ventanamicro.com>
References: <20250613065743.737102-1-apatel@ventanamicro.com>
 <20250613065743.737102-13-apatel@ventanamicro.com>

On 6/12/25 11:57 PM, Anup Patel wrote:
> Currently, all kvm_riscv_hfence_xyz() APIs assume the VMID to be the
> host VMID of the Guest/VM, which restricts these APIs to host TLB
> maintenance only. Let's allow passing the VMID as a parameter to all
> kvm_riscv_hfence_xyz() APIs so that they can be reused for nested
> virtualization related TLB maintenance.
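To make the new calling convention concrete, here is a minimal
before/after sketch of a host-side flush. The caller shown is
hypothetical, but the signatures and the READ_ONCE() pattern are taken
from the hunks quoted below:

	/* Before: the helper fetched the host VMID internally. */
	kvm_riscv_hfence_vvma_all(kvm, hbase, hmask);

	/* After: the VMID is explicit. Host paths read it from
	 * kvm->arch.vmid; other paths may pass a different value. */
	unsigned long vmid = READ_ONCE(kvm->arch.vmid.vmid);

	kvm_riscv_hfence_vvma_all(kvm, hbase, hmask, vmid);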
>
> Signed-off-by: Anup Patel
> ---
>  arch/riscv/include/asm/kvm_tlb.h  | 17 ++++++---
>  arch/riscv/kvm/gstage.c           |  3 +-
>  arch/riscv/kvm/tlb.c              | 61 ++++++++++++++++++++-----------
>  arch/riscv/kvm/vcpu_sbi_replace.c | 17 +++++----
>  arch/riscv/kvm/vcpu_sbi_v01.c     | 25 ++++++-------
>  5 files changed, 73 insertions(+), 50 deletions(-)
>
> diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
> index f67e03edeaec..38a2f933ad3a 100644
> --- a/arch/riscv/include/asm/kvm_tlb.h
> +++ b/arch/riscv/include/asm/kvm_tlb.h
> @@ -11,9 +11,11 @@
>  enum kvm_riscv_hfence_type {
>  	KVM_RISCV_HFENCE_UNKNOWN = 0,
>  	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
> +	KVM_RISCV_HFENCE_GVMA_VMID_ALL,
>  	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
>  	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
>  	KVM_RISCV_HFENCE_VVMA_GVA,
> +	KVM_RISCV_HFENCE_VVMA_ALL
>  };
>
>  struct kvm_riscv_hfence {
> @@ -59,21 +61,24 @@ void kvm_riscv_fence_i(struct kvm *kvm,
>  void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
>  				    unsigned long hbase, unsigned long hmask,
>  				    gpa_t gpa, gpa_t gpsz,
> -				    unsigned long order);
> +				    unsigned long order, unsigned long vmid);
>  void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
> -				    unsigned long hbase, unsigned long hmask);
> +				    unsigned long hbase, unsigned long hmask,
> +				    unsigned long vmid);
>  void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
>  				    unsigned long hbase, unsigned long hmask,
>  				    unsigned long gva, unsigned long gvsz,
> -				    unsigned long order, unsigned long asid);
> +				    unsigned long order, unsigned long asid,
> +				    unsigned long vmid);
>  void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
>  				    unsigned long hbase, unsigned long hmask,
> -				    unsigned long asid);
> +				    unsigned long asid, unsigned long vmid);
>  void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
>  			       unsigned long hbase, unsigned long hmask,
>  			       unsigned long gva, unsigned long gvsz,
> -			       unsigned long order);
> +			       unsigned long order, unsigned long vmid);
>  void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
> -			       unsigned long hbase, unsigned long hmask);
> +			       unsigned long hbase, unsigned long hmask,
> +			       unsigned long vmid);
>
>  #endif
> diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c
> index 9c7c44f09b05..24c270d6d0e2 100644
> --- a/arch/riscv/kvm/gstage.c
> +++ b/arch/riscv/kvm/gstage.c
> @@ -117,7 +117,8 @@ static void gstage_tlb_flush(struct kvm_gstage *gstage, u32 level, gpa_t addr)
>  	if (gstage->flags & KVM_GSTAGE_FLAGS_LOCAL)
>  		kvm_riscv_local_hfence_gvma_vmid_gpa(gstage->vmid, addr, BIT(order), order);
>  	else
> -		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order);
> +		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order,
> +					       gstage->vmid);
>  }
>
>  int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage,
> diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
> index 349fcfc93f54..3c5a70a2b927 100644
> --- a/arch/riscv/kvm/tlb.c
> +++ b/arch/riscv/kvm/tlb.c
> @@ -251,6 +251,12 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
>  			kvm_riscv_local_hfence_gvma_vmid_gpa(d.vmid, d.addr,
>  							     d.size, d.order);
>  			break;
> +		case KVM_RISCV_HFENCE_GVMA_VMID_ALL:
> +			if (kvm_riscv_nacl_available())
> +				nacl_hfence_gvma_vmid_all(nacl_shmem(), d.vmid);
> +			else
> +				kvm_riscv_local_hfence_gvma_vmid_all(d.vmid);
> +			break;
>  		case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
>  			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
>  			if (kvm_riscv_nacl_available())
> @@ -276,6 +282,13 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
>  			kvm_riscv_local_hfence_vvma_gva(d.vmid, d.addr,
>  							d.size, d.order);
>  			break;
> +		case KVM_RISCV_HFENCE_VVMA_ALL:
> +			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD);
> +			if (kvm_riscv_nacl_available())
> +				nacl_hfence_vvma_all(nacl_shmem(), d.vmid);
> +			else
> +				kvm_riscv_local_hfence_vvma_all(d.vmid);
> +			break;
>  		default:
>  			break;
>  		}
> @@ -328,14 +341,13 @@ void kvm_riscv_fence_i(struct kvm *kvm,
>  void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
>  				    unsigned long hbase, unsigned long hmask,
>  				    gpa_t gpa, gpa_t gpsz,
> -				    unsigned long order)
> +				    unsigned long order, unsigned long vmid)
>  {
> -	struct kvm_vmid *v = &kvm->arch.vmid;
>  	struct kvm_riscv_hfence data;
>
>  	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
>  	data.asid = 0;
> -	data.vmid = READ_ONCE(v->vmid);
> +	data.vmid = vmid;
>  	data.addr = gpa;
>  	data.size = gpsz;
>  	data.order = order;
> @@ -344,23 +356,28 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
>  }
>
>  void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
> -				    unsigned long hbase, unsigned long hmask)
> +				    unsigned long hbase, unsigned long hmask,
> +				    unsigned long vmid)
>  {
> -	make_xfence_request(kvm, hbase, hmask, KVM_REQ_TLB_FLUSH,
> -			    KVM_REQ_TLB_FLUSH, NULL);
> +	struct kvm_riscv_hfence data = {0};
> +
> +	data.type = KVM_RISCV_HFENCE_GVMA_VMID_ALL;
> +	data.vmid = vmid;
> +	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
> +			    KVM_REQ_TLB_FLUSH, &data);
>  }
>
>  void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
>  				    unsigned long hbase, unsigned long hmask,
>  				    unsigned long gva, unsigned long gvsz,
> -				    unsigned long order, unsigned long asid)
> +				    unsigned long order, unsigned long asid,
> +				    unsigned long vmid)
>  {
> -	struct kvm_vmid *v = &kvm->arch.vmid;
>  	struct kvm_riscv_hfence data;
>
>  	data.type = KVM_RISCV_HFENCE_VVMA_ASID_GVA;
>  	data.asid = asid;
> -	data.vmid = READ_ONCE(v->vmid);
> +	data.vmid = vmid;
>  	data.addr = gva;
>  	data.size = gvsz;
>  	data.order = order;
> @@ -370,15 +387,13 @@ void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
>
>  void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
>  				    unsigned long hbase, unsigned long hmask,
> -				    unsigned long asid)
> +				    unsigned long asid, unsigned long vmid)
>  {
> -	struct kvm_vmid *v = &kvm->arch.vmid;
> -	struct kvm_riscv_hfence data;
> +	struct kvm_riscv_hfence data = {0};
>
>  	data.type = KVM_RISCV_HFENCE_VVMA_ASID_ALL;
>  	data.asid = asid;
> -	data.vmid = READ_ONCE(v->vmid);
> -	data.addr = data.size = data.order = 0;
> +	data.vmid = vmid;
>  	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
>  			    KVM_REQ_HFENCE_VVMA_ALL, &data);
>  }
> @@ -386,14 +401,13 @@ void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
>  void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
>  			       unsigned long hbase, unsigned long hmask,
>  			       unsigned long gva, unsigned long gvsz,
> -			       unsigned long order)
> +			       unsigned long order, unsigned long vmid)
>  {
> -	struct kvm_vmid *v = &kvm->arch.vmid;
>  	struct kvm_riscv_hfence data;
>
>  	data.type = KVM_RISCV_HFENCE_VVMA_GVA;
>  	data.asid = 0;
> -	data.vmid = READ_ONCE(v->vmid);
> +	data.vmid = vmid;
>  	data.addr = gva;
>  	data.size = gvsz;
>  	data.order = order;
> @@ -402,16 +416,21 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
>  }
>
>  void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
> -			       unsigned long hbase, unsigned long hmask)
> +			       unsigned long hbase, unsigned long hmask,
> +			       unsigned long vmid)
>  {
> -	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_VVMA_ALL,
> -			    KVM_REQ_HFENCE_VVMA_ALL, NULL);
> +	struct kvm_riscv_hfence data = {0};
> +
> +	data.type = KVM_RISCV_HFENCE_VVMA_ALL;
> +	data.vmid = vmid;
> +	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
> +			    KVM_REQ_HFENCE_VVMA_ALL, &data);
>  }
>
>  int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
>  {
>  	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0,
>  				       gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT,
> -				       PAGE_SHIFT);
> +				       PAGE_SHIFT, READ_ONCE(kvm->arch.vmid.vmid));
>  	return 0;
>  }
> diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
> index b17fad091bab..b490ed1428a6 100644
> --- a/arch/riscv/kvm/vcpu_sbi_replace.c
> +++ b/arch/riscv/kvm/vcpu_sbi_replace.c
> @@ -96,6 +96,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
>  	unsigned long hmask = cp->a0;
>  	unsigned long hbase = cp->a1;
>  	unsigned long funcid = cp->a6;
> +	unsigned long vmid;
>
>  	switch (funcid) {
>  	case SBI_EXT_RFENCE_REMOTE_FENCE_I:
> @@ -103,22 +104,22 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
>  		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
>  		break;
>  	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
> +		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
>  		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
> -			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
> +			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask, vmid);

This patch doesn't apply cleanly on 6.16-rc1.

<<<<<<< HEAD
		if (cp->a2 == 0 && cp->a3 == 0)
			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
=======
		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask, vmid);
>>>>>>> 57ec61198cc1 (RISC-V: KVM: Pass VMID as parameter to kvm_riscv_hfence_xyz() APIs)
		else
			kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
						  cp->a2, cp->a3, PAGE_SHIFT, vmid);
		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
		break;
	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
<<<<<<< HEAD
		if (cp->a2 == 0 && cp->a3 == 0)
			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
						       hbase, hmask, cp->a4);
=======
		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, hbase, hmask,
						       cp->a4, vmid);
>>>>>>> 57ec61198cc1 (RISC-V: KVM: Pass VMID as parameter to kvm_riscv_hfence_xyz() APIs)

>  		else
>  			kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
> -						  cp->a2, cp->a3, PAGE_SHIFT);
> +						  cp->a2, cp->a3, PAGE_SHIFT, vmid);
>  		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
>  		break;
>  	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
> +		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
>  		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
> -			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
> -						       hbase, hmask, cp->a4);
> +			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, hbase, hmask,
> +						       cp->a4, vmid);
>  		else
> -			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
> -						       hbase, hmask,
> -						       cp->a2, cp->a3,
> -						       PAGE_SHIFT, cp->a4);
> +			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, hbase, hmask, cp->a2,
> +						       cp->a3, PAGE_SHIFT, cp->a4, vmid);
>  		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
>  		break;
>  	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
> diff --git a/arch/riscv/kvm/vcpu_sbi_v01.c b/arch/riscv/kvm/vcpu_sbi_v01.c
> index 8f4c4fa16227..368dfddd23d9 100644
> --- a/arch/riscv/kvm/vcpu_sbi_v01.c
> +++ b/arch/riscv/kvm/vcpu_sbi_v01.c
> @@ -23,6 +23,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  	struct kvm *kvm = vcpu->kvm;
>  	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
>  	struct kvm_cpu_trap *utrap = retdata->utrap;
> +	unsigned long vmid;
>
>  	switch (cp->a7) {
>  	case SBI_EXT_0_1_CONSOLE_GETCHAR:
> @@ -78,25 +79,21 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
>  			kvm_riscv_fence_i(vcpu->kvm, 0, hmask);
>  		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA) {
> +			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
>  			if (cp->a1 == 0 && cp->a2 == 0)
> -				kvm_riscv_hfence_vvma_all(vcpu->kvm,
> -							  0, hmask);
> +				kvm_riscv_hfence_vvma_all(vcpu->kvm, 0, hmask, vmid);
>  			else
> -				kvm_riscv_hfence_vvma_gva(vcpu->kvm,
> -							  0, hmask,
> -							  cp->a1, cp->a2,
> -							  PAGE_SHIFT);
> +				kvm_riscv_hfence_vvma_gva(vcpu->kvm, 0, hmask, cp->a1,
> +							  cp->a2, PAGE_SHIFT, vmid);
>  		} else {
> +			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
>  			if (cp->a1 == 0 && cp->a2 == 0)
> -				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
> -							       0, hmask,
> -							       cp->a3);
> +				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, 0, hmask,
> +							       cp->a3, vmid);
>  			else
> -				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
> -							       0, hmask,
> -							       cp->a1, cp->a2,
> -							       PAGE_SHIFT,
> -							       cp->a3);
> +				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, 0, hmask,
> +							       cp->a1, cp->a2, PAGE_SHIFT,
> +							       cp->a3, vmid);
>  		}
>  		break;
>  	default:
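With the VMID now a parameter, the nested-virtualization reuse the
commit message describes would amount to issuing the same fences for a
non-host VMID. A purely illustrative sketch (shadow_vmid is a
hypothetical value; this patch only adds the parameter):

	/* Hypothetical nested-virt caller: flush all G-stage mappings
	 * tagged with a shadow VMID instead of the host VMID. */
	kvm_riscv_hfence_gvma_vmid_all(kvm, -1UL, 0, shadow_vmid);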