From: Shaoqin Huang <shahuang@redhat.com>
To: Raghavendra Rao Ananta <rananta@google.com>,
	Oliver Upton <oliver.upton@linux.dev>,
	Marc Zyngier <maz@kernel.org>, James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	Zenghui Yu <yuzenghui@huawei.com>,
	Anup Patel <anup@brainfault.org>,
	Atish Patra <atishp@atishpatra.org>,
	Jing Zhang <jingzhangos@google.com>,
	Reiji Watanabe <reijiw@google.com>,
	Colton Lewis <coltonlewis@google.com>,
	David Matlack <dmatlack@google.com>,
	Fuad Tabba <tabba@google.com>,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org
Subject: Re: [PATCH v8 14/14] KVM: arm64: Use TLBI range-based instructions for unmap
Date: Fri, 11 Aug 2023 11:13:46 +0800	[thread overview]
Message-ID: <5ca5e4ed-82f0-369b-db61-7fcd1c148f1c@redhat.com> (raw)
In-Reply-To: <20230808231330.3855936-15-rananta@google.com>



On 8/9/23 07:13, Raghavendra Rao Ananta wrote:
> The current implementation of the stage-2 unmap walker traverses
> the given range and, as part of break-before-make, performs a
> TLB invalidation with a DSB for every PTE. Repeating this
> combination for every entry can become a performance bottleneck
> on some systems.
> 
> Hence, if the system supports FEAT_TLBIRANGE, defer the TLB
> invalidations until the entire walk is finished, and then
> use range-based instructions to invalidate the TLBs in one go.
> Condition deferred TLB invalidation on the system supporting FWB,
> as the optimization is entirely pointless when the unmap walker
> needs to perform CMOs.
> 
> Rename stage2_put_pte() to stage2_unmap_put_pte(), as the function
> now serves the stage-2 unmap walker specifically rather than
> being a generic helper.
> 
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
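
As a side note for readers following the thread: the win here is that
the number of TLBI+DSB pairs drops from one per PTE to one per walk.
Below is a minimal user-space model of the two strategies the commit
message contrasts (all names are made up for illustration; this is not
kernel code):

  #include <stdio.h>

  /* Counters standing in for the cost of the real TLBIs/barriers. */
  static unsigned long tlbis, dsbs;

  /* Pre-patch model: one TLBI plus one DSB per PTE. */
  static void flush_one(unsigned long ipa)
  {
          (void)ipa;
          tlbis++;
          dsbs++;
  }

  /* Patched model: a single range-based TLBI for the whole walk. */
  static void flush_range(unsigned long start, unsigned long pages)
  {
          (void)start;
          (void)pages;
          tlbis++;
          dsbs++;
  }

  int main(void)
  {
          const unsigned long pages = 512; /* e.g. 2MiB of 4KiB PTEs */
          unsigned long i;

          for (i = 0; i < pages; i++)
                  flush_one(i << 12);
          printf("per-PTE : %lu TLBIs, %lu DSBs\n", tlbis, dsbs);

          tlbis = dsbs = 0;
          flush_range(0, pages);
          printf("deferred: %lu TLBIs, %lu DSBs\n", tlbis, dsbs);
          return 0;
  }

The model prints 512 TLBIs/DSBs for the per-PTE strategy versus 1 for
the deferred one, which is the bottleneck the commit message refers to.
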
> ---
>   arch/arm64/kvm/hyp/pgtable.c | 40 +++++++++++++++++++++++++++++-------
>   1 file changed, 33 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 5ef098af17362..eaaae76481fa9 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -831,16 +831,36 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n
>   	smp_store_release(ctx->ptep, new);
>   }
>   
> -static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
> -			   struct kvm_pgtable_mm_ops *mm_ops)
> +static bool stage2_unmap_defer_tlb_flush(struct kvm_pgtable *pgt)
>   {
>   	/*
> -	 * Clear the existing PTE, and perform break-before-make with
> -	 * TLB maintenance if it was valid.
> +	 * If FEAT_TLBIRANGE is implemented, defer the individual
> +	 * TLB invalidations until the entire walk is finished, and
> +	 * then use the range-based TLBI instructions to do the
> +	 * invalidations. Condition deferred TLB invalidation on the
> +	 * system supporting FWB as the optimization is entirely
> +	 * pointless when the unmap walker needs to perform CMOs.
> +	 */
> +	return system_supports_tlb_range() && stage2_has_fwb(pgt);
> +}
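
To spell out the gate for other readers: deferral kicks in only when
both FEAT_TLBIRANGE and FWB (FEAT_S2FWB) are present. A trivial model
of the predicate, with the two kernel feature checks stubbed out (the
stub names are placeholders, not the real API):

  #include <stdbool.h>

  /* Stand-ins for system_supports_tlb_range()/stage2_has_fwb(). */
  static bool have_tlbirange;
  static bool have_fwb;

  static bool unmap_defers_tlb_flush(void)
  {
          /*
           * Without FEAT_TLBIRANGE there is no single range-based
           * invalidation to batch into; without FWB the walker has
           * to do CMOs per entry anyway, so deferring buys nothing.
           */
          return have_tlbirange && have_fwb;
  }
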
> +
> +static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
> +				struct kvm_s2_mmu *mmu,
> +				struct kvm_pgtable_mm_ops *mm_ops)
> +{
> +	struct kvm_pgtable *pgt = ctx->arg;
> +
> +	/*
> +	 * Clear the existing PTE, and perform break-before-make if it was
> +	 * valid. Depending on the system support, defer the TLB maintenance
> +	 * for the same until the entire unmap walk is completed.
>   	 */
>   	if (kvm_pte_valid(ctx->old)) {
>   		kvm_clear_pte(ctx->ptep);
> -		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
> +
> +		if (!stage2_unmap_defer_tlb_flush(pgt))
> +			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
> +					ctx->addr, ctx->level);
>   	}
>   
>   	mm_ops->put_page(ctx->ptep);
> @@ -1098,7 +1118,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
>   	 * block entry and rely on the remaining portions being faulted
>   	 * back lazily.
>   	 */
> -	stage2_put_pte(ctx, mmu, mm_ops);
> +	stage2_unmap_put_pte(ctx, mmu, mm_ops);
>   
>   	if (need_flush && mm_ops->dcache_clean_inval_poc)
>   		mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
> @@ -1112,13 +1132,19 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
>   
>   int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
>   {
> +	int ret;
>   	struct kvm_pgtable_walker walker = {
>   		.cb	= stage2_unmap_walker,
>   		.arg	= pgt,
>   		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
>   	};
>   
> -	return kvm_pgtable_walk(pgt, addr, size, &walker);
> +	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
> +	if (stage2_unmap_defer_tlb_flush(pgt))
> +		/* Perform the deferred TLB invalidations */
> +		kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);
> +
> +	return ret;
>   }
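
One detail worth noting: even if kvm_pgtable_walk() fails partway
through, the deferred invalidation still covers the entire
[addr, addr + size) range before the error is returned. Since
over-invalidation is harmless, that is conservative but safe. The
shape of the pattern in plain C (every helper below is a stub I made
up, not the real API):

  #include <stdbool.h>

  static bool defers_flush(void) { return true; }
  static int walk_and_clear(unsigned long a, unsigned long s)
  {
          (void)a; (void)s;
          return 0;                /* may stop early with an error */
  }
  static void range_flush(unsigned long a, unsigned long s)
  {
          (void)a; (void)s;        /* one invalidation for the batch */
  }

  /* Walk first, then issue one deferred range invalidation. */
  static int unmap_range(unsigned long addr, unsigned long size)
  {
          int ret;

          ret = walk_and_clear(addr, size);
          if (defers_flush())
                  range_flush(addr, size); /* full range, even on error */

          return ret;
  }
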
>   
>   struct stage2_attr_data {

-- 
Shaoqin


Thread overview: 35+ messages
2023-08-08 23:13 [PATCH v8 00/14] KVM: arm64: Add support for FEAT_TLBIRANGE Raghavendra Rao Ananta
2023-08-08 23:13 ` [PATCH v8 01/14] KVM: Rename kvm_arch_flush_remote_tlb() to kvm_arch_flush_remote_tlbs() Raghavendra Rao Ananta
2023-08-08 23:13 ` [PATCH v8 02/14] KVM: Declare kvm_arch_flush_remote_tlbs() globally Raghavendra Rao Ananta
2023-08-09  4:00   ` Gavin Shan
2023-08-09 16:38     ` Raghavendra Rao Ananta
2023-08-10 12:26       ` Shaoqin Huang
2023-08-10 20:54         ` Raghavendra Rao Ananta
2023-08-10 22:20           ` Sean Christopherson
2023-08-10 22:34             ` Raghavendra Rao Ananta
2023-08-10 23:04               ` Sean Christopherson
2023-08-11  4:09                 ` Raghavendra Rao Ananta
2023-08-11  3:18   ` Shaoqin Huang
2023-08-08 23:13 ` [PATCH v8 03/14] KVM: arm64: Use kvm_arch_flush_remote_tlbs() Raghavendra Rao Ananta
2023-08-09  4:10   ` Gavin Shan
2023-08-08 23:13 ` [PATCH v8 04/14] KVM: Remove CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL Raghavendra Rao Ananta
2023-08-09  4:11   ` Gavin Shan
2023-08-10  9:04   ` Philippe Mathieu-Daudé
2023-08-08 23:13 ` [PATCH v8 05/14] KVM: Allow range-based TLB invalidation from common code Raghavendra Rao Ananta
2023-08-09  6:09   ` Gavin Shan
2023-08-09 16:41     ` Raghavendra Rao Ananta
2023-08-08 23:13 ` [PATCH v8 06/14] KVM: Move kvm_arch_flush_remote_tlbs_memslot() to " Raghavendra Rao Ananta
2023-08-08 23:13 ` [PATCH v8 07/14] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range Raghavendra Rao Ananta
2023-08-08 23:13 ` [PATCH v8 08/14] arm64: tlb: Implement __flush_s2_tlb_range_op() Raghavendra Rao Ananta
2023-08-11  3:16   ` Shaoqin Huang
2023-08-08 23:13 ` [PATCH v8 09/14] KVM: arm64: Implement __kvm_tlb_flush_vmid_range() Raghavendra Rao Ananta
2023-08-09  0:43   ` Gavin Shan
2023-08-11  3:15   ` Shaoqin Huang
2023-08-08 23:13 ` [PATCH v8 10/14] KVM: arm64: Define kvm_tlb_flush_vmid_range() Raghavendra Rao Ananta
2023-08-08 23:13 ` [PATCH v8 11/14] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range() Raghavendra Rao Ananta
2023-08-10  1:40   ` kernel test robot
2023-08-11  3:03   ` Shaoqin Huang
2023-08-08 23:13 ` [PATCH v8 12/14] KVM: arm64: Flush only the memslot after write-protect Raghavendra Rao Ananta
2023-08-08 23:13 ` [PATCH v8 13/14] KVM: arm64: Invalidate the table entries upon a range Raghavendra Rao Ananta
2023-08-08 23:13 ` [PATCH v8 14/14] KVM: arm64: Use TLBI range-based instructions for unmap Raghavendra Rao Ananta
2023-08-11  3:13   ` Shaoqin Huang [this message]
