From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 16 May 2023 18:05:26 +0800
From: Baoquan He
To: Thomas Gleixner
Cc: "Russell King (Oracle)", Andrew Morton, linux-mm@kvack.org,
	Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes,
	Peter Zijlstra, John Ogness, linux-arm-kernel@lists.infradead.org,
	Mark Rutland, Marc Zyngier, x86@kernel.org
Subject: Re: Excessive TLB flush ranges
References: <87a5y5a6kj.ffs@tglx> <87353x9y3l.ffs@tglx> <87zg658fla.ffs@tglx>
	<87r0rg93z5.ffs@tglx> <87ilcs8zab.ffs@tglx> <87fs7w8z6y.ffs@tglx>
	<874joc8x7d.ffs@tglx>
In-Reply-To: <874joc8x7d.ffs@tglx>

On 05/16/23 at 11:03am, Thomas Gleixner wrote:
......
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1724,7 +1724,8 @@ static void purge_fragmented_blocks_allc
>   */
>  static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
>  {
> -	unsigned long resched_threshold;
> +	unsigned long resched_threshold, num_entries = 0, num_alias_entries = 0;
> +	struct vmap_area alias_va = { .va_start = start, .va_end = end };

Note that the start and end passed in do not cover only the direct map
range that aliases the va. They are the result of merging the direct
map range with all the dirty ranges of the per-CPU vbq in
_vm_unmap_aliases(). We may need to apply the draft code below on top
of your patch so that the direct map range is at least flushed
separately.

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 7672c2422f0c..beaaa2f983d3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1722,13 +1722,15 @@ static void purge_fragmented_blocks_allcpus(void);
 /*
  * Purges all lazily-freed vmap areas.
  */
-static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
+static bool __purge_vmap_area_lazy(struct range *range)
 {
 	unsigned long resched_threshold, num_entries = 0, num_alias_entries = 0;
-	struct vmap_area alias_va = { .va_start = start, .va_end = end };
+	struct vmap_area alias_va = { .va_start = range[0].start, .va_end = range[0].end };
+	struct vmap_area dirty_va = { .va_start = range[1].start, .va_end = range[1].end };
 	unsigned int num_purged_areas = 0;
 	struct list_head local_purge_list;
 	struct vmap_area *va, *n_va;
+	unsigned long start = ULONG_MAX, end = 0;
 
 	lockdep_assert_held(&vmap_purge_lock);
 
@@ -1737,6 +1739,10 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	list_replace_init(&purge_vmap_area_list, &local_purge_list);
 	spin_unlock(&purge_vmap_area_lock);
 
+	start = alias_va.va_start;
+	end = alias_va.va_end;
+	start = min(start, dirty_va.va_start);
+	end = max(end, dirty_va.va_end);
 	start = min(start, list_first_entry(&local_purge_list, struct vmap_area, list)->va_start);
 	end = max(end, list_last_entry(&local_purge_list, struct vmap_area, list)->va_end);
 
@@ -1752,6 +1758,10 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 		list_add(&alias_va.list, &local_purge_list);
 	}
 
+	if (dirty_va.va_end > dirty_va.va_start) {
+		num_alias_entries += (dirty_va.va_end - dirty_va.va_start) >> PAGE_SHIFT;
+		list_add(&dirty_va.list, &local_purge_list);
+	}
 	flush_tlb_kernel_vas(&local_purge_list, num_entries + num_alias_entries);
 
 	if (num_alias_entries)
@@ -2236,15 +2246,18 @@ static void vb_free(unsigned long addr, unsigned long size)
 	spin_unlock(&vb->lock);
 }
 
-static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
+static void _vm_unmap_aliases(unsigned long dm_start, unsigned long dm_end, int flush)
 {
 	int cpu;
+	struct range range[2];
+	unsigned long start = ULONG_MAX, end = 0;
 
 	if (unlikely(!vmap_initialized))
 		return;
 
 	might_sleep();
 
+	range[0] = (struct range){ .start = dm_start, .end = dm_end };
 	for_each_possible_cpu(cpu) {
 		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
 		struct vmap_block *vb;
@@ -2269,6 +2282,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 		rcu_read_unlock();
 	}
 
+	range[1] = (struct range){ .start = start, .end = end };
 	mutex_lock(&vmap_purge_lock);
 	purge_fragmented_blocks_allcpus();
-	if (!__purge_vmap_area_lazy(start, end) && flush)
+	if (!__purge_vmap_area_lazy(range) && flush)
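
As a side note for anyone following the thread, here is a minimal
userspace sketch (plain C, not kernel code; the two address ranges and
PAGE_SHIFT = 12 are made-up values for illustration) of why handing
the flush path one pre-merged (start, end) pair is costly: the merged
span covers everything between two far-apart ranges, while flushing
them as separate entries only touches their own pages.

#include <stdio.h>

#define PAGE_SHIFT 12

struct range {
	unsigned long start;
	unsigned long end;
};

static unsigned long span_pages(struct range r)
{
	return (r.end - r.start) >> PAGE_SHIFT;
}

int main(void)
{
	/* Made-up example: a 4-page direct map alias range and a 2-page
	 * vbq dirty range that sit far apart in the address space. */
	struct range alias = { 0xffff888000100000UL, 0xffff888000104000UL };
	struct range dirty = { 0xffffc90000000000UL, 0xffffc90000002000UL };

	/* What a single (start, end) parameter forces: one merged span
	 * from the lowest start to the highest end. */
	struct range merged = {
		alias.start < dirty.start ? alias.start : dirty.start,
		alias.end > dirty.end ? alias.end : dirty.end,
	};

	printf("merged span:    %lu pages\n", span_pages(merged));
	printf("separate spans: %lu pages\n",
	       span_pages(alias) + span_pages(dirty));
	return 0;
}

The merged span comes out billions of pages wide while the separate
spans total six; that gap is what the draft above avoids by keeping
the direct map range and the vbq dirty range as two vmap_area entries
on the purge list handed to flush_tlb_kernel_vas().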