From: Thomas Gleixner
To: "Russell King (Oracle)"
Cc: Andrew Morton, linux-mm@kvack.org, Christoph Hellwig,
    Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra, Baoquan He,
    John Ogness, linux-arm-kernel@lists.infradead.org, Mark Rutland,
    Marc Zyngier, x86@kernel.org
Subject: Re: Excessive TLB flush ranges
In-Reply-To: <87353x9y3l.ffs@tglx>
References: <87a5y5a6kj.ffs@tglx> <87353x9y3l.ffs@tglx>
Date: Mon, 15 May 2023 23:11:45 +0200
Message-ID: <87zg658fla.ffs@tglx>

On Mon, May 15 2023 at 21:46, Thomas Gleixner wrote:
> On Mon, May 15 2023 at 17:59, Russell King wrote:
>> On Mon, May 15, 2023 at 06:43:40PM +0200, Thomas Gleixner wrote:
> That reproduces in a VM easily and has exactly the same behaviour:
>
>   Extra page[s] via           The actual allocation
>   _vm_unmap_aliases()  Pages                    Pages  Flush start       Pages
>
> alloc:                        ffffc9000058e000  2
> free : ffff888144751000  1    ffffc9000058e000  2      ffff888144751000  17312759359
>
> alloc:                        ffffc90000595000  2
> free : ffff8881424f0000  1    ffffc90000595000  2      ffff8881424f0000  17312768167
>
> .....
>
> seccomp seems to install 29 BPF programs for that process. So on exit()
> this results in 29 full TLB flushes on x86, where each of them is used
> to flush exactly three TLB entries.
>
> The actual two page allocation (ffffc9...) is in the vmalloc space; the
> extra page (ffff88...) is in the direct mapping.

I tried flushing them one by one, which is actually slightly slower.
That's not surprising: there are 3 * 29 IPIs instead of 29, and the IPIs
dominate the picture. That's not necessarily true for ARM32, though, as
there are no IPIs involved on the machine we are using, a dual-core
Cortex-A9.

So I came up with the hack below, which is as fast as the full-flush
variant, while the performance impact on the other CPUs is marginally
lower according to perf.

It should probably take another argument telling the architecture how
many TLB entries the flush affects (3 in this example), so the
architecture can sensibly decide whether a full flush is worth it.
Thanks,

        tglx
---
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1728,6 +1728,7 @@ static bool __purge_vmap_area_lazy(unsig
 	unsigned int num_purged_areas = 0;
 	struct list_head local_purge_list;
 	struct vmap_area *va, *n_va;
+	struct vmap_area tmp = { .va_start = start, .va_end = end };
 
 	lockdep_assert_held(&vmap_purge_lock);
 
@@ -1747,7 +1748,12 @@ static bool __purge_vmap_area_lazy(unsig
 		  list_last_entry(&local_purge_list,
 			struct vmap_area, list)->va_end);
 
-	flush_tlb_kernel_range(start, end);
+	if (tmp.va_end > tmp.va_start)
+		list_add(&tmp.list, &local_purge_list);
+	flush_tlb_kernel_vas(&local_purge_list);
+	if (tmp.va_end > tmp.va_start)
+		list_del(&tmp.list);
+
 	resched_threshold = lazy_max_pages() << 1;
 
 	spin_lock(&free_vmap_area_lock);
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -10,6 +10,7 @@
 #include <linux/sched/smt.h>
 #include <linux/task_work.h>
 #include <linux/mmu_notifier.h>
+#include <linux/vmalloc.h>
 
 #include <asm/tlbflush.h>
 #include <asm/mmu_context.h>
@@ -1081,6 +1082,24 @@ void flush_tlb_kernel_range(unsigned lon
 	}
 }
 
+static void do_flush_vas(void *arg)
+{
+	struct list_head *list = arg;
+	struct vmap_area *va;
+	unsigned long addr;
+
+	list_for_each_entry(va, list, list) {
+		/* Flush the range one page at a time with 'invlpg' */
+		for (addr = va->va_start; addr < va->va_end; addr += PAGE_SIZE)
+			flush_tlb_one_kernel(addr);
+	}
+}
+
+void flush_tlb_kernel_vas(struct list_head *list)
+{
+	on_each_cpu(do_flush_vas, list, 1);
+}
+
 /*
  * This can be used from process context to figure out what the value of
  * CR3 is without needing to do a (slow) __read_cr3().
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -295,4 +295,6 @@ bool vmalloc_dump_obj(void *object);
 static inline bool vmalloc_dump_obj(void *object) { return false; }
 #endif
 
+void flush_tlb_kernel_vas(struct list_head *list);
+
 #endif /* _LINUX_VMALLOC_H */

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel