From: Thomas Gleixner
To: Baoquan He
Cc: Uladzislau Rezki, Andrew Morton, linux-mm@kvack.org, Christoph Hellwig,
    Lorenzo Stoakes, Peter Zijlstra, John Ogness,
    linux-arm-kernel@lists.infradead.org, Russell King, Mark Rutland,
    Marc Zyngier
Subject: Re: Excessive TLB flush ranges
References: <87a5y5a6kj.ffs@tglx> <87o7mk93tc.ffs@tglx>
Date: Tue, 16 May 2023 10:54:09 +0200
Message-ID: <878rdo8xn2.ffs@tglx>

On Tue, May 16 2023 at 16:07, Baoquan He wrote:
> On 05/16/23 at 08:40am, Thomas Gleixner wrote:
>> On Tue, May 16 2023 at 10:26, Baoquan He wrote:
>> > On 05/15/23 at 08:17pm, Uladzislau Rezki wrote:
>> >> For systems
>> >> which lack a full TLB flush, and for which flushing a long range is
>> >> a problem (it takes time), we could probably flush the VAs one by one,
>> >> because currently we calculate a flush range [min:max] and that range
>> >> includes space that might not be mapped at all. Like below:
>> >
>> > It's fine if we only calculate a flush range [min:max] over the VAs. But in
>> > vm_reset_perms(), the flush range is calculated from the impacted direct
>> > mapping range and then merged with the VAs' range. That looks really
>> > strange and surprising. If the vm->pages[] come from a lower part of
>> > physical memory, the final merged flush will span a tremendous range. I
>> > wonder why we need to merge the direct map range with the VA range before
>> > flushing. Not sure if I misunderstand it.
>>
>> So what happens on this BPF teardown is:
>>
>> The vfree(8k) ends up flushing 3 entries: the actual vmalloc part (2) and
>> one extra which is in the direct map. I haven't verified that yet, but I
>> assume it's the alias of one of the vmalloc'ed pages.
>
> That looks like the reason. As Uladzislau pointed out, architectures may
> have a full TLB flush, so they won't get in trouble with the merged flush
> calculated as [min:max], e.g. arm64's and x86's flush_tlb_kernel_range().
> However, arm32 seems to lack the ability to do a full TLB flush.

ARM has a full flush, but it does not check for that in
flush_tlb_kernel_range().

> If agreed, I can make a draft patch to do the flush for the direct map and
> the VAs separately, and see if it works.

Of course it works. I have already done that. But you are missing the
point. Look at the examples I provided. The current implementation ends up
doing a full flush on x86 just to flush 3 TLB entries, for the very same
reason: the flush range (start..end) becomes insanely large due to the
direct map and vmalloc parts.

But doing individual flushes for the direct map and vmalloc space is silly
too, because then we end up doing two IPIs instead of one.
IPIs are expensive, and the whole point of coalescing the flushes is to
spare IPIs, no?

So with my hacked-up flush_tlb_kernel_vas() I end up having exactly _one_
IPI which walks the list and flushes the 3 TLB entries.

Thanks,

        tglx