Date: Fri, 19 May 2023 19:49:28 +0800
From: Baoquan He
To: Thomas Gleixner
Cc: "Russell King (Oracle)", Andrew Morton, linux-mm@kvack.org, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra, John Ogness, linux-arm-kernel@lists.infradead.org, Mark Rutland, Marc Zyngier, x86@kernel.org
Subject: Re: Excessive TLB flush ranges
In-Reply-To: <875y8o5zwm.ffs@tglx>
On 05/19/23 at 01:22pm, Thomas Gleixner wrote:
> On Wed, May 17 2023 at 18:52, Baoquan He wrote:
> > On 05/17/23 at 11:38am, Thomas Gleixner wrote:
> >> On Tue, May 16 2023 at 21:03, Thomas Gleixner wrote:
> >> >
> >> > Aside of that, if I read the code correctly, then if there is an unmap
> >> > via vb_free() which does not cover the whole vmap block, then vb->dirty
> >> > is set and every _vm_unmap_aliases() invocation flushes that dirty range
> >> > over and over until that vmap block is completely freed, no?
> >>
> >> Something like the below would cure that.
> >>
> >> While it prevents that this is flushed forever, it does not cure the
> >> eventually overly broad flush when the block is completely dirty and
> >> purged:
> >>
> >> Assume a block with 1024 pages, where 1022 pages are already freed and
> >> TLB flushed. Now the last 2 pages are freed and the block is purged,
> >> which results in a flush of 1024 pages where 1022 are already done,
> >> right?
> >
> > This is a good idea; I was thinking about how to reply to your last mail
> > and how to fix this. But your cure code may not work well. Please see
> > the inline comment below.
>
> See below.
>
> > One vmap block has 64 pages.
> > #define VMAP_MAX_ALLOC		BITS_PER_LONG	/* 256K with 4K pages */
>
> No, VMAP_MAX_ALLOC is the allocation limit for a single vb_alloc().
> On 64bit it has at least 128 pages, but can have up to 1024:
>
> #define VMAP_BBMAP_BITS_MAX	1024	/* 4MB with 4K pages */
> #define VMAP_BBMAP_BITS_MIN	(VMAP_MAX_ALLOC*2)
>
> and then some magic happens to calculate the actual size
>
> #define VMAP_BBMAP_BITS \
>	VMAP_MIN(VMAP_BBMAP_BITS_MAX, \
>		VMAP_MAX(VMAP_BBMAP_BITS_MIN, \
>			VMALLOC_PAGES / roundup_pow_of_two(NR_CPUS) / 16))
>
> which is in a range of (2*BITS_PER_LONG) ... 1024.
>
> The actual vmap block size is:
>
> #define VMAP_BLOCK_SIZE	(VMAP_BBMAP_BITS * PAGE_SIZE)

You are right, it's 1024. I was dizzy at that time.

> Which is then obviously something between 512k and 4MB on 64bit and
> between 256k and 4MB on 32bit.
>
> >> @@ -2240,13 +2240,17 @@ static void _vm_unmap_aliases(unsigned l
> >> 	rcu_read_lock();
> >> 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> >> 		spin_lock(&vb->lock);
> >> -		if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
> >> +		if (vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
> >> 			unsigned long va_start = vb->va->va_start;
> >> 			unsigned long s, e;
> >
> > When vb_free() is invoked, it could leave three kinds of vmap_block as
> > below. Your code works well for the 2nd case, but may not for the 1st
> > one. And the 2nd one is the kind we reclaim and put into the purge
> > list in purge_fragmented_blocks_allcpus().
> >
> > 1)
> > |-----|------------|-----------|-------|
> > |dirty|still mapped|   dirty   | free  |
> >
> > 2)
> > |------------------------------|-------|
> > |            dirty             | free  |
>
> You sure? The first one is put into the purge list too.

No way. You didn't copy the essential code here. The key line is the
calculation of vb->dirty. ->dirty_min and ->dirty_max only provide a
loose value for calculating the flush range. Counting a page more or
less into ->dirty_min or ->dirty_max doesn't matter much; it only makes
the flush do some meaningless work. Counting ->dirty wrong, however,
causes serious problems.
If you put case 1 into the purge list, freeing it later will fail
because it can't be found in the vmap_area_root tree. Please check
vfree() and remove_vm_area().

	/* Expand dirty range */
	vb->dirty_min = min(vb->dirty_min, offset);
	vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));
	vb->dirty += 1UL << order;

Please note the check for the 2nd case below. It means there's only
free and dirty space, and dirty hasn't reached VMAP_BBMAP_BITS, so it's
case 2:

	(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS)

By the way, I made an RFC patchset based on your patch and your earlier
mail in which you raised some questions. I will add it here; please help
check whether it's worth posting for discussion and review.

> 	/* Expand dirty range */
> 	vb->dirty_min = min(vb->dirty_min, offset);
> 	vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));
>
>                 pages   bits    dirtymin         dirtymax
>   vb_alloc(A)   2       0 - 1   VMAP_BBMAP_BITS  0
>   vb_alloc(B)   4       2 - 5
>   vb_alloc(C)   2       6 - 7
>
> So you get three variants:
>
> 1) Flush after freeing A
>
>    vb_free(A)   2       0 - 1   0                1
>    Flush                        VMAP_BBMAP_BITS  0   <- correct
>    vb_free(C)   2       6 - 7   6                7
>    Flush                        VMAP_BBMAP_BITS  0   <- correct
>
> 2) No flush between freeing A and C
>
>    vb_free(A)   2       0 - 1   0                1
>    vb_free(C)   2       6 - 7   0                7
>    Flush                        VMAP_BBMAP_BITS  0   <- overbroad flush
>
> 3) No flush between freeing A, C, B
>
>    vb_free(A)   2       0 - 1   0                1
>    vb_free(C)   2       6 - 7   0                7
>    vb_free(B)   4       2 - 5   0                7
>    Flush                        VMAP_BBMAP_BITS  0   <- correct
>
> So my quick hack makes it correct for #1 and #3 and prevents repeated
> flushes of already flushed areas.
>
> To prevent #2 you need a bitmap which keeps track of the flushed areas.

I made a draft patchset based on your earlier mail,

> Thanks,
>
>	tglx

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel