From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nicholas Piggin
Subject: [RFC PATCH 1/3] Revert "mm: always flush VMA ranges affected by zap_page_range"
Date: Tue, 12 Jun 2018 17:16:19 +1000
Message-ID: <20180612071621.26775-2-npiggin@gmail.com>
References: <20180612071621.26775-1-npiggin@gmail.com>
In-Reply-To: <20180612071621.26775-1-npiggin@gmail.com>
To: linux-mm@kvack.org
Cc: linux-arch@vger.kernel.org, Nadav Amit, Mel Gorman, Linus Torvalds,
	Nicholas Piggin, Minchan Kim, "Aneesh Kumar K . V", Andrew Morton,
	linuxppc-dev@lists.ozlabs.org

This reverts commit 4647706ebeee6e50f7b9f922b095f4ec94d581c3.

Patch 99baac21e4585 ("mm: fix MADV_[FREE|DONTNEED] TLB flush miss
problem") provides a superset of the TLB flush coverage of this
commit, and even includes in its changelog "this patch supersedes
'mm: Always flush VMA ranges affected by zap_page_range v2'".
Reverting this avoids double flushing the TLB range, and avoids the
less efficient flush_tlb_range() call (the mmu_gather API is more
precise about what ranges it invalidates).

Signed-off-by: Nicholas Piggin
---
 mm/memory.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 7206a634270b..9d472e00fc2d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1603,20 +1603,8 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
 	tlb_gather_mmu(&tlb, mm, start, end);
 	update_hiwater_rss(mm);
 	mmu_notifier_invalidate_range_start(mm, start, end);
-	for ( ; vma && vma->vm_start < end; vma = vma->vm_next) {
+	for ( ; vma && vma->vm_start < end; vma = vma->vm_next)
 		unmap_single_vma(&tlb, vma, start, end, NULL);
-
-		/*
-		 * zap_page_range does not specify whether mmap_sem should be
-		 * held for read or write. That allows parallel zap_page_range
-		 * operations to unmap a PTE and defer a flush meaning that
-		 * this call observes pte_none and fails to flush the TLB.
-		 * Rather than adding a complex API, ensure that no stale
-		 * TLB entries exist when this call returns.
-		 */
-		flush_tlb_range(vma, start, end);
-	}
-
 	mmu_notifier_invalidate_range_end(mm, start, end);
 	tlb_finish_mmu(&tlb, start, end);
 }
-- 
2.17.0