Date: Tue, 01 Nov 2022 12:16:27 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: mm-commits@vger.kernel.org, willy@infradead.org, vbabka@suse.cz,
 stable@vger.kernel.org, riel@surriel.com, peterx@redhat.com,
 naoya.horiguchi@linux.dev, nadav.amit@gmail.com, harperchen1110@gmail.com,
 david@redhat.com, axelrasmussen@google.com, almasrymina@google.com,
 mike.kravetz@oracle.com, akpm@linux-foundation.org
Subject: [to-be-updated] hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing.patch removed from -mm tree
Message-Id: <20221101191628.90E75C433D6@smtp.kernel.org>

The quilt patch titled
     Subject: hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing
has been removed from the -mm tree.  Its filename was
     hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Mike Kravetz <mike.kravetz@oracle.com>
Subject: hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing
Date: Mon, 31 Oct 2022 15:34:40 -0700

madvise(MADV_DONTNEED) ends up calling zap_page_range() to clear the page
tables associated with the address range.  For hugetlb vmas,
zap_page_range will call __unmap_hugepage_range_final.  However,
__unmap_hugepage_range_final assumes the passed vma is about to be removed
and deletes the vma_lock to prevent pmd sharing as the vma is on the way
out.  In the case of madvise(MADV_DONTNEED) the vma remains, but the
missing vma_lock prevents pmd sharing and could potentially lead to issues
with truncation/fault races.

This issue was originally reported here [1] as a BUG triggered in
page_try_dup_anon_rmap.  Prior to the introduction of the hugetlb
vma_lock, __unmap_hugepage_range_final cleared the VM_MAYSHARE flag to
prevent pmd sharing.  Subsequent faults on this vma were confused:
VM_MAYSHARE indicates a sharable vma, but it was no longer set, so
page_mapping was not set in new pages added to the page table.  This
resulted in pages that appeared anonymous in a VM_SHARED vma and triggered
the BUG.
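
For illustration only, the failing sequence can be driven from userspace
along the lines of the sketch below.  This is a hypothetical example, not
the reproducer from [1]; the 2MB hugepage size and preallocated hugepage
pool are assumptions.  The key point is that the vma survives the
madvise() call and is faulted into again:

/*
 * Hypothetical userspace sketch (not the reproducer from [1]): exercise
 * MADV_DONTNEED on a hugetlb mapping.  Assumes a 2MB default hugepage
 * size and a preallocated hugepage pool (e.g. vm.nr_hugepages > 0).
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (2UL * 1024 * 1024)		/* one 2MB hugepage (assumption) */

int main(void)
{
	char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 1, LEN);		/* populate the page table */
	madvise(p, LEN, MADV_DONTNEED);	/* zap ptes; the vma remains */
	memset(p, 2, LEN);		/* re-fault: still needs the vma_lock
					   that the old code had deleted */
	munmap(p, LEN);
	return 0;
}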
Create a new routine clear_hugetlb_page_range() that can be called from
madvise(MADV_DONTNEED) for hugetlb vmas.  It has the same setup as
zap_page_range, but does not delete the vma_lock.  Also, add a new zap
flag ZAP_FLAG_UNMAP to indicate an unmap call from unmap_vmas().  This is
used to indicate the 'final' unmapping of a vma.  In addition, the mmu
notification calls are moved out of __unmap_hugepage_range and into its
callers; this prevents duplicate notifications.

[1] https://lore.kernel.org/lkml/CAO4mrfdLMXsao9RF4fUE8-Wfde8xmjsKrTNMNC9wjUb6JudD0g@mail.gmail.com/

Link: https://lkml.kernel.org/r/20221031223440.285187-1-mike.kravetz@oracle.com
Fixes: 90e7e7f5ef3f ("mm: enable MADV_DONTNEED for hugetlb mappings")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reported-by: Wei Chen <harperchen1110@gmail.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
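
The hunks below only touch the zap/unmap paths; the madvise() caller side
is not quoted here.  Under the assumption that madvise_dontneed_single_vma()
in mm/madvise.c is the dispatch point, the intended usage would look
roughly like this (a sketch, not part of this patch):

/*
 * Hypothetical caller-side dispatch (not part of the hunks below): the
 * MADV_DONTNEED path keeps the vma_lock alive by using the new helper
 * for hugetlb vmas.  Function name and placement are assumptions.
 */
static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
					unsigned long start, unsigned long end)
{
	if (is_vm_hugetlb_page(vma))
		clear_hugetlb_page_range(vma, start, end);
	else
		zap_page_range(vma, start, end - start);
	return 0;
}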
---

 include/linux/hugetlb.h |    7 +++
 include/linux/mm.h      |    3 +
 mm/hugetlb.c            |   80 ++++++++++++++++++++++++++++----------
 mm/memory.c             |   18 +++++---
 4 files changed, 82 insertions(+), 26 deletions(-)

--- a/include/linux/hugetlb.h~hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing
+++ a/include/linux/hugetlb.h
@@ -156,6 +156,8 @@ long follow_hugetlb_page(struct mm_struc
 void unmap_hugepage_range(struct vm_area_struct *,
 			  unsigned long, unsigned long, struct page *,
 			  zap_flags_t);
+void clear_hugetlb_page_range(struct vm_area_struct *vma,
+			  unsigned long start, unsigned long end);
 void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma,
 			  unsigned long start, unsigned long end,
@@ -460,6 +462,11 @@ static inline void __unmap_hugepage_rang
 	BUG();
 }
 
+static void __maybe_unused clear_hugetlb_page_range(struct vm_area_struct *vma,
+			unsigned long start, unsigned long end)
+{
+}
+
 static inline vm_fault_t hugetlb_fault(struct mm_struct *mm,
 			struct vm_area_struct *vma, unsigned long address,
 			unsigned int flags)
--- a/include/linux/mm.h~hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing
+++ a/include/linux/mm.h
@@ -3475,4 +3475,7 @@ madvise_set_anon_name(struct mm_struct *
  */
 #define ZAP_FLAG_DROP_MARKER	((__force zap_flags_t) BIT(0))
 
+/* Set in unmap_vmas() to indicate an unmap call.  Only used by hugetlb */
+#define ZAP_FLAG_UNMAP		((__force zap_flags_t) BIT(1))
+
 #endif /* _LINUX_MM_H */
--- a/mm/hugetlb.c~hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing
+++ a/mm/hugetlb.c
@@ -5064,7 +5064,6 @@ static void __unmap_hugepage_range(struc
 	struct page *page;
 	struct hstate *h = hstate_vma(vma);
 	unsigned long sz = huge_page_size(h);
-	struct mmu_notifier_range range;
 	unsigned long last_addr_mask;
 	bool force_flush = false;
 
@@ -5079,13 +5078,6 @@ static void __unmap_hugepage_range(struc
 	tlb_change_page_size(tlb, sz);
 	tlb_start_vma(tlb, vma);
 
-	/*
-	 * If sharing possible, alert mmu notifiers of worst case.
-	 */
-	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, mm, start,
-				end);
-	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
-	mmu_notifier_invalidate_range_start(&range);
 	last_addr_mask = hugetlb_mask_last_page(h);
 	address = start;
 	for (; address < end; address += sz) {
@@ -5174,7 +5166,6 @@ static void __unmap_hugepage_range(struc
 		if (ref_page)
 			break;
 	}
-	mmu_notifier_invalidate_range_end(&range);
 	tlb_end_vma(tlb, vma);
 
 	/*
@@ -5194,37 +5185,86 @@ static void __unmap_hugepage_range(struc
 	tlb_flush_mmu_tlbonly(tlb);
 }
 
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+static void __unmap_hugepage_range_locking(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma, unsigned long start,
 			  unsigned long end, struct page *ref_page,
 			  zap_flags_t zap_flags)
 {
+	bool final = zap_flags & ZAP_FLAG_UNMAP;
+
 	hugetlb_vma_lock_write(vma);
 	i_mmap_lock_write(vma->vm_file->f_mapping);
 
 	__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
 
-	/*
-	 * Unlock and free the vma lock before releasing i_mmap_rwsem.  When
-	 * the vma_lock is freed, this makes the vma ineligible for pmd
-	 * sharing.  And, i_mmap_rwsem is required to set up pmd sharing.
-	 * This is important as page tables for this unmapped range will
-	 * be asynchronously deleted.  If the page tables are shared, there
-	 * will be issues when accessed by someone else.
-	 */
-	__hugetlb_vma_unlock_write_free(vma);
+	if (final) {
+		/*
+		 * Unlock and free the vma lock before releasing i_mmap_rwsem.
+		 * When the vma_lock is freed, this makes the vma ineligible
+		 * for pmd sharing.  And, i_mmap_rwsem is required to set up
+		 * pmd sharing.  This is important as page tables for this
+		 * unmapped range will be asynchronously deleted.  If the page
+		 * tables are shared, there will be issues when accessed by
+		 * someone else.
+		 */
+		__hugetlb_vma_unlock_write_free(vma);
+		i_mmap_unlock_write(vma->vm_file->f_mapping);
+	} else {
+		i_mmap_unlock_write(vma->vm_file->f_mapping);
+		hugetlb_vma_unlock_write(vma);
+	}
+}
+
+void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+			  struct vm_area_struct *vma, unsigned long start,
+			  unsigned long end, struct page *ref_page,
+			  zap_flags_t zap_flags)
+{
+	__unmap_hugepage_range_locking(tlb, vma, start, end, ref_page,
+					zap_flags);
+}
+
+#ifdef CONFIG_ADVISE_SYSCALLS
+/*
+ * Similar setup as in zap_page_range().  madvise(MADV_DONTNEED) cannot call
+ * zap_page_range for hugetlb vmas as __unmap_hugepage_range_final will delete
+ * the associated vma_lock.
+ */
+void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start,
+			unsigned long end)
+{
+	struct mmu_notifier_range range;
+	struct mmu_gather tlb;
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+				start, end);
+	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
+	tlb_gather_mmu(&tlb, vma->vm_mm);
+	update_hiwater_rss(vma->vm_mm);
+	mmu_notifier_invalidate_range_start(&range);
 
-	i_mmap_unlock_write(vma->vm_file->f_mapping);
+	__unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0);
+
+	mmu_notifier_invalidate_range_end(&range);
+	tlb_finish_mmu(&tlb);
 }
+#endif
 
 void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 			  unsigned long end, struct page *ref_page,
 			  zap_flags_t zap_flags)
 {
+	struct mmu_notifier_range range;
 	struct mmu_gather tlb;
 
+	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, vma->vm_mm,
+				start, end);
+	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
 	tlb_gather_mmu(&tlb, vma->vm_mm);
+
 	__unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
+
+	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
 }
--- a/mm/memory.c~hugetlb-dont-delete-vma_lock-in-hugetlb-madv_dontneed-processing
+++ a/mm/memory.c
@@ -1720,7 +1720,7 @@ void unmap_vmas(struct mmu_gather *tlb,
 {
 	struct mmu_notifier_range range;
 	struct zap_details details = {
-		.zap_flags = ZAP_FLAG_DROP_MARKER,
+		.zap_flags = ZAP_FLAG_DROP_MARKER | ZAP_FLAG_UNMAP,
 		/* Careful - we need to zap private pages too! */
 		.even_cows = true,
 	};
@@ -1753,15 +1753,21 @@ void zap_page_range(struct vm_area_struc
 	MA_STATE(mas, mt, vma->vm_end, vma->vm_end);
 
 	lru_add_drain();
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
-				start, start + size);
 	tlb_gather_mmu(&tlb, vma->vm_mm);
 	update_hiwater_rss(vma->vm_mm);
-	mmu_notifier_invalidate_range_start(&range);
 	do {
-		unmap_single_vma(&tlb, vma, start, range.end, NULL);
+		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma,
+					vma->vm_mm,
+					max(start, vma->vm_start),
+					min(start + size, vma->vm_end));
+		if (is_vm_hugetlb_page(vma))
+			adjust_range_if_pmd_sharing_possible(vma,
+							&range.start,
+							&range.end);
+		mmu_notifier_invalidate_range_start(&range);
+		unmap_single_vma(&tlb, vma, start, start + size, NULL);
+		mmu_notifier_invalidate_range_end(&range);
 	} while ((vma = mas_find(&mas, end - 1)) != NULL);
-	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
 }
_

Patches currently in -mm which might be from mike.kravetz@oracle.com are

hugetlb-simplify-hugetlb-handling-in-follow_page_mask.patch
hugetlb-simplify-hugetlb-handling-in-follow_page_mask-v4.patch
hugetlb-simplify-hugetlb-handling-in-follow_page_mask-v5.patch