From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Mike Kravetz, Nadav Amit, Matthew Wilcox, Mike Rapoport,
    David Hildenbrand, Hugh Dickins, Jerome Glisse,
    "Kirill A . Shutemov", Andrea Arcangeli, Andrew Morton,
    Axel Rasmussen, Alistair Popple, peterx@redhat.com
Subject: [PATCH v8 17/23] mm/hugetlb: Only drop uffd-wp special pte if required
Date: Mon, 4 Apr 2022 21:49:15 -0400
Message-Id: <20220405014915.14873-1-peterx@redhat.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220405014646.13522-1-peterx@redhat.com>
References: <20220405014646.13522-1-peterx@redhat.com>

As with shmem uffd-wp special ptes, only drop the uffd-wp special swap pte
if unmapping an entire vma or synchronized such that faults cannot race
with the unmap operation.  This requires passing zap_flags all the way to
the lowest level hugetlb unmap routine: __unmap_hugepage_range.

In general, unmap calls originating in hugetlbfs code will pass the
ZAP_FLAG_DROP_MARKER flag as synchronization is in place to prevent
faults.  The exception is hole punch, which will first unmap without any
synchronization.  Later, when hole punch actually removes the page from
the file, it will check to see if there was a subsequent fault and, if so,
take the hugetlb fault mutex while unmapping again.  This second unmap
will pass in ZAP_FLAG_DROP_MARKER.

The justification for "whether to apply the ZAP_FLAG_DROP_MARKER flag when
unmapping a hugetlb range" is (IMHO): we should never reach a state where
a page fault could erroneously fault in a write-protected page-cache page
as writable, even for an extremely short period.  That could happen if
e.g. we passed ZAP_FLAG_DROP_MARKER when hugetlbfs_punch_hole() calls
hugetlb_vmdelete_list(): if a page faults in after that call and before
remove_inode_hugepages() is executed, the page cache can be mapped
writable again within that small racy window, which can cause unexpected
data to be overwritten.
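For illustration only (not part of this patch): a minimal userspace sketch
of the sequence the last paragraph describes, assuming a hugetlbfs mount at
/dev/hugepages, 2MB huge pages, the hugetlb uffd-wp support added by this
series, and with all error handling omitted.  Treat it as a sketch of the
reproducer shape, not a definitive test case.

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <linux/falloc.h>
  #include <linux/userfaultfd.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
  	unsigned long len = 2UL << 20;	/* assume one 2MB hugetlb page */
  	int fd = open("/dev/hugepages/test", O_CREAT | O_RDWR, 0600);

  	ftruncate(fd, len);
  	char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
  			  MAP_SHARED, fd, 0);
  	addr[0] = 1;			/* fault the page into the page cache */

  	/* Register the range for userfaultfd write-protect tracking */
  	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
  	struct uffdio_api api = {
  		.api = UFFD_API,
  		.features = UFFD_FEATURE_PAGEFAULT_FLAG_WP,
  	};
  	ioctl(uffd, UFFDIO_API, &api);

  	struct uffdio_register reg = {
  		.range = { .start = (unsigned long)addr, .len = len },
  		.mode = UFFDIO_REGISTER_MODE_WP,
  	};
  	ioctl(uffd, UFFDIO_REGISTER, &reg);

  	struct uffdio_writeprotect wp = {
  		.range = { .start = (unsigned long)addr, .len = len },
  		.mode = UFFDIO_WRITEPROTECT_MODE_WP,
  	};
  	ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);	/* wr-protect the page */

  	/*
  	 * hugetlbfs_punch_hole() unmaps once without the fault mutex
  	 * before remove_inode_hugepages() runs.  A write from another
  	 * thread racing into that window must still observe the uffd-wp
  	 * marker, which is why the first unmap must not pass
  	 * ZAP_FLAG_DROP_MARKER.
  	 */
  	fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, len);
  	return 0;
  }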
Reviewed-by: Mike Kravetz
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 fs/hugetlbfs/inode.c    | 15 +++++++++------
 include/linux/hugetlb.h |  8 +++++---
 mm/hugetlb.c            | 33 +++++++++++++++++++++++++--------
 mm/memory.c             |  5 ++++-
 4 files changed, 43 insertions(+), 18 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 99c7477cee5c..8b5b9df2be7d 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -404,7 +404,8 @@ static void remove_huge_page(struct page *page)
 }
 
 static void
-hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end)
+hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
+		      unsigned long zap_flags)
 {
 	struct vm_area_struct *vma;
 
@@ -438,7 +439,7 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end)
 		}
 
 		unmap_hugepage_range(vma, vma->vm_start + v_offset, v_end,
-				     NULL);
+				     NULL, zap_flags);
 	}
 }
 
@@ -516,7 +517,8 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
 			hugetlb_vmdelete_list(&mapping->i_mmap,
 				index * pages_per_huge_page(h),
-				(index + 1) * pages_per_huge_page(h));
+				(index + 1) * pages_per_huge_page(h),
+				ZAP_FLAG_DROP_MARKER);
 			i_mmap_unlock_write(mapping);
 		}
 
@@ -582,7 +584,8 @@ static void hugetlb_vmtruncate(struct inode *inode, loff_t offset)
 	i_mmap_lock_write(mapping);
 	i_size_write(inode, offset);
 	if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))
-		hugetlb_vmdelete_list(&mapping->i_mmap, pgoff, 0);
+		hugetlb_vmdelete_list(&mapping->i_mmap, pgoff, 0,
+				      ZAP_FLAG_DROP_MARKER);
 	i_mmap_unlock_write(mapping);
 	remove_inode_hugepages(inode, offset, LLONG_MAX);
 }
@@ -615,8 +618,8 @@ static long hugetlbfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 		i_mmap_lock_write(mapping);
 		if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))
 			hugetlb_vmdelete_list(&mapping->i_mmap,
-					hole_start >> PAGE_SHIFT,
-					hole_end >> PAGE_SHIFT);
+					hole_start >> PAGE_SHIFT,
+					hole_end >> PAGE_SHIFT, 0);
 		i_mmap_unlock_write(mapping);
 		remove_inode_hugepages(inode, hole_start, hole_end);
 		inode_unlock(inode);
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 38c5ac28b787..ab48b3bbb0e6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -143,11 +143,12 @@ long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
 			 unsigned long *, unsigned long *, long, unsigned int,
 			 int *);
 void unmap_hugepage_range(struct vm_area_struct *,
-			  unsigned long, unsigned long, struct page *);
+			  unsigned long, unsigned long, struct page *,
+			  unsigned long);
 void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma,
 			  unsigned long start, unsigned long end,
-			  struct page *ref_page);
+			  struct page *ref_page, unsigned long zap_flags);
 void hugetlb_report_meminfo(struct seq_file *);
 int hugetlb_report_node_meminfo(char *buf, int len, int nid);
 void hugetlb_show_meminfo(void);
@@ -400,7 +401,8 @@ static inline unsigned long hugetlb_change_protection(
 
 static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 			struct vm_area_struct *vma, unsigned long start,
-			unsigned long end, struct page *ref_page)
+			unsigned long end, struct page *ref_page,
+			unsigned long zap_flags)
 {
 	BUG();
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 578c48ef931a..e4af8b357b90 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4947,7 +4947,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 
 static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			    unsigned long start, unsigned long end,
-			    struct page *ref_page)
+			    struct page *ref_page, unsigned long zap_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address;
@@ -5003,7 +5003,18 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 		 * unmapped and its refcount is dropped, so just clear pte here.
 		 */
 		if (unlikely(!pte_present(pte))) {
-			huge_pte_clear(mm, address, ptep, sz);
+			/*
+			 * If the pte was wr-protected by uffd-wp in any of the
+			 * swap forms, meanwhile the caller does not want to
+			 * drop the uffd-wp bit in this zap, then replace the
+			 * pte with a marker.
+			 */
+			if (pte_swp_uffd_wp_any(pte) &&
+			    !(zap_flags & ZAP_FLAG_DROP_MARKER))
+				set_huge_pte_at(mm, address, ptep,
+						make_pte_marker(PTE_MARKER_UFFD_WP));
+			else
+				huge_pte_clear(mm, address, ptep, sz);
 			spin_unlock(ptl);
 			continue;
 		}
@@ -5031,7 +5042,11 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 		tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
 		if (huge_pte_dirty(pte))
 			set_page_dirty(page);
-
+		/* Leave a uffd-wp pte marker if needed */
+		if (huge_pte_uffd_wp(pte) &&
+		    !(zap_flags & ZAP_FLAG_DROP_MARKER))
+			set_huge_pte_at(mm, address, ptep,
+					make_pte_marker(PTE_MARKER_UFFD_WP));
 		hugetlb_count_sub(pages_per_huge_page(h), mm);
 		page_remove_rmap(page, vma, true);
 
@@ -5065,9 +5080,10 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 
 void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma, unsigned long start,
-			  unsigned long end, struct page *ref_page)
+			  unsigned long end, struct page *ref_page,
+			  unsigned long zap_flags)
 {
-	__unmap_hugepage_range(tlb, vma, start, end, ref_page);
+	__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
 
 	/*
 	 * Clear this flag so that x86's huge_pmd_share page_table_shareable
@@ -5083,12 +5099,13 @@ void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 }
 
 void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
-			  unsigned long end, struct page *ref_page)
+			  unsigned long end, struct page *ref_page,
+			  unsigned long zap_flags)
 {
 	struct mmu_gather tlb;
 
 	tlb_gather_mmu(&tlb, vma->vm_mm);
-	__unmap_hugepage_range(&tlb, vma, start, end, ref_page);
+	__unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
 	tlb_finish_mmu(&tlb);
 }
 
@@ -5143,7 +5160,7 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
 		 */
 		if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
 			unmap_hugepage_range(iter_vma, address,
-					     address + huge_page_size(h), page);
+					     address + huge_page_size(h), page, 0);
 	}
 	i_mmap_unlock_write(mapping);
 }
diff --git a/mm/memory.c b/mm/memory.c
index 8ba1bb196095..9808edfe18d4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1675,8 +1675,11 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 		 * safe to do nothing in this case.
 		 */
 		if (vma->vm_file) {
+			unsigned long zap_flags = details ?
+				details->zap_flags : 0;
 			i_mmap_lock_write(vma->vm_file->f_mapping);
-			__unmap_hugepage_range_final(tlb, vma, start, end, NULL);
+			__unmap_hugepage_range_final(tlb, vma, start, end,
+						     NULL, zap_flags);
 			i_mmap_unlock_write(vma->vm_file->f_mapping);
 		}
 	} else
-- 
2.32.0