From: Vlastimil Babka <vbabka@suse.cz>
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>, Jason Gunthorpe <jgg@ziepe.ca>,
John Hubbard <jhubbard@nvidia.com>, Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH v1] mm/gup: remove (VM_)BUG_ONs
Date: Wed, 4 Jun 2025 16:22:26 +0200
Message-ID: <202d338d-30f6-4f3b-bddc-b0818a940732@suse.cz>
In-Reply-To: <20250604140544.688711-1-david@redhat.com>

On 6/4/25 16:05, David Hildenbrand wrote:
> Especially once we hit one of the assertions in
> sanity_check_pinned_pages(), observing follow-up assertions failing
> in other code can give good clues about what went wrong; a BUG_ON
> would stop us at the first one, so use VM_WARN_ON_ONCE instead.
>
> While at it, let's just convert all VM_BUG_ON to VM_WARN_ON_ONCE as
> well. Add one comment for the pfn_valid() check.
>
> We have to introduce VM_WARN_ON_ONCE_VMA() to make that fly.
>
> Drop the BUG_ON after mmap_read_lock_killable(); if that ever returns
> something > 0, we're in bigger trouble anyway. Convert the other
> BUG_ONs into VM_WARN_ON_ONCE as well: they are in a similar "should
> never happen" domain, but are more reasonable to check for during
> early testing.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Suren Baghdasaryan <surenb@google.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Jason Gunthorpe <jgg@ziepe.ca>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Peter Xu <peterx@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Makes sense, BUG_ONs bad.
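
To spell out why the warnings are nicer for debugging (purely a
userspace toy to illustrate the point, not the actual
include/linux/mmdebug.h definitions; all the TOY_* names below are made
up): an abort-on-first-assert hides everything after the first hit,
while warn-once-and-continue lets the follow-up checks fire and leaves
the system around for inspection.

/*
 * Userspace toy, not the real mmdebug.h: contrast abort-on-first-assert
 * with warn-once-and-continue.
 */
#include <stdio.h>
#include <stdlib.h>

#define TOY_BUG_ON(cond) do {                                           \
        if (cond) {                                                     \
                fprintf(stderr, "BUG at %s:%d\n", __FILE__, __LINE__);  \
                abort();        /* first hit ends the run */            \
        }                                                               \
} while (0)

#define TOY_WARN_ON_ONCE(cond) ({                                       \
        static int __warned;                                            \
        int __ret = !!(cond);                                           \
                                                                        \
        if (__ret && !__warned) {                                       \
                __warned = 1;                                           \
                fprintf(stderr, "WARN at %s:%d\n", __FILE__, __LINE__); \
        }                                                               \
        __ret;                                                          \
})

int main(void)
{
        int pin_state_broken = 1;

        /* With TOY_BUG_ON() the second check would never be reached. */
        TOY_WARN_ON_ONCE(pin_state_broken);             /* first clue */
        TOY_WARN_ON_ONCE(pin_state_broken != 0);        /* follow-up clue */
        printf("still running, state is still there to look at\n");
        return 0;
}

In the kernel the WARN path additionally taints and prints a full
backtrace per call site, which is exactly the "good clues" the
changelog is after.
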
Acked-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>
> Wanted to do this for a long time, but my todo list keeps growing ...
>
> Based on mm/mm-unstable
>
> ---
> include/linux/mmdebug.h | 12 ++++++++++++
> mm/gup.c | 41 +++++++++++++++++++----------------------
> 2 files changed, 31 insertions(+), 22 deletions(-)
>
> diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
> index a0a3894900ed4..14a45979cccc9 100644
> --- a/include/linux/mmdebug.h
> +++ b/include/linux/mmdebug.h
> @@ -89,6 +89,17 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
> } \
> unlikely(__ret_warn_once); \
> })
> +#define VM_WARN_ON_ONCE_VMA(cond, vma) ({ \
> + static bool __section(".data..once") __warned; \
> + int __ret_warn_once = !!(cond); \
> + \
> + if (unlikely(__ret_warn_once && !__warned)) { \
> + dump_vma(vma); \
> + __warned = true; \
> + WARN_ON(1); \
> + } \
> + unlikely(__ret_warn_once); \
> +})
> #define VM_WARN_ON_VMG(cond, vmg) ({ \
> int __ret_warn = !!(cond); \
> \
> @@ -115,6 +126,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
> #define VM_WARN_ON_FOLIO(cond, folio) BUILD_BUG_ON_INVALID(cond)
> #define VM_WARN_ON_ONCE_FOLIO(cond, folio) BUILD_BUG_ON_INVALID(cond)
> #define VM_WARN_ON_ONCE_MM(cond, mm) BUILD_BUG_ON_INVALID(cond)
> +#define VM_WARN_ON_ONCE_VMA(cond, vma) BUILD_BUG_ON_INVALID(cond)
> #define VM_WARN_ON_VMG(cond, vmg) BUILD_BUG_ON_INVALID(cond)
> #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
> #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
> diff --git a/mm/gup.c b/mm/gup.c
> index e065a49842a87..3c3931fcdd820 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -64,11 +64,11 @@ static inline void sanity_check_pinned_pages(struct page **pages,
> !folio_test_anon(folio))
> continue;
> if (!folio_test_large(folio) || folio_test_hugetlb(folio))
> - VM_BUG_ON_PAGE(!PageAnonExclusive(&folio->page), page);
> + VM_WARN_ON_ONCE_PAGE(!PageAnonExclusive(&folio->page), page);
> else
> /* Either a PTE-mapped or a PMD-mapped THP. */
> - VM_BUG_ON_PAGE(!PageAnonExclusive(&folio->page) &&
> - !PageAnonExclusive(page), page);
> + VM_WARN_ON_ONCE_PAGE(!PageAnonExclusive(&folio->page) &&
> + !PageAnonExclusive(page), page);
> }
> }
>
> @@ -760,8 +760,8 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma,
> if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page))
> return ERR_PTR(-EMLINK);
>
> - VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
> - !PageAnonExclusive(page), page);
> + VM_WARN_ON_ONCE_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
> + !PageAnonExclusive(page), page);
>
> ret = try_grab_folio(page_folio(page), 1, flags);
> if (ret)
> @@ -899,8 +899,8 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
> goto out;
> }
>
> - VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
> - !PageAnonExclusive(page), page);
> + VM_WARN_ON_ONCE_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
> + !PageAnonExclusive(page), page);
>
> /* try_grab_folio() does nothing unless FOLL_GET or FOLL_PIN is set. */
> ret = try_grab_folio(folio, 1, flags);
> @@ -1180,7 +1180,7 @@ static int faultin_page(struct vm_area_struct *vma,
> if (unshare) {
> fault_flags |= FAULT_FLAG_UNSHARE;
> /* FAULT_FLAG_WRITE and FAULT_FLAG_UNSHARE are incompatible */
> - VM_BUG_ON(fault_flags & FAULT_FLAG_WRITE);
> + VM_WARN_ON_ONCE(fault_flags & FAULT_FLAG_WRITE);
> }
>
> ret = handle_mm_fault(vma, address, fault_flags, NULL);
> @@ -1760,10 +1760,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
> }
>
> /* VM_FAULT_RETRY or VM_FAULT_COMPLETED cannot return errors */
> - if (!*locked) {
> - BUG_ON(ret < 0);
> - BUG_ON(ret >= nr_pages);
> - }
> + VM_WARN_ON_ONCE(!*locked && (ret < 0 || ret >= nr_pages));
>
> if (ret > 0) {
> nr_pages -= ret;
> @@ -1808,7 +1805,6 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
>
> ret = mmap_read_lock_killable(mm);
> if (ret) {
> - BUG_ON(ret > 0);
> if (!pages_done)
> pages_done = ret;
> break;
> @@ -1819,11 +1815,11 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
> pages, locked);
> if (!*locked) {
> /* Continue to retry until we succeeded */
> - BUG_ON(ret != 0);
> + VM_WARN_ON_ONCE(ret != 0);
> goto retry;
> }
> if (ret != 1) {
> - BUG_ON(ret > 1);
> + VM_WARN_ON_ONCE(ret > 1);
> if (!pages_done)
> pages_done = ret;
> break;
> @@ -1885,10 +1881,10 @@ long populate_vma_page_range(struct vm_area_struct *vma,
> int gup_flags;
> long ret;
>
> - VM_BUG_ON(!PAGE_ALIGNED(start));
> - VM_BUG_ON(!PAGE_ALIGNED(end));
> - VM_BUG_ON_VMA(start < vma->vm_start, vma);
> - VM_BUG_ON_VMA(end > vma->vm_end, vma);
> + VM_WARN_ON_ONCE(!PAGE_ALIGNED(start));
> + VM_WARN_ON_ONCE(!PAGE_ALIGNED(end));
> + VM_WARN_ON_ONCE_VMA(start < vma->vm_start, vma);
> + VM_WARN_ON_ONCE_VMA(end > vma->vm_end, vma);
> mmap_assert_locked(mm);
>
> /*
> @@ -1957,8 +1953,8 @@ long faultin_page_range(struct mm_struct *mm, unsigned long start,
> int gup_flags;
> long ret;
>
> - VM_BUG_ON(!PAGE_ALIGNED(start));
> - VM_BUG_ON(!PAGE_ALIGNED(end));
> + VM_WARN_ON_ONCE(!PAGE_ALIGNED(start));
> + VM_WARN_ON_ONCE(!PAGE_ALIGNED(end));
> mmap_assert_locked(mm);
>
> /*
> @@ -2908,7 +2904,8 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
> } else if (pte_special(pte))
> goto pte_unmap;
>
> - VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
> + /* If it's not marked as special it must have a valid memmap. */
> + VM_WARN_ON_ONCE(!pfn_valid(pte_pfn(pte)));
> page = pte_page(pte);
>
> folio = try_grab_folio_fast(page, 1, flags);
>
> base-commit: 2d0c297637e7d59771c1533847c666cdddc19884