From mboxrd@z Thu Jan  1 00:00:00 1970
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 4 Jun 2025 07:26:40 -0700
Subject: Re: [PATCH v1] mm/gup: remove (VM_)BUG_ONs
To: Vlastimil Babka
Cc: David Hildenbrand, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Andrew Morton, Lorenzo Stoakes, "Liam R. Howlett", Mike Rapoport,
 Michal Hocko, Jason Gunthorpe, John Hubbard, Peter Xu
In-Reply-To: <202d338d-30f6-4f3b-bddc-b0818a940732@suse.cz>
References: <20250604140544.688711-1-david@redhat.com>
 <202d338d-30f6-4f3b-bddc-b0818a940732@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Wed, Jun 4, 2025 at 7:22 AM Vlastimil Babka wrote:
>
> On 6/4/25 16:05, David Hildenbrand wrote:
> > Especially once we hit one of the assertions in
> > sanity_check_pinned_pages(), observing follow-up assertions failing
> > in other code can give good clues about what went wrong, so use
> > VM_WARN_ON_ONCE instead.
> >
> > While at it, let's just convert all VM_BUG_ON to VM_WARN_ON_ONCE as
> > well. Add one comment for the pfn_valid() check.
> >
> > We have to introduce VM_WARN_ON_ONCE_VMA() to make that fly.
> >
> > Drop the BUG_ON after mmap_read_lock_killable(), if that ever returns
> > something > 0 we're in bigger trouble. Convert the other BUG_ON's into
> > VM_WARN_ON_ONCE as well, they are in a similar domain "should never
> > happen", but more reasonable to check for during early testing.
> >
> > Cc: Andrew Morton
> > Cc: Lorenzo Stoakes
> > Cc: "Liam R. Howlett"
> > Cc: Vlastimil Babka
> > Cc: Mike Rapoport
> > Cc: Suren Baghdasaryan
> > Cc: Michal Hocko
> > Cc: Jason Gunthorpe
> > Cc: John Hubbard
> > Cc: Peter Xu
> > Signed-off-by: David Hildenbrand
>
> Makes sense, BUG_ONs bad.
>
> Acked-by: Vlastimil Babka

Reviewed-by: Suren Baghdasaryan

> > ---
> >
> > Wanted to do this for a long time, but my todo list keeps growing ...
> >
> > Based on mm/mm-unstable
> >
> > ---
> >  include/linux/mmdebug.h | 12 ++++++++++++
> >  mm/gup.c                | 41 +++++++++++++++++++----------------------
> >  2 files changed, 31 insertions(+), 22 deletions(-)
> >
> > diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
> > index a0a3894900ed4..14a45979cccc9 100644
> > --- a/include/linux/mmdebug.h
> > +++ b/include/linux/mmdebug.h
> > @@ -89,6 +89,17 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
> >  	}							\
> >  	unlikely(__ret_warn_once);				\
> >  })
> > +#define VM_WARN_ON_ONCE_VMA(cond, vma)	({			\
> > +	static bool __section(".data..once") __warned;		\
> > +	int __ret_warn_once = !!(cond);				\
> > +								\
> > +	if (unlikely(__ret_warn_once && !__warned)) {		\
> > +		dump_vma(vma);					\
> > +		__warned = true;				\
> > +		WARN_ON(1);					\
> > +	}							\
> > +	unlikely(__ret_warn_once);				\
> > +})
> >  #define VM_WARN_ON_VMG(cond, vmg)	({			\
> >  	int __ret_warn = !!(cond);				\
> >  								\
> > @@ -115,6 +126,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
> >  #define VM_WARN_ON_FOLIO(cond, folio)		BUILD_BUG_ON_INVALID(cond)
> >  #define VM_WARN_ON_ONCE_FOLIO(cond, folio)	BUILD_BUG_ON_INVALID(cond)
> >  #define VM_WARN_ON_ONCE_MM(cond, mm)		BUILD_BUG_ON_INVALID(cond)
> > +#define VM_WARN_ON_ONCE_VMA(cond, vma)	BUILD_BUG_ON_INVALID(cond)
> >  #define VM_WARN_ON_VMG(cond, vmg)		BUILD_BUG_ON_INVALID(cond)
> >  #define VM_WARN_ONCE(cond, format...)	BUILD_BUG_ON_INVALID(cond)
> >  #define VM_WARN(cond, format...)	BUILD_BUG_ON_INVALID(cond)
> > diff --git a/mm/gup.c b/mm/gup.c
> > index e065a49842a87..3c3931fcdd820 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -64,11 +64,11 @@ static inline void sanity_check_pinned_pages(struct page **pages,
> >  		    !folio_test_anon(folio))
> >  			continue;
> >  		if (!folio_test_large(folio) || folio_test_hugetlb(folio))
> > -			VM_BUG_ON_PAGE(!PageAnonExclusive(&folio->page), page);
> > +			VM_WARN_ON_ONCE_PAGE(!PageAnonExclusive(&folio->page), page);
> >  		else
> >  			/* Either a PTE-mapped or a PMD-mapped THP. */
> > -			VM_BUG_ON_PAGE(!PageAnonExclusive(&folio->page) &&
> > -				       !PageAnonExclusive(page), page);
> > +			VM_WARN_ON_ONCE_PAGE(!PageAnonExclusive(&folio->page) &&
> > +					     !PageAnonExclusive(page), page);
> >  	}
> >  }
> >
> > @@ -760,8 +760,8 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma,
> >  	if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page))
> >  		return ERR_PTR(-EMLINK);
> >
> > -	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
> > -		       !PageAnonExclusive(page), page);
> > +	VM_WARN_ON_ONCE_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
> > +			     !PageAnonExclusive(page), page);
> >
> >  	ret = try_grab_folio(page_folio(page), 1, flags);
> >  	if (ret)
> > @@ -899,8 +899,8 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
> >  		goto out;
> >  	}
> >
> > -	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
> > -		       !PageAnonExclusive(page), page);
> > +	VM_WARN_ON_ONCE_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
> > +			     !PageAnonExclusive(page), page);
> >
> >  	/* try_grab_folio() does nothing unless FOLL_GET or FOLL_PIN is set. */
> >  	ret = try_grab_folio(folio, 1, flags);
> > @@ -1180,7 +1180,7 @@ static int faultin_page(struct vm_area_struct *vma,
> >  	if (unshare) {
> >  		fault_flags |= FAULT_FLAG_UNSHARE;
> >  		/* FAULT_FLAG_WRITE and FAULT_FLAG_UNSHARE are incompatible */
> > -		VM_BUG_ON(fault_flags & FAULT_FLAG_WRITE);
> > +		VM_WARN_ON_ONCE(fault_flags & FAULT_FLAG_WRITE);
> >  	}
> >
> >  	ret = handle_mm_fault(vma, address, fault_flags, NULL);
> > @@ -1760,10 +1760,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
> >  		}
> >
> >  		/* VM_FAULT_RETRY or VM_FAULT_COMPLETED cannot return errors */
> > -		if (!*locked) {
> > -			BUG_ON(ret < 0);
> > -			BUG_ON(ret >= nr_pages);
> > -		}
> > +		VM_WARN_ON_ONCE(!*locked && (ret < 0 || ret >= nr_pages));
> >
> >  		if (ret > 0) {
> >  			nr_pages -= ret;
> > @@ -1808,7 +1805,6 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
> >
> >  		ret = mmap_read_lock_killable(mm);
> >  		if (ret) {
> > -			BUG_ON(ret > 0);
> >  			if (!pages_done)
> >  				pages_done = ret;
> >  			break;
> > @@ -1819,11 +1815,11 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
> >  						  pages, locked);
> >  		if (!*locked) {
> >  			/* Continue to retry until we succeeded */
> > -			BUG_ON(ret != 0);
> > +			VM_WARN_ON_ONCE(ret != 0);
> >  			goto retry;
> >  		}
> >  		if (ret != 1) {
> > -			BUG_ON(ret > 1);
> > +			VM_WARN_ON_ONCE(ret > 1);
> >  			if (!pages_done)
> >  				pages_done = ret;
> >  			break;
> > @@ -1885,10 +1881,10 @@ long populate_vma_page_range(struct vm_area_struct *vma,
> >  	int gup_flags;
> >  	long ret;
> >
> > -	VM_BUG_ON(!PAGE_ALIGNED(start));
> > -	VM_BUG_ON(!PAGE_ALIGNED(end));
> > -	VM_BUG_ON_VMA(start < vma->vm_start, vma);
> > -	VM_BUG_ON_VMA(end > vma->vm_end, vma);
> > +	VM_WARN_ON_ONCE(!PAGE_ALIGNED(start));
> > +	VM_WARN_ON_ONCE(!PAGE_ALIGNED(end));
> > +	VM_WARN_ON_ONCE_VMA(start < vma->vm_start, vma);
> > +	VM_WARN_ON_ONCE_VMA(end > vma->vm_end, vma);
> >  	mmap_assert_locked(mm);
> >
> >  	/*
> > @@ -1957,8 +1953,8 @@ long faultin_page_range(struct mm_struct *mm, unsigned long start,
> >  	int gup_flags;
> >  	long ret;
> >
> > -	VM_BUG_ON(!PAGE_ALIGNED(start));
> > -	VM_BUG_ON(!PAGE_ALIGNED(end));
> > +	VM_WARN_ON_ONCE(!PAGE_ALIGNED(start));
> > +	VM_WARN_ON_ONCE(!PAGE_ALIGNED(end));
> >  	mmap_assert_locked(mm);
> >
> >  	/*
> > @@ -2908,7 +2904,8 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
> >  		} else if (pte_special(pte))
> >  			goto pte_unmap;
> >
> > -		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
> > +		/* If it's not marked as special it must have a valid memmap. */
> > +		VM_WARN_ON_ONCE(!pfn_valid(pte_pfn(pte)));
> >  		page = pte_page(pte);
> >
> >  		folio = try_grab_folio_fast(page, 1, flags);
> >
> > base-commit: 2d0c297637e7d59771c1533847c666cdddc19884
>