From: Barry Song <21cnbao@gmail.com>
To: Jason Wang <jasowang@redhat.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
42.hyeyoo@gmail.com, cl@linux.com, hailong.liu@oppo.com,
hch@infradead.org, iamjoonsoo.kim@lge.com, lstoakes@gmail.com,
mhocko@suse.com, penberg@kernel.org, rientjes@google.com,
roman.gushchin@linux.dev, torvalds@linux-foundation.org,
urezki@gmail.com, v-songbaohua@oppo.com, vbabka@suse.cz,
virtualization@lists.linux.dev,
"Michael S. Tsirkin" <mst@redhat.com>,
"Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
"Eugenio Pérez" <eperezma@redhat.com>,
"Maxime Coquelin" <maxime.coquelin@redhat.com>
Subject: Re: [PATCH RFT v2 1/4] vpda: try to fix the potential crash due to misusing __GFP_NOFAIL
Date: Wed, 31 Jul 2024 12:11:41 +0800
Message-ID: <CAGsJ_4z+-XdAEt+XrxGTnoB4PnimKXhg0JciiwPST-OQYit+-g@mail.gmail.com>
In-Reply-To: <CACGkMEuzYXp51h2tPk29HKhvSfgsC5WSYtGt==SVMDU-0YSPEg@mail.gmail.com>
On Wed, Jul 31, 2024 at 11:58 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Wed, Jul 31, 2024 at 11:15 AM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Wed, Jul 31, 2024 at 11:10 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Wed, Jul 31, 2024 at 8:03 AM Barry Song <21cnbao@gmail.com> wrote:
> > > >
> > > > From: Barry Song <v-songbaohua@oppo.com>
> > > >
> > > > mm doesn't support non-blockable __GFP_NOFAIL allocations, because
> > > > __GFP_NOFAIL without direct reclaim may just result in a busy loop
> > > > within non-sleepable contexts.
> > > >
> > > > static inline struct page *
> > > > __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > > >                         struct alloc_context *ac)
> > > > {
> > > > ...
> > > >         /*
> > > >          * Make sure that __GFP_NOFAIL request doesn't leak out and make sure
> > > >          * we always retry
> > > >          */
> > > >         if (gfp_mask & __GFP_NOFAIL) {
> > > >                 /*
> > > >                  * All existing users of the __GFP_NOFAIL are blockable, so warn
> > > >                  * of any new users that actually require GFP_NOWAIT
> > > >                  */
> > > >                 if (WARN_ON_ONCE_GFP(!can_direct_reclaim, gfp_mask))
> > > >                         goto fail;
> > > > ...
> > > >         }
> > > > ...
> > > > fail:
> > > >         warn_alloc(gfp_mask, ac->nodemask,
> > > >                    "page allocation failure: order:%u", order);
> > > > got_pg:
> > > >         return page;
> > > > }
> > > >
> > > > Let's move the memory allocation out of the atomic context and use
> > > > the normal sleepable context to get pages.
> > > >
> > > > [RFT]: This has only been compile-tested; I'd prefer it if the VDPA
> > > > maintainers handled it.
> > > >
> > > > Cc: "Michael S. Tsirkin" <mst@redhat.com>
> > > > Cc: Jason Wang <jasowang@redhat.com>
> > > > Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > Cc: "Eugenio Pérez" <eperezma@redhat.com>
> > > > Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> > > > Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> > > > ---
> > > > drivers/vdpa/vdpa_user/iova_domain.c | 31 +++++++++++++++++++++++-----
> > > > drivers/vdpa/vdpa_user/iova_domain.h | 5 ++++-
> > > > drivers/vdpa/vdpa_user/vduse_dev.c | 4 +++-
> > > > 3 files changed, 33 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> > > > index 791d38d6284c..9318f059a8b5 100644
> > > > --- a/drivers/vdpa/vdpa_user/iova_domain.c
> > > > +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> > > > @@ -283,7 +283,23 @@ int vduse_domain_add_user_bounce_pages(struct vduse_iova_domain *domain,
> > > > return ret;
> > > > }
> > > >
> > > > -void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain)
> > > > +struct page **vduse_domain_alloc_pages_to_remove_bounce(struct vduse_iova_domain *domain)
> > > > +{
> > > > + struct page **pages;
> > > > + unsigned long count, i;
> > > > +
> > > > + if (!domain->user_bounce_pages)
> > > > + return NULL;
> > > > +
> > > > + count = domain->bounce_size >> PAGE_SHIFT;
> > > > + pages = kmalloc_array(count, sizeof(*pages), GFP_KERNEL | __GFP_NOFAIL);
> > > > + for (i = 0; i < count; i++)
> > > > + pages[i] = alloc_page(GFP_KERNEL | __GFP_NOFAIL);
> > > > +
> > > > + return pages;
> > > > +}
> > > > +
> > > > +void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain, struct page **pages)
> > > > {
> > > > struct vduse_bounce_map *map;
> > > > unsigned long i, count;
> > > > @@ -294,15 +310,16 @@ void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain)
> > > >
> > > > count = domain->bounce_size >> PAGE_SHIFT;
> > > > for (i = 0; i < count; i++) {
> > > > - struct page *page = NULL;
> > > > + struct page *page = pages[i];
> > > >
> > > > map = &domain->bounce_maps[i];
> > > > - if (WARN_ON(!map->bounce_page))
> > > > + if (WARN_ON(!map->bounce_page)) {
> > > > + put_page(page);
> > > > continue;
> > > > + }
> > > >
> > > > /* Copy user page to kernel page if it's in use */
> > > > if (map->orig_phys != INVALID_PHYS_ADDR) {
> > > > - page = alloc_page(GFP_ATOMIC | __GFP_NOFAIL);
> > > > memcpy_from_page(page_address(page),
> > > > map->bounce_page, 0, PAGE_SIZE);
> > > > }
> > > > @@ -310,6 +327,7 @@ void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain)
> > > > map->bounce_page = page;
> > > > }
> > > > domain->user_bounce_pages = false;
> > > > + kfree(pages);
> > > > out:
> > > > write_unlock(&domain->bounce_lock);
> > > > }
> > > > @@ -543,10 +561,13 @@ static int vduse_domain_mmap(struct file *file, struct vm_area_struct *vma)
> > > > static int vduse_domain_release(struct inode *inode, struct file *file)
> > > > {
> > > > struct vduse_iova_domain *domain = file->private_data;
> > > > + struct page **pages;
> > > > +
> > > > + pages = vduse_domain_alloc_pages_to_remove_bounce(domain);
> > > >
> > > > spin_lock(&domain->iotlb_lock);
> > > > vduse_iotlb_del_range(domain, 0, ULLONG_MAX);
> > > > - vduse_domain_remove_user_bounce_pages(domain);
> > > > + vduse_domain_remove_user_bounce_pages(domain, pages);
> > > > vduse_domain_free_kernel_bounce_pages(domain);
> > > > spin_unlock(&domain->iotlb_lock);
> > > > put_iova_domain(&domain->stream_iovad);
> > > > diff --git a/drivers/vdpa/vdpa_user/iova_domain.h b/drivers/vdpa/vdpa_user/iova_domain.h
> > > > index f92f22a7267d..17efa5555b3f 100644
> > > > --- a/drivers/vdpa/vdpa_user/iova_domain.h
> > > > +++ b/drivers/vdpa/vdpa_user/iova_domain.h
> > > > @@ -74,7 +74,10 @@ void vduse_domain_reset_bounce_map(struct vduse_iova_domain *domain);
> > > > int vduse_domain_add_user_bounce_pages(struct vduse_iova_domain *domain,
> > > > struct page **pages, int count);
> > > >
> > > > -void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain);
> > > > +void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain,
> > > > + struct page **pages);
> > > > +
> > > > +struct page **vduse_domain_alloc_pages_to_remove_bounce(struct vduse_iova_domain *domain);
> > > >
> > > > void vduse_domain_destroy(struct vduse_iova_domain *domain);
> > > >
> > > > diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > index 7ae99691efdf..5d8d5810df57 100644
> > > > --- a/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > +++ b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > @@ -1030,6 +1030,7 @@ static int vduse_dev_queue_irq_work(struct vduse_dev *dev,
> > > > static int vduse_dev_dereg_umem(struct vduse_dev *dev,
> > > > u64 iova, u64 size)
> > > > {
> > > > + struct page **pages;
> > > > int ret;
> > > >
> > > > mutex_lock(&dev->mem_lock);
> > > > @@ -1044,7 +1045,8 @@ static int vduse_dev_dereg_umem(struct vduse_dev *dev,
> > > > if (dev->umem->iova != iova || size != dev->domain->bounce_size)
> > > > goto unlock;
> > > >
> > > > - vduse_domain_remove_user_bounce_pages(dev->domain);
> > > > + pages = vduse_domain_alloc_pages_to_remove_bounce(dev->domain);
> > > > + vduse_domain_remove_user_bounce_pages(dev->domain, pages);
> > > > unpin_user_pages_dirty_lock(dev->umem->pages,
> > > > dev->umem->npages, true);
> > > > atomic64_sub(dev->umem->npages, &dev->umem->mm->pinned_vm);
> > >
> > > We miss a kfree(pages); here?
> > No, I've moved it into vduse_domain_remove_user_bounce_pages().
>
> Ok, but it seems tricky, e.g. the array is allocated by the caller but
> freed in the callee. And I think I missed some important issues in the
> previous review: the check of user_bounce_pages must be done under the
> bounce_lock, otherwise it might race with umem_reg.
>
> So in the case of release(), we know the device is gone, so there's no
> need to allocate pages that will be released soon. So we can pass NULL
> as a hint and just assign bounce_page to NULL in
> vduse_domain_remove_user_bounce_pages().
>
> And in the case of vduse_dev_dereg_umem(), we need to allocate the
> pages without checking user_bounce_pages. So in
> vduse_domain_remove_user_bounce_pages(), we can free the allocated
> pages as well as the pages in the following check:
>
> if (!domain->user_bounce_pages)
> goto out;
>
> What do you think?
I am not a vdpa guy, but changing the current logic belongs in another patch.
From the mm perspective, I can only address the __GFP_NOFAIL issue.
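Just to spell out the mm-side constraint I'm referring to, here is a minimal,
hypothetical illustration (the two helpers below are made up for this email
and are not vdpa code):

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Illustration only: mm honours __GFP_NOFAIL only when the request is
 * allowed to enter direct reclaim, i.e. when the caller can sleep.
 */
static struct page *nofail_alloc_sleepable(void)
{
        /* Fine: GFP_KERNEL may sleep, so the "never fail" promise can be kept. */
        return alloc_page(GFP_KERNEL | __GFP_NOFAIL);
}

static struct page *nofail_alloc_atomic(void)
{
        /*
         * Broken: GFP_ATOMIC cannot direct-reclaim, so this trips the
         * WARN_ON_ONCE_GFP() quoted above and can still return NULL,
         * i.e. __GFP_NOFAIL buys nothing here.
         */
        return alloc_page(GFP_ATOMIC | __GFP_NOFAIL);
}

That is why the patch allocates the pages up front in sleepable context
instead of under the bounce_lock.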
I'd actually prefer that you guys handle it directly :-) I'd rather report a BUG
instead. TBH, I know nothing about vdpa.
>
> Thanks
>
> >
> > >
> > > Thanks
> > >
> > > > --
> > > > 2.34.1
> > > >
> > >
Thanks
Barry