From: Byungchul Park <byungchul@sk.com>
To: "Vlastimil Babka (SUSE)" <vbabka@kernel.org>
Cc: Jakub Kicinski <kuba@kernel.org>,
	davem@davemloft.net, netdev@vger.kernel.org, edumazet@google.com,
	pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org,
	asml.silence@gmail.com, axboe@kernel.dk, almasrymina@google.com,
	sdf@fomichev.me, hawk@kernel.org, akpm@linux-foundation.org,
	rppt@kernel.org, io-uring@vger.kernel.org,
	kernel_team@skhynix.com
Subject: Re: [PATCH net] net: add net_iov_init() and use it to initialize ->page_type
Date: Tue, 28 Apr 2026 17:14:53 +0900	[thread overview]
Message-ID: <20260428081453.GA29789@system.software.com> (raw)
In-Reply-To: <6dcbee42-df9b-4c0a-b153-aad953441fad@kernel.org>

On Tue, Apr 28, 2026 at 09:57:08AM +0200, Vlastimil Babka (SUSE) wrote:
> +Cc Byungchul
> 
> On 4/28/26 04:53, Jakub Kicinski wrote:
> > Commit db359fccf212 ("mm: introduce a new page type for page pool in
> > page type") added a page_type field to struct net_iov at the same
> > offset as struct page::page_type, so that page_pool_set_pp_info() can
> > call __SetPageNetpp() uniformly on both pages and net_iovs.
> >
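To make the layout trick concrete, the overlay looks roughly like
this (abridged; the real structs carry more members, only the shared
->page_type offset matters here):

	struct page    { /* ... */ unsigned int page_type; /* ... */ };
	struct net_iov { /* ... */ unsigned int page_type; /* ... */ };

With the fields at matching offsets, casting a niov to struct page and
letting __SetPageNetpp() write ->page_type is well defined.
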
> > The page-type API requires the field to hold the UINT_MAX "no type"
> > sentinel before a type can be set; for real struct page that invariant
> > is established by the page allocator on free. struct net_iov is not
> > allocated through the page allocator, so the field is left as zero

My bad.  I overlooked that point.  Thanks for the fix.

Acked-by: Byungchul Park <byungchul@sk.com>

	Byungchul

> > (io_uring zcrx, which uses __GFP_ZERO) or as slab garbage (devmem,
> > which uses kvmalloc_objs() without zeroing). When the page pool then
> > calls page_pool_set_pp_info() on a freshly-bound niov,
> > __SetPageNetpp()'s VM_BUG_ON_PAGE(page->page_type != UINT_MAX) fires
> > and the kernel BUGs. Triggered in selftests by io_uring zcrx setup
> > through the fbnic queue restart path:
> >
> >  kernel BUG at ./include/linux/page-flags.h:1062!
> >  RIP: 0010:page_pool_set_pp_info (./include/linux/page-flags.h:1062
> >                                   net/core/page_pool.c:716)
> >  Call Trace:
> >   <TASK>
> >   net_mp_niov_set_page_pool (net/core/page_pool.c:1360)
> >   io_pp_zc_alloc_netmems (io_uring/zcrx.c:1089 io_uring/zcrx.c:1110)
> >   fbnic_fill_bdq (./include/net/page_pool/helpers.h:160
> >                   drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:906)
> >   __fbnic_nv_restart (drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2470
> >                       drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2874)
> >   fbnic_queue_start (drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2903)
> >   netdev_rx_queue_reconfig (net/core/netdev_rx_queue.c:137)
> >   __netif_mp_open_rxq (net/core/netdev_rx_queue.c:234)
> >   io_register_zcrx (io_uring/zcrx.c:818 io_uring/zcrx.c:903)
> >   __io_uring_register (io_uring/register.c:931)
> >   __do_sys_io_uring_register (io_uring/register.c:1029)
> >   do_syscall_64 (arch/x86/entry/syscall_64.c:63
> >                  arch/x86/entry/syscall_64.c:94)
> >   </TASK>
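
The check that fires is, schematically, the standard page-type
invariant (paraphrasing the helpers generated by PAGE_TYPE_OPS in
<linux/page-flags.h>, not the exact code; PGTY_netpp below stands in
for whatever value the Netpp type actually uses):

	/* A page must hold the UINT_MAX "no type" sentinel before a
	 * type may be stamped on it; anything else is a bug.
	 */
	static inline void __SetPageNetpp(struct page *page)
	{
		VM_BUG_ON_PAGE(page->page_type != UINT_MAX, page);
		page->page_type = PGTY_netpp;	/* stand-in value */
	}

A zero-initialized niov->page_type therefore trips the
VM_BUG_ON_PAGE() just as reliably as slab garbage does.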
> >
> > The same path is reachable through devmem dmabuf binding via
> > netdev_nl_bind_rx_doit() -> net_devmem_bind_dmabuf_to_queue().
> >
> > Add a net_iov_init() helper that stamps ->owner, ->type and the
> > ->page_type sentinel, and use it from both the devmem and io_uring
> > zcrx niov init loops.
> >
> > Fixes: db359fccf212 ("mm: introduce a new page type for page pool in page type")
> > Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> 
> Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
> 
> > ---
> > CC: asml.silence@gmail.com
> > CC: axboe@kernel.dk
> > CC: almasrymina@google.com
> > CC: sdf@fomichev.me
> > CC: hawk@kernel.org
> > CC: akpm@linux-foundation.org
> > CC: rppt@kernel.org
> > CC: vbabka@kernel.org
> > CC: io-uring@vger.kernel.org
> > ---
> >  include/net/netmem.h | 15 +++++++++++++++
> >  io_uring/zcrx.c      |  3 +--
> >  net/core/devmem.c    |  3 +--
> >  3 files changed, 17 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/net/netmem.h b/include/net/netmem.h
> > index 507b74c9f52d..78fe51e5756b 100644
> > --- a/include/net/netmem.h
> > +++ b/include/net/netmem.h
> > @@ -127,6 +127,21 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov)
> >       return niov - net_iov_owner(niov)->niovs;
> >  }
> >
> > +/* Initialize a niov: stamp the owning area, the memory provider type,
> > + * and the page_type "no type" sentinel expected by the page-type API
> > + * (see PAGE_TYPE_OPS in <linux/page-flags.h>) so that
> > + * page_pool_set_pp_info() can later call __SetPageNetpp() on a niov
> > + * cast to struct page.
> > + */
> > +static inline void net_iov_init(struct net_iov *niov,
> > +                             struct net_iov_area *owner,
> > +                             enum net_iov_type type)
> > +{
> > +     niov->owner = owner;
> > +     niov->type = type;
> > +     niov->page_type = UINT_MAX;
> > +}
> > +
> >  /* netmem */
> >
> >  /**
> > diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
> > index 7b93c87b8371..19837e0b5e91 100644
> > --- a/io_uring/zcrx.c
> > +++ b/io_uring/zcrx.c
> > @@ -495,10 +495,9 @@ static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
> >       for (i = 0; i < nr_iovs; i++) {
> >               struct net_iov *niov = &area->nia.niovs[i];
> >
> > -             niov->owner = &area->nia;
> > +             net_iov_init(niov, &area->nia, NET_IOV_IOURING);
> >               area->freelist[i] = i;
> >               atomic_set(&area->user_refs[i], 0);
> > -             niov->type = NET_IOV_IOURING;
> >       }
> >
> >       if (ifq->dev) {
> > diff --git a/net/core/devmem.c b/net/core/devmem.c
> > index cde4c89bc146..468344739db2 100644
> > --- a/net/core/devmem.c
> > +++ b/net/core/devmem.c
> > @@ -297,8 +297,7 @@ net_devmem_bind_dmabuf(struct net_device *dev,
> >
> >               for (i = 0; i < owner->area.num_niovs; i++) {
> >                       niov = &owner->area.niovs[i];
> > -                     niov->type = NET_IOV_DMABUF;
> > -                     niov->owner = &owner->area;
> > +                     net_iov_init(niov, &owner->area, NET_IOV_DMABUF);
> >                       page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
> >                                                     net_devmem_get_dma_addr(niov));
> >                       if (direction == DMA_TO_DEVICE)
