* [PATCH net] net: add net_iov_init() and use it to initialize ->page_type
@ 2026-04-28 2:53 Jakub Kicinski
2026-04-28 7:57 ` Vlastimil Babka (SUSE)
0 siblings, 1 reply; 3+ messages in thread
From: Jakub Kicinski @ 2026-04-28 2:53 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, Jakub Kicinski,
asml.silence, axboe, almasrymina, sdf, hawk, akpm, rppt, vbabka,
io-uring

Commit db359fccf212 ("mm: introduce a new page type for page pool in
page type") added a page_type field to struct net_iov at the same
offset as struct page::page_type, so that page_pool_set_pp_info() can
call __SetPageNetpp() uniformly on both pages and net_iovs.

The page-type API requires the field to hold the UINT_MAX "no type"
sentinel before a type can be set; for real struct page that invariant
is established by the page allocator on free. struct net_iov is not
allocated through the page allocator, so the field is left as zero
(io_uring zcrx, which allocates with __GFP_ZERO) or as slab garbage
(devmem, which uses kvmalloc_objs() without zeroing). When the page
pool then calls page_pool_set_pp_info() on a freshly-bound niov,
__SetPageNetpp()'s VM_BUG_ON_PAGE(page->page_type != UINT_MAX) fires
and the kernel BUGs. Triggered in selftests by io_uring zcrx setup
through the fbnic queue restart path:

  kernel BUG at ./include/linux/page-flags.h:1062!
  RIP: 0010:page_pool_set_pp_info (./include/linux/page-flags.h:1062
                                   net/core/page_pool.c:716)
  Call Trace:
   <TASK>
   net_mp_niov_set_page_pool (net/core/page_pool.c:1360)
   io_pp_zc_alloc_netmems (io_uring/zcrx.c:1089 io_uring/zcrx.c:1110)
   fbnic_fill_bdq (./include/net/page_pool/helpers.h:160
                   drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:906)
   __fbnic_nv_restart (drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2470
                       drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2874)
   fbnic_queue_start (drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2903)
   netdev_rx_queue_reconfig (net/core/netdev_rx_queue.c:137)
   __netif_mp_open_rxq (net/core/netdev_rx_queue.c:234)
   io_register_zcrx (io_uring/zcrx.c:818 io_uring/zcrx.c:903)
   __io_uring_register (io_uring/register.c:931)
   __do_sys_io_uring_register (io_uring/register.c:1029)
   do_syscall_64 (arch/x86/entry/syscall_64.c:63
                  arch/x86/entry/syscall_64.c:94)
   </TASK>

The same path is reachable through devmem dmabuf binding via
netdev_nl_bind_rx_doit() -> net_devmem_bind_dmabuf_to_queue().

Add a net_iov_init() helper that stamps ->owner, ->type and the
->page_type sentinel, and use it from both the devmem and io_uring
zcrx niov init loops.

Fixes: db359fccf212 ("mm: introduce a new page type for page pool in page type")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
CC: asml.silence@gmail.com
CC: axboe@kernel.dk
CC: almasrymina@google.com
CC: sdf@fomichev.me
CC: hawk@kernel.org
CC: akpm@linux-foundation.org
CC: rppt@kernel.org
CC: vbabka@kernel.org
CC: io-uring@vger.kernel.org
---
include/net/netmem.h | 15 +++++++++++++++
io_uring/zcrx.c | 3 +--
net/core/devmem.c | 3 +--
3 files changed, 17 insertions(+), 4 deletions(-)
diff --git a/include/net/netmem.h b/include/net/netmem.h
index 507b74c9f52d..78fe51e5756b 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -127,6 +127,21 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov)
return niov - net_iov_owner(niov)->niovs;
}
+/* Initialize a niov: stamp the owning area, the memory provider type,
+ * and the page_type "no type" sentinel expected by the page-type API
+ * (see PAGE_TYPE_OPS in <linux/page-flags.h>) so that
+ * page_pool_set_pp_info() can later call __SetPageNetpp() on a niov
+ * cast to struct page.
+ */
+static inline void net_iov_init(struct net_iov *niov,
+ struct net_iov_area *owner,
+ enum net_iov_type type)
+{
+ niov->owner = owner;
+ niov->type = type;
+ niov->page_type = UINT_MAX;
+}
+
/* netmem */
/**
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 7b93c87b8371..19837e0b5e91 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -495,10 +495,9 @@ static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
for (i = 0; i < nr_iovs; i++) {
struct net_iov *niov = &area->nia.niovs[i];
- niov->owner = &area->nia;
+ net_iov_init(niov, &area->nia, NET_IOV_IOURING);
area->freelist[i] = i;
atomic_set(&area->user_refs[i], 0);
- niov->type = NET_IOV_IOURING;
}
if (ifq->dev) {
diff --git a/net/core/devmem.c b/net/core/devmem.c
index cde4c89bc146..468344739db2 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -297,8 +297,7 @@ net_devmem_bind_dmabuf(struct net_device *dev,
for (i = 0; i < owner->area.num_niovs; i++) {
niov = &owner->area.niovs[i];
- niov->type = NET_IOV_DMABUF;
- niov->owner = &owner->area;
+ net_iov_init(niov, &owner->area, NET_IOV_DMABUF);
page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
net_devmem_get_dma_addr(niov));
if (direction == DMA_TO_DEVICE)
--
2.53.0
* Re: [PATCH net] net: add net_iov_init() and use it to initialize ->page_type
2026-04-28 2:53 [PATCH net] net: add net_iov_init() and use it to initialize ->page_type Jakub Kicinski
@ 2026-04-28 7:57 ` Vlastimil Babka (SUSE)
2026-04-28 8:14 ` Byungchul Park
0 siblings, 1 reply; 3+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-04-28 7:57 UTC (permalink / raw)
To: Jakub Kicinski, davem, Byungchul Park
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, asml.silence,
axboe, almasrymina, sdf, hawk, akpm, rppt, io-uring
+Cc Byungchul
On 4/28/26 04:53, Jakub Kicinski wrote:
> Commit db359fccf212 ("mm: introduce a new page type for page pool in
> page type") added a page_type field to struct net_iov at the same
> offset as struct page::page_type, so that page_pool_set_pp_info() can
> call __SetPageNetpp() uniformly on both pages and net_iovs.
>
> The page-type API requires the field to hold the UINT_MAX "no type"
> sentinel before a type can be set; for real struct page that invariant
> is established by the page allocator on free. struct net_iov is not
> allocated through the page allocator, so the field is left as zero
> (io_uring zcrx, which uses __GFP_ZERO) or as slab garbage (devmem,
> which uses kvmalloc_objs() without zeroing). When the page pool then
> calls page_pool_set_pp_info() on a freshly-bound niov,
> __SetPageNetpp()'s VM_BUG_ON_PAGE(page->page_type != UINT_MAX) fires
> and the kernel BUGs. Triggered in selftests by io_uring zcrx setup
> through the fbnic queue restart path:
>
> kernel BUG at ./include/linux/page-flags.h:1062!
> RIP: 0010:page_pool_set_pp_info (./include/linux/page-flags.h:1062
> net/core/page_pool.c:716)
> Call Trace:
> <TASK>
> net_mp_niov_set_page_pool (net/core/page_pool.c:1360)
> io_pp_zc_alloc_netmems (io_uring/zcrx.c:1089 io_uring/zcrx.c:1110)
> fbnic_fill_bdq (./include/net/page_pool/helpers.h:160
> drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:906)
> __fbnic_nv_restart (drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2470
> drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2874)
> fbnic_queue_start (drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2903)
> netdev_rx_queue_reconfig (net/core/netdev_rx_queue.c:137)
> __netif_mp_open_rxq (net/core/netdev_rx_queue.c:234)
> io_register_zcrx (io_uring/zcrx.c:818 io_uring/zcrx.c:903)
> __io_uring_register (io_uring/register.c:931)
> __do_sys_io_uring_register (io_uring/register.c:1029)
> do_syscall_64 (arch/x86/entry/syscall_64.c:63
> arch/x86/entry/syscall_64.c:94)
> </TASK>
>
> The same path is reachable through devmem dmabuf binding via
> netdev_nl_bind_rx_doit() -> net_devmem_bind_dmabuf_to_queue().
>
> Add a net_iov_init() helper that stamps ->owner, ->type and the
> ->page_type sentinel, and use it from both the devmem and io_uring
> zcrx niov init loops.
>
> Fixes: db359fccf212 ("mm: introduce a new page type for page pool in page type")
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
> ---
> CC: asml.silence@gmail.com
> CC: axboe@kernel.dk
> CC: almasrymina@google.com
> CC: sdf@fomichev.me
> CC: hawk@kernel.org
> CC: akpm@linux-foundation.org
> CC: rppt@kernel.org
> CC: vbabka@kernel.org
> CC: io-uring@vger.kernel.org
> ---
> include/net/netmem.h | 15 +++++++++++++++
> io_uring/zcrx.c | 3 +--
> net/core/devmem.c | 3 +--
> 3 files changed, 17 insertions(+), 4 deletions(-)
>
> diff --git a/include/net/netmem.h b/include/net/netmem.h
> index 507b74c9f52d..78fe51e5756b 100644
> --- a/include/net/netmem.h
> +++ b/include/net/netmem.h
> @@ -127,6 +127,21 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov)
> return niov - net_iov_owner(niov)->niovs;
> }
>
> +/* Initialize a niov: stamp the owning area, the memory provider type,
> + * and the page_type "no type" sentinel expected by the page-type API
> + * (see PAGE_TYPE_OPS in <linux/page-flags.h>) so that
> + * page_pool_set_pp_info() can later call __SetPageNetpp() on a niov
> + * cast to struct page.
> + */
> +static inline void net_iov_init(struct net_iov *niov,
> + struct net_iov_area *owner,
> + enum net_iov_type type)
> +{
> + niov->owner = owner;
> + niov->type = type;
> + niov->page_type = UINT_MAX;
> +}
> +
> /* netmem */
>
> /**
> diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
> index 7b93c87b8371..19837e0b5e91 100644
> --- a/io_uring/zcrx.c
> +++ b/io_uring/zcrx.c
> @@ -495,10 +495,9 @@ static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
> for (i = 0; i < nr_iovs; i++) {
> struct net_iov *niov = &area->nia.niovs[i];
>
> - niov->owner = &area->nia;
> + net_iov_init(niov, &area->nia, NET_IOV_IOURING);
> area->freelist[i] = i;
> atomic_set(&area->user_refs[i], 0);
> - niov->type = NET_IOV_IOURING;
> }
>
> if (ifq->dev) {
> diff --git a/net/core/devmem.c b/net/core/devmem.c
> index cde4c89bc146..468344739db2 100644
> --- a/net/core/devmem.c
> +++ b/net/core/devmem.c
> @@ -297,8 +297,7 @@ net_devmem_bind_dmabuf(struct net_device *dev,
>
> for (i = 0; i < owner->area.num_niovs; i++) {
> niov = &owner->area.niovs[i];
> - niov->type = NET_IOV_DMABUF;
> - niov->owner = &owner->area;
> + net_iov_init(niov, &owner->area, NET_IOV_DMABUF);
> page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
> net_devmem_get_dma_addr(niov));
> if (direction == DMA_TO_DEVICE)
* Re: [PATCH net] net: add net_iov_init() and use it to initialize ->page_type
2026-04-28 7:57 ` Vlastimil Babka (SUSE)
@ 2026-04-28 8:14 ` Byungchul Park
0 siblings, 0 replies; 3+ messages in thread
From: Byungchul Park @ 2026-04-28 8:14 UTC (permalink / raw)
To: Vlastimil Babka (SUSE)
Cc: Jakub Kicinski, davem, netdev, edumazet, pabeni, andrew+netdev,
horms, asml.silence, axboe, almasrymina, sdf, hawk, akpm, rppt,
io-uring, kernel_team
On Tue, Apr 28, 2026 at 09:57:08AM +0200, Vlastimil Babka (SUSE) wrote:
> +Cc Byungchul
>
> On 4/28/26 04:53, Jakub Kicinski wrote:
> > Commit db359fccf212 ("mm: introduce a new page type for page pool in
> > page type") added a page_type field to struct net_iov at the same
> > offset as struct page::page_type, so that page_pool_set_pp_info() can
> > call __SetPageNetpp() uniformly on both pages and net_iovs.
> >
> > The page-type API requires the field to hold the UINT_MAX "no type"
> > sentinel before a type can be set; for real struct page that invariant
> > is established by the page allocator on free. struct net_iov is not
> > allocated through the page allocator, so the field is left as zero
My bad. Overlooked the point. Thanks for the fix.
Acked-by: Byungchul Park <byungchul@sk.com>
Byungchul
> > (io_uring zcrx, which uses __GFP_ZERO) or as slab garbage (devmem,
> > which uses kvmalloc_objs() without zeroing). When the page pool then
> > calls page_pool_set_pp_info() on a freshly-bound niov,
> > __SetPageNetpp()'s VM_BUG_ON_PAGE(page->page_type != UINT_MAX) fires
> > and the kernel BUGs. Triggered in selftests by io_uring zcrx setup
> > through the fbnic queue restart path:
> >
> > kernel BUG at ./include/linux/page-flags.h:1062!
> > RIP: 0010:page_pool_set_pp_info (./include/linux/page-flags.h:1062
> > net/core/page_pool.c:716)
> > Call Trace:
> > <TASK>
> > net_mp_niov_set_page_pool (net/core/page_pool.c:1360)
> > io_pp_zc_alloc_netmems (io_uring/zcrx.c:1089 io_uring/zcrx.c:1110)
> > fbnic_fill_bdq (./include/net/page_pool/helpers.h:160
> > drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:906)
> > __fbnic_nv_restart (drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2470
> > drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2874)
> > fbnic_queue_start (drivers/net/ethernet/meta/fbnic/fbnic_txrx.c:2903)
> > netdev_rx_queue_reconfig (net/core/netdev_rx_queue.c:137)
> > __netif_mp_open_rxq (net/core/netdev_rx_queue.c:234)
> > io_register_zcrx (io_uring/zcrx.c:818 io_uring/zcrx.c:903)
> > __io_uring_register (io_uring/register.c:931)
> > __do_sys_io_uring_register (io_uring/register.c:1029)
> > do_syscall_64 (arch/x86/entry/syscall_64.c:63
> > arch/x86/entry/syscall_64.c:94)
> > </TASK>
> >
> > The same path is reachable through devmem dmabuf binding via
> > netdev_nl_bind_rx_doit() -> net_devmem_bind_dmabuf_to_queue().
> >
> > Add a net_iov_init() helper that stamps ->owner, ->type and the
> > ->page_type sentinel, and use it from both the devmem and io_uring
> > zcrx niov init loops.
> >
> > Fixes: db359fccf212 ("mm: introduce a new page type for page pool in page type")
> > Signed-off-by: Jakub Kicinski <kuba@kernel.org>
>
> Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
>
> > ---
> > CC: asml.silence@gmail.com
> > CC: axboe@kernel.dk
> > CC: almasrymina@google.com
> > CC: sdf@fomichev.me
> > CC: hawk@kernel.org
> > CC: akpm@linux-foundation.org
> > CC: rppt@kernel.org
> > CC: vbabka@kernel.org
> > CC: io-uring@vger.kernel.org
> > ---
> > include/net/netmem.h | 15 +++++++++++++++
> > io_uring/zcrx.c | 3 +--
> > net/core/devmem.c | 3 +--
> > 3 files changed, 17 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/net/netmem.h b/include/net/netmem.h
> > index 507b74c9f52d..78fe51e5756b 100644
> > --- a/include/net/netmem.h
> > +++ b/include/net/netmem.h
> > @@ -127,6 +127,21 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov)
> > return niov - net_iov_owner(niov)->niovs;
> > }
> >
> > +/* Initialize a niov: stamp the owning area, the memory provider type,
> > + * and the page_type "no type" sentinel expected by the page-type API
> > + * (see PAGE_TYPE_OPS in <linux/page-flags.h>) so that
> > + * page_pool_set_pp_info() can later call __SetPageNetpp() on a niov
> > + * cast to struct page.
> > + */
> > +static inline void net_iov_init(struct net_iov *niov,
> > + struct net_iov_area *owner,
> > + enum net_iov_type type)
> > +{
> > + niov->owner = owner;
> > + niov->type = type;
> > + niov->page_type = UINT_MAX;
> > +}
> > +
> > /* netmem */
> >
> > /**
> > diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
> > index 7b93c87b8371..19837e0b5e91 100644
> > --- a/io_uring/zcrx.c
> > +++ b/io_uring/zcrx.c
> > @@ -495,10 +495,9 @@ static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
> > for (i = 0; i < nr_iovs; i++) {
> > struct net_iov *niov = &area->nia.niovs[i];
> >
> > - niov->owner = &area->nia;
> > + net_iov_init(niov, &area->nia, NET_IOV_IOURING);
> > area->freelist[i] = i;
> > atomic_set(&area->user_refs[i], 0);
> > - niov->type = NET_IOV_IOURING;
> > }
> >
> > if (ifq->dev) {
> > diff --git a/net/core/devmem.c b/net/core/devmem.c
> > index cde4c89bc146..468344739db2 100644
> > --- a/net/core/devmem.c
> > +++ b/net/core/devmem.c
> > @@ -297,8 +297,7 @@ net_devmem_bind_dmabuf(struct net_device *dev,
> >
> > for (i = 0; i < owner->area.num_niovs; i++) {
> > niov = &owner->area.niovs[i];
> > - niov->type = NET_IOV_DMABUF;
> > - niov->owner = &owner->area;
> > + net_iov_init(niov, &owner->area, NET_IOV_DMABUF);
> > page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
> > net_devmem_get_dma_addr(niov));
> > if (direction == DMA_TO_DEVICE)