From: Simon Horman <horms@kernel.org>
To: Jason Xing <kerneljasonxing@gmail.com>
Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, bjorn@kernel.org, magnus.karlsson@intel.com,
	maciej.fijalkowski@intel.com, jonathan.lemon@gmail.com,
	sdf@fomichev.me, ast@kernel.org, daniel@iogearbox.net,
	hawk@kernel.org, john.fastabend@gmail.com, joe@dama.to,
	willemdebruijn.kernel@gmail.com, bpf@vger.kernel.org,
	netdev@vger.kernel.org, Jason Xing <kernelxing@tencent.com>
Subject: Re: [PATCH net-next v3 3/9] xsk: add xsk_alloc_batch_skb() to build skbs in batch
Date: Fri, 24 Oct 2025 14:33:42 +0100
Message-ID: <aPuANsZ6_xj8YY3D@horms.kernel.org>
In-Reply-To: <20251021131209.41491-4-kerneljasonxing@gmail.com>

On Tue, Oct 21, 2025 at 09:12:03PM +0800, Jason Xing wrote:

...

> diff --git a/net/core/skbuff.c b/net/core/skbuff.c

...

> @@ -615,6 +617,105 @@ static void *kmalloc_reserve(unsigned int *size, gfp_t flags, int node,
>  	return obj;
>  }
>  
> +int xsk_alloc_batch_skb(struct xdp_sock *xs, u32 nb_pkts, u32 nb_descs, int *err)
> +{
> +	struct xsk_batch *batch = &xs->batch;
> +	struct xdp_desc *descs = batch->desc_cache;
> +	struct sk_buff **skbs = batch->skb_cache;
> +	gfp_t gfp_mask = xs->sk.sk_allocation;
> +	struct net_device *dev = xs->dev;
> +	int node = NUMA_NO_NODE;
> +	struct sk_buff *skb;
> +	u32 i = 0, j = 0;
> +	bool pfmemalloc;
> +	u32 base_len;
> +	u8 *data;
> +
> +	base_len = max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom));
> +	if (!(dev->priv_flags & IFF_TX_SKB_NO_LINEAR))
> +		base_len += dev->needed_tailroom;
> +
> +	if (batch->skb_count >= nb_pkts)
> +		goto build;
> +
> +	if (xs->skb) {
> +		i = 1;
> +		batch->skb_count++;
> +	}
> +
> +	batch->skb_count += kmem_cache_alloc_bulk(net_hotdata.skbuff_cache,
> +						  gfp_mask, nb_pkts - batch->skb_count,
> +						  (void **)&skbs[batch->skb_count]);
> +	if (batch->skb_count < nb_pkts)
> +		nb_pkts = batch->skb_count;
> +
> +build:
> +	for (i = 0, j = 0; j < nb_descs; j++) {
> +		if (!xs->skb) {
> +			u32 size = base_len + descs[j].len;
> +
> +			/* In case we don't have enough allocated skbs */
> +			if (i >= nb_pkts) {
> +				*err = -EAGAIN;
> +				break;
> +			}
> +
> +			if (sk_wmem_alloc_get(&xs->sk) > READ_ONCE(xs->sk.sk_sndbuf)) {
> +				*err = -EAGAIN;
> +				break;
> +			}
> +
> +			skb = skbs[batch->skb_count - 1 - i];
> +
> +			prefetchw(skb);
> +			/* We do our best to align skb_shared_info on a separate cache
> +			 * line. It usually works because kmalloc(X > SMP_CACHE_BYTES) gives
> +			 * aligned memory blocks, unless SLUB/SLAB debug is enabled.
> +			 * Both skb->head and skb_shared_info are cache line aligned.
> +			 */
> +			data = kmalloc_reserve(&size, gfp_mask, node, &pfmemalloc);
> +			if (unlikely(!data)) {
> +				*err = -ENOBUFS;
> +				break;
> +			}
> +			/* kmalloc_size_roundup() might give us more room than requested.
> +			 * Put skb_shared_info exactly at the end of allocated zone,
> +			 * to allow max possible filling before reallocation.
> +			 */
> +			prefetchw(data + SKB_WITH_OVERHEAD(size));
> +
> +			memset(skb, 0, offsetof(struct sk_buff, tail));
> +			__build_skb_around(skb, data, size);
> +			skb->pfmemalloc = pfmemalloc;
> +			skb_set_owner_w(skb, &xs->sk);
> +		} else if (unlikely(i == 0)) {
> +			/* We have a skb in cache that is left last time */
> +			kmem_cache_free(net_hotdata.skbuff_cache,
> +					skbs[batch->skb_count - 1]);
> +			skbs[batch->skb_count - 1] = xs->skb;
> +		}
> +
> +		skb = xsk_build_skb(xs, skb, &descs[j]);

Hi Jason,

Perhaps it cannot occur, but if we reach this line without the
if (!xs->skb) condition having been met on any iteration of the loop
this code sits inside, then skb will be uninitialised here.

Also, assuming the above doesn't occur, and perhaps this next case is
intentional, but if that condition is not met on a later iteration of
the loop, then skb will retain its value from a prior iteration.

Flagged by Smatch.
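
One way to make that explicit, assuming the intention is that an
in-progress xs->skb should be the skb handed to xsk_build_skb() for
continuation descriptors, might be something along these lines (only an
untested sketch):

	for (i = 0, j = 0; j < nb_descs; j++) {
		/* NULL unless a multi-buffer skb is already in progress */
		skb = xs->skb;
		if (!skb) {
			...
			skb = skbs[batch->skb_count - 1 - i];
			/* allocate data and build the skb as before */
			...
		} else if (unlikely(i == 0)) {
			...
		}

		skb = xsk_build_skb(xs, skb, &descs[j]);

That keeps skb initialised on every path, though the right fix depends
on what xsk_build_skb() expects as its second argument in the
multi-buffer case.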

> +		if (IS_ERR(skb)) {
> +			*err = PTR_ERR(skb);
> +			break;
> +		}
> +
> +		if (xp_mb_desc(&descs[j])) {
> +			xs->skb = skb;
> +			continue;
> +		}
> +
> +		xs->skb = NULL;
> +		i++;
> +		__skb_queue_tail(&batch->send_queue, skb);
> +	}
> +
> +	batch->skb_count -= i;
> +
> +	return j;
> +}

Thread overview: 24+ messages
2025-10-21 13:12 [PATCH net-next v3 0/9] xsk: batch xmit in copy mode Jason Xing
2025-10-21 13:12 ` [PATCH net-next v3 1/9] xsk: introduce XDP_GENERIC_XMIT_BATCH setsockopt Jason Xing
2025-10-24 13:30   ` Simon Horman
2025-10-25  9:08     ` Jason Xing
2025-10-28 14:44       ` Simon Horman
2025-10-29  0:00         ` Jason Xing
2025-10-21 13:12 ` [PATCH net-next v3 2/9] xsk: extend xsk_build_skb() to support passing an already allocated skb Jason Xing
2025-10-21 13:12 ` [PATCH net-next v3 3/9] xsk: add xsk_alloc_batch_skb() to build skbs in batch Jason Xing
2025-10-23 17:30   ` kernel test robot
2025-10-23 18:25   ` kernel test robot
2025-10-24 13:33   ` Simon Horman [this message]
2025-10-25  9:26     ` Jason Xing
2025-10-24 18:49   ` Stanislav Fomichev
2025-10-25  9:11     ` Jason Xing
2025-10-21 13:12 ` [PATCH net-next v3 4/9] xsk: add direct xmit in batch function Jason Xing
2025-10-21 13:12 ` [PATCH net-next v3 5/9] xsk: rename nb_pkts to nb_descs in xsk_tx_peek_release_desc_batch Jason Xing
2025-10-21 13:12 ` [PATCH net-next v3 6/9] xsk: extend xskq_cons_read_desc_batch to count nb_pkts Jason Xing
2025-10-21 13:12 ` [PATCH net-next v3 7/9] xsk: support batch xmit main logic Jason Xing
2025-10-24 13:32   ` Simon Horman
2025-10-25  9:09     ` Jason Xing
2025-10-21 13:12 ` [PATCH net-next v3 8/9] xsk: support generic batch xmit in copy mode Jason Xing
2025-10-24 18:52   ` Stanislav Fomichev
2025-10-25  9:28     ` Jason Xing
2025-10-21 13:12 ` [PATCH net-next v3 9/9] xsk: support dynamic xmit.more control for batch xmit Jason Xing
