From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Simon Horman <horms@kernel.org>
Cc: <intel-wired-lan@lists.osuosl.org>,
	Michal Kubiak <michal.kubiak@intel.com>,
	Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
	Tony Nguyen <anthony.l.nguyen@intel.com>,
	Przemek Kitszel <przemyslaw.kitszel@intel.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	"Jakub Kicinski" <kuba@kernel.org>,
	Paolo Abeni <pabeni@redhat.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	<nxne.cnse.osdt.itp.upstreaming@intel.com>, <bpf@vger.kernel.org>,
	<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH iwl-next v3 16/18] idpf: add support for XDP on Rx
Date: Fri, 1 Aug 2025 15:11:38 +0200
Message-ID: <9538e649-0e9c-45b7-a06f-d4e8250635a6@intel.com>
In-Reply-To: <20250731133557.GB8494@horms.kernel.org>

From: Simon Horman <horms@kernel.org>
Date: Thu, 31 Jul 2025 14:35:57 +0100

> On Wed, Jul 30, 2025 at 06:07:15PM +0200, Alexander Lobakin wrote:
>> Use libeth XDP infra to support running an XDP program during Rx
>> polling. This includes all of the possible verdicts/actions.
>> XDP Tx queues are cleaned only in "lazy" mode, when fewer than 1/4
>> of the descriptors on the ring are free. The libeth helper macros
>> for defining driver-specific XDP functions ensure the compiler can
>> uninline them when needed.
>> Use __LIBETH_WORD_ACCESS to parse descriptors more efficiently when
>> applicable. This gives a noticeable performance boost and code size
>> reduction on x86_64.
>>
>> Co-developed-by: Michal Kubiak <michal.kubiak@intel.com>
>> Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> 
> ...
> 
>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> 
> ...
> 
>> @@ -3127,14 +3125,12 @@ static bool idpf_rx_process_skb_fields(struct sk_buff *skb,
>>  	return !__idpf_rx_process_skb_fields(rxq, skb, xdp->desc);
>>  }
>>  
>> -static void
>> -idpf_xdp_run_pass(struct libeth_xdp_buff *xdp, struct napi_struct *napi,
>> -		  struct libeth_rq_napi_stats *ss,
>> -		  const struct virtchnl2_rx_flex_desc_adv_nic_3 *desc)
>> -{
>> -	libeth_xdp_run_pass(xdp, NULL, napi, ss, desc, NULL,
>> -			    idpf_rx_process_skb_fields);
>> -}
>> +LIBETH_XDP_DEFINE_START();
>> +LIBETH_XDP_DEFINE_RUN(static idpf_xdp_run_pass, idpf_xdp_run_prog,
>> +		      idpf_xdp_tx_flush_bulk, idpf_rx_process_skb_fields);
>> +LIBETH_XDP_DEFINE_FINALIZE(static idpf_xdp_finalize_rx, idpf_xdp_tx_flush_bulk,
>> +			   idpf_xdp_tx_finalize);
>> +LIBETH_XDP_DEFINE_END();
>>  
>>  /**
>>   * idpf_rx_hsplit_wa - handle header buffer overflows and split errors
>> @@ -3222,7 +3218,10 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
>>  	struct libeth_rq_napi_stats rs = { };
>>  	u16 ntc = rxq->next_to_clean;
>>  	LIBETH_XDP_ONSTACK_BUFF(xdp);
>> +	LIBETH_XDP_ONSTACK_BULK(bq);
>>  
>> +	libeth_xdp_tx_init_bulk(&bq, rxq->xdp_prog, rxq->xdp_rxq.dev,
>> +				rxq->xdpsqs, rxq->num_xdp_txq);
>>  	libeth_xdp_init_buff(xdp, &rxq->xdp, &rxq->xdp_rxq);
>>  
>>  	/* Process Rx packets bounded by budget */
>> @@ -3318,11 +3317,13 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
>>  		if (!idpf_rx_splitq_is_eop(rx_desc) || unlikely(!xdp->data))
>>  			continue;
>>  
>> -		idpf_xdp_run_pass(xdp, rxq->napi, &rs, rx_desc);
>> +		idpf_xdp_run_pass(xdp, &bq, rxq->napi, &rs, rx_desc);
>>  	}
>>  
>>  	rxq->next_to_clean = ntc;
>> +
>>  	libeth_xdp_save_buff(&rxq->xdp, xdp);
>> +	idpf_xdp_finalize_rx(&bq);
> 
> This will call __libeth_xdp_finalize_rx(), which calls rcu_read_unlock().
> But there doesn't seem to be a corresponding call to rcu_read_lock().
> 
> Flagged by Sparse.

It's a false positive: rcu_read_lock() is called in libeth_xdp_tx_init_bulk().
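
For reference, the pairing looks roughly like this (a simplified
sketch; the actual libeth helpers take more arguments and do more
work):

	static __always_inline void
	libeth_xdp_tx_init_bulk(struct libeth_xdp_tx_bulk *bq, ...)
	{
		rcu_read_lock();	/* once per NAPI poll */
		/* set up bq: XDP prog, netdev, XDPSQs */
	}

	static __always_inline void
	__libeth_xdp_finalize_rx(struct libeth_xdp_tx_bulk *bq, ...)
	{
		/* flush the pending Tx bulk, bump the XDPSQ tail */
		rcu_read_unlock();	/* pairs with tx_init_bulk() */
	}

The lock and unlock live in two different helpers, so Sparse can't
match them up and warns about the unbalanced critical section.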

> 
>>  
>>  	u64_stats_update_begin(&rxq->stats_sync);
>>  	u64_stats_add(&rxq->q_stats.packets, rs.packets);
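
By the way, regarding the "lazy" XDPSQ cleaning mentioned in the
commit message: completed descriptors are reclaimed only once less
than a quarter of the ring is free, something like the following
(illustrative pseudo-C only, the names are made up and don't match
the actual code):

	static void idpf_xdpsq_maybe_clean(struct xdpsq *sq)
	{
		u32 free = sq->desc_count - sq->pending;

		/* Amortize the cleaning cost: do nothing until less
		 * than 1/4 of the descriptors are free.
		 */
		if (free < sq->desc_count / 4)
			xdpsq_clean_completed(sq);
	}

This keeps the per-frame hot path cheap while still reclaiming room
for new Tx descriptors before the ring fills up.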

Thanks,
Olek

Thread overview: 32+ messages
2025-07-30 16:06 [PATCH iwl-next v3 00/18] idpf: add XDP support Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 01/18] idpf: add support for Tx refillqs in flow scheduling mode Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 02/18] idpf: improve when to set RE bit logic Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 03/18] idpf: simplify and fix splitq Tx packet rollback error path Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 04/18] idpf: replace flow scheduling buffer ring with buffer pool Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 05/18] idpf: stop Tx if there are insufficient buffer resources Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 06/18] idpf: remove obsolete stashing code Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 07/18] idpf: fix Rx descriptor ready check barrier in splitq Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 08/18] idpf: use a saner limit for default number of queues to allocate Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 09/18] idpf: link NAPIs to queues Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 10/18] idpf: add 4-byte completion descriptor definition Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 11/18] idpf: remove SW marker handling from NAPI Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 12/18] idpf: add support for nointerrupt queues Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 13/18] idpf: prepare structures to support XDP Alexander Lobakin
2025-08-01 22:30   ` Jakub Kicinski
2025-08-05 16:06     ` Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 14/18] idpf: implement XDP_SETUP_PROG in ndo_bpf for splitq Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 15/18] idpf: use generic functions to build xdp_buff and skb Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 16/18] idpf: add support for XDP on Rx Alexander Lobakin
2025-07-31 12:37   ` Simon Horman
2025-07-31 17:05     ` Kees Cook
2025-08-01 13:12       ` Alexander Lobakin
2025-08-01 13:17         ` Alexander Lobakin
2025-08-02 18:52           ` Kees Cook
2025-08-05  9:40             ` Simon Horman
2025-07-31 13:35   ` Simon Horman
2025-08-01 13:11     ` Alexander Lobakin [this message]
2025-08-01 22:33   ` Jakub Kicinski
2025-08-05 16:09     ` Alexander Lobakin
2025-08-05 22:46       ` Jakub Kicinski
2025-07-30 16:07 ` [PATCH iwl-next v3 17/18] idpf: add support for .ndo_xdp_xmit() Alexander Lobakin
2025-07-30 16:07 ` [PATCH iwl-next v3 18/18] idpf: add XDP RSS hash hint Alexander Lobakin
