netdev.vger.kernel.org archive mirror
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Cc: <intel-wired-lan@lists.osuosl.org>,
	Michal Kubiak <michal.kubiak@intel.com>,
	Tony Nguyen <anthony.l.nguyen@intel.com>,
	"Przemek Kitszel" <przemyslaw.kitszel@intel.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"Jesper Dangaard Brouer" <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Simon Horman <horms@kernel.org>, <bpf@vger.kernel.org>,
	<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next 11/16] idpf: prepare structures to support XDP
Date: Mon, 17 Mar 2025 15:50:11 +0100	[thread overview]
Message-ID: <a49604cf-0181-4e91-920d-206a799f67f2@intel.com> (raw)
In-Reply-To: <Z8r0MagKeUNHwfQk@boxer>

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Date: Fri, 7 Mar 2025 14:27:13 +0100

> On Wed, Mar 05, 2025 at 05:21:27PM +0100, Alexander Lobakin wrote:
>> From: Michal Kubiak <michal.kubiak@intel.com>
>>
>> Extend basic structures of the driver (e.g. 'idpf_vport', 'idpf_*_queue',
>> 'idpf_vport_user_config_data') by adding members necessary to support XDP.
>> Add extra XDP Tx queues needed to support XDP_TX and XDP_REDIRECT actions
>> without interfering with regular Tx traffic.
>> Also add functions dedicated to support XDP initialization for Rx and
>> Tx queues and call those functions from the existing algorithms of
>> queues configuration.

[...]

>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
>> index 59b1a1a09996..1ca322bfe92f 100644
>> --- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
>> +++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
>> @@ -186,9 +186,11 @@ static void idpf_get_channels(struct net_device *netdev,
>>  {
>>  	struct idpf_netdev_priv *np = netdev_priv(netdev);
>>  	struct idpf_vport_config *vport_config;
>> +	const struct idpf_vport *vport;
>>  	u16 num_txq, num_rxq;
>>  	u16 combined;
>>  
>> +	vport = idpf_netdev_to_vport(netdev);
>>  	vport_config = np->adapter->vport_config[np->vport_idx];
>>  
>>  	num_txq = vport_config->user_config.num_req_tx_qs;
>> @@ -202,8 +204,8 @@ static void idpf_get_channels(struct net_device *netdev,
>>  	ch->max_rx = vport_config->max_q.max_rxq;
>>  	ch->max_tx = vport_config->max_q.max_txq;
>>  
>> -	ch->max_other = IDPF_MAX_MBXQ;
>> -	ch->other_count = IDPF_MAX_MBXQ;
>> +	ch->max_other = IDPF_MAX_MBXQ + vport->num_xdp_txq;
>> +	ch->other_count = IDPF_MAX_MBXQ + vport->num_xdp_txq;
> 
> That's new I think. Do you explain somewhere that `other` will carry the
> xdpq count? Otherwise how would I know how to interpret this value?

Where? :D

> 
> Also from what I see num_txq carries (txq + xdpq) count. How is that
> affecting the `combined` from ethtool_channels?

No changes in combined/Ethtool: num_txq is not used there. Fields like
req_txq_num include skb queues only.
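To illustrate the point being discussed, here's a minimal userspace sketch of
the channel math (not the driver code itself; the IDPF_MAX_MBXQ value and all
names here are assumptions for illustration): 'other' carries the mailbox
queue plus the XDP Tx queues, while combined/rx/tx are derived from the skb
queue counts only, so XDP queues never leak into 'combined':

```c
#include <assert.h>

#define IDPF_MAX_MBXQ	1	/* assumed value, for illustration only */

struct channels {
	unsigned int combined_count;
	unsigned int rx_count;
	unsigned int tx_count;
	unsigned int other_count;
};

/* Simplified model of the get_channels logic discussed above */
static void get_channels(struct channels *ch, unsigned int num_txq,
			 unsigned int num_rxq, unsigned int num_xdp_txq)
{
	/* 'combined' counts queue pairs, from the skb queues only */
	unsigned int combined = num_txq < num_rxq ? num_txq : num_rxq;

	ch->combined_count = combined;
	ch->rx_count = num_rxq - combined;
	ch->tx_count = num_txq - combined;
	/* XDP Tx queues are reported via 'other', next to the mailbox */
	ch->other_count = IDPF_MAX_MBXQ + num_xdp_txq;
}
```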

> 
>>  
>>  	ch->combined_count = combined;
>>  	ch->rx_count = num_rxq - combined;
>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
>> index 2594ca38e8ca..0f4edc9cd1ad 100644
> 
> (...)
> 
>> +
>> +/**
>> + * __idpf_xdp_rxq_info_init - Setup XDP RxQ info for a given Rx queue
>> + * @rxq: Rx queue for which the resources are setup
>> + * @arg: flag indicating if the HW works in split queue mode
>> + *
>> + * Return: 0 on success, negative on failure.
>> + */
>> +static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq, void *arg)
>> +{
>> +	const struct idpf_vport *vport = rxq->q_vector->vport;
>> +	bool split = idpf_is_queue_model_split(vport->rxq_model);
>> +	const struct page_pool *pp;
>> +	int err;
>> +
>> +	err = __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx,
>> +				 rxq->q_vector->napi.napi_id,
>> +				 rxq->rx_buf_size);
>> +	if (err)
>> +		return err;
>> +
>> +	pp = split ? rxq->bufq_sets[0].bufq.pp : rxq->pp;
>> +	xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, pp);
>> +
>> +	if (!split)
>> +		return 0;
> 
> why do you care about splitq model if on next patch you don't allow
> XDP_SETUP_PROG for that?

This function is called unconditionally for both queue models. If we
didn't account for it here, we'd break the regular traffic flow.

(singleq will be removed soon, don't take it seriously anyway)

[...]

>> +int idpf_vport_xdpq_get(const struct idpf_vport *vport)
>> +{
>> +	struct libeth_xdpsq_timer **timers __free(kvfree) = NULL;
> 
> please bear with me here - so this array will exist as long as there is a
> single timers[i] allocated? even though it's a local var?

No problem.

No, this array is freed when the function exits. It is an array of
pointers used to iterate in a loop and assign timers to queues; once we
exit this function, it's no longer needed.
I can't place the whole array on the stack since I don't know the
actual queue count in advance, plus it can be really big (1024 pointers
* 8 bytes = 8 KB; even 128 or 256 queues is already 1-2 KB).

The actual timers are allocated separately and NUMA-locally below.
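The pattern can be sketched in plain userspace C (all names here are
hypothetical; in the kernel the scratch array is freed automatically via
__free(kvfree) instead of an explicit free()): a heap-allocated scratch array
of pointers only exists to hand the individually allocated timers over to the
queues, then the array itself is discarded while the timers live on:

```c
#include <assert.h>
#include <stdlib.h>

struct timer { int id; };
struct queue { struct timer *timer; };

static int assign_timers(struct queue *queues, unsigned int nr)
{
	struct timer **timers;	/* scratch array, freed before returning */
	unsigned int i;

	timers = calloc(nr, sizeof(*timers));
	if (!timers)
		return -1;

	/* Allocate each timer separately (NUMA-locally in the driver) */
	for (i = 0; i < nr; i++) {
		timers[i] = calloc(1, sizeof(*timers[i]));
		if (!timers[i])
			goto err_free;
		timers[i]->id = i;
	}

	/* Ownership of the timers moves to the queues here */
	for (i = 0; i < nr; i++)
		queues[i].timer = timers[i];

	free(timers);	/* frees the pointer array only, not the timers */
	return 0;

err_free:
	while (i--)
		free(timers[i]);
	free(timers);
	return -1;
}
```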

> 
> this way you avoid the need to store it in vport?
> 
>> +	struct net_device *dev;
>> +	u32 sqs;
>> +
>> +	if (!idpf_xdp_is_prog_ena(vport))
>> +		return 0;
>> +
>> +	timers = kvcalloc(vport->num_xdp_txq, sizeof(*timers), GFP_KERNEL);
>> +	if (!timers)
>> +		return -ENOMEM;
>> +
>> +	for (u32 i = 0; i < vport->num_xdp_txq; i++) {
>> +		timers[i] = kzalloc_node(sizeof(*timers[i]), GFP_KERNEL,
>> +					 cpu_to_mem(i));
>> +		if (!timers[i]) {
>> +			for (int j = i - 1; j >= 0; j--)
>> +				kfree(timers[j]);
>> +
>> +			return -ENOMEM;
>> +		}
>> +	}
>> +
>> +	dev = vport->netdev;
>> +	sqs = vport->xdp_txq_offset;
>> +
>> +	for (u32 i = sqs; i < vport->num_txq; i++) {
>> +		struct idpf_tx_queue *xdpq = vport->txqs[i];
>> +
>> +		xdpq->complq = xdpq->txq_grp->complq;
>> +
>> +		idpf_queue_clear(FLOW_SCH_EN, xdpq);
>> +		idpf_queue_clear(FLOW_SCH_EN, xdpq->complq);
>> +		idpf_queue_set(NOIRQ, xdpq);
>> +		idpf_queue_set(XDP, xdpq);
>> +		idpf_queue_set(XDP, xdpq->complq);
>> +
>> +		xdpq->timer = timers[i - sqs];
>> +		libeth_xdpsq_get(&xdpq->xdp_lock, dev, vport->xdpq_share);
>> +
>> +		xdpq->pending = 0;
>> +		xdpq->xdp_tx = 0;
>> +		xdpq->thresh = libeth_xdp_queue_threshold(xdpq->desc_count);
>> +	}
>> +
>> +	return 0;
>> +}

Thanks,
Olek

