From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: Alexander Lobakin <aleksander.lobakin@intel.com>
Cc: <intel-wired-lan@lists.osuosl.org>,
	Michal Kubiak <michal.kubiak@intel.com>,
	Tony Nguyen <anthony.l.nguyen@intel.com>,
	"Przemek Kitszel" <przemyslaw.kitszel@intel.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"Jesper Dangaard Brouer" <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Simon Horman <horms@kernel.org>, <bpf@vger.kernel.org>,
	<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next 11/16] idpf: prepare structures to support XDP
Date: Fri, 7 Mar 2025 14:27:13 +0100
Message-ID: <Z8r0MagKeUNHwfQk@boxer>
In-Reply-To: <20250305162132.1106080-12-aleksander.lobakin@intel.com>

On Wed, Mar 05, 2025 at 05:21:27PM +0100, Alexander Lobakin wrote:
> From: Michal Kubiak <michal.kubiak@intel.com>
> 
> Extend basic structures of the driver (e.g. 'idpf_vport', 'idpf_*_queue',
> 'idpf_vport_user_config_data') by adding members necessary to support XDP.
> Add extra XDP Tx queues needed to support XDP_TX and XDP_REDIRECT actions
> without interfering with regular Tx traffic.
> Also add functions dedicated to XDP initialization of the Rx and Tx
> queues and call them from the existing queue configuration logic.
> 
> Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
> Co-developed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
>  drivers/net/ethernet/intel/idpf/Kconfig       |   2 +-
>  drivers/net/ethernet/intel/idpf/Makefile      |   2 +
>  drivers/net/ethernet/intel/idpf/idpf.h        |  20 ++
>  drivers/net/ethernet/intel/idpf/idpf_txrx.h   |  86 ++++++--
>  drivers/net/ethernet/intel/idpf/xdp.h         |  17 ++
>  .../net/ethernet/intel/idpf/idpf_ethtool.c    |   6 +-
>  drivers/net/ethernet/intel/idpf/idpf_lib.c    |  21 +-
>  drivers/net/ethernet/intel/idpf/idpf_main.c   |   1 +
>  .../ethernet/intel/idpf/idpf_singleq_txrx.c   |   8 +-
>  drivers/net/ethernet/intel/idpf/idpf_txrx.c   | 109 +++++++---
>  .../net/ethernet/intel/idpf/idpf_virtchnl.c   |  26 +--
>  drivers/net/ethernet/intel/idpf/xdp.c         | 189 ++++++++++++++++++
>  12 files changed, 415 insertions(+), 72 deletions(-)
>  create mode 100644 drivers/net/ethernet/intel/idpf/xdp.h
>  create mode 100644 drivers/net/ethernet/intel/idpf/xdp.c
> 
> diff --git a/drivers/net/ethernet/intel/idpf/Kconfig b/drivers/net/ethernet/intel/idpf/Kconfig
> index 1addd663acad..7207ee4dbae8 100644
> --- a/drivers/net/ethernet/intel/idpf/Kconfig
> +++ b/drivers/net/ethernet/intel/idpf/Kconfig
> @@ -5,7 +5,7 @@ config IDPF
>  	tristate "Intel(R) Infrastructure Data Path Function Support"
>  	depends on PCI_MSI
>  	select DIMLIB
> -	select LIBETH
> +	select LIBETH_XDP
>  	help
>  	  This driver supports Intel(R) Infrastructure Data Path Function
>  	  devices.
> diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/ethernet/intel/idpf/Makefile
> index 2ce01a0b5898..c58abe6f8f5d 100644
> --- a/drivers/net/ethernet/intel/idpf/Makefile
> +++ b/drivers/net/ethernet/intel/idpf/Makefile
> @@ -17,3 +17,5 @@ idpf-y := \
>  	idpf_vf_dev.o
>  
>  idpf-$(CONFIG_IDPF_SINGLEQ)	+= idpf_singleq_txrx.o
> +
> +idpf-y				+= xdp.o
> diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
> index 50dde09c525b..4847760744ff 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf.h
> +++ b/drivers/net/ethernet/intel/idpf/idpf.h
> @@ -257,6 +257,10 @@ struct idpf_port_stats {
>   * @txq_model: Split queue or single queue queuing model
>   * @txqs: Used only in hotpath to get to the right queue very fast
>   * @crc_enable: Enable CRC insertion offload
> + * @xdpq_share: whether XDPSQ sharing is enabled
> + * @num_xdp_txq: number of XDPSQs
> + * @xdp_txq_offset: index of the first XDPSQ (== number of regular SQs)
> + * @xdp_prog: installed XDP program
>   * @num_rxq: Number of allocated RX queues
>   * @num_bufq: Number of allocated buffer queues
>   * @rxq_desc_count: RX queue descriptor count. *MUST* have enough descriptors
> @@ -303,6 +307,11 @@ struct idpf_vport {
>  	struct idpf_tx_queue **txqs;
>  	bool crc_enable;
>  
> +	bool xdpq_share;
> +	u16 num_xdp_txq;
> +	u16 xdp_txq_offset;
> +	struct bpf_prog *xdp_prog;
> +
>  	u16 num_rxq;
>  	u16 num_bufq;
>  	u32 rxq_desc_count;
> @@ -380,6 +389,7 @@ struct idpf_rss_data {
>   *		      ethtool
>   * @num_req_rxq_desc: Number of user requested RX queue descriptors through
>   *		      ethtool
> + * @xdp_prog: requested XDP program to install
>   * @user_flags: User toggled config flags
>   * @mac_filter_list: List of MAC filters
>   *
> @@ -391,6 +401,7 @@ struct idpf_vport_user_config_data {
>  	u16 num_req_rx_qs;
>  	u32 num_req_txq_desc;
>  	u32 num_req_rxq_desc;
> +	struct bpf_prog *xdp_prog;
>  	DECLARE_BITMAP(user_flags, __IDPF_USER_FLAGS_NBITS);
>  	struct list_head mac_filter_list;
>  };
> @@ -604,6 +615,15 @@ static inline int idpf_is_queue_model_split(u16 q_model)
>  	       q_model == VIRTCHNL2_QUEUE_MODEL_SPLIT;
>  }
>  
> +/**
> + * idpf_xdp_is_prog_ena - check if there is an XDP program on adapter
> + * @vport: vport to check
> + */
> +static inline bool idpf_xdp_is_prog_ena(const struct idpf_vport *vport)
> +{
> +	return vport->adapter && vport->xdp_prog;
> +}

(...)

> +
> +#endif /* _IDPF_XDP_H_ */
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
> index 59b1a1a09996..1ca322bfe92f 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
> @@ -186,9 +186,11 @@ static void idpf_get_channels(struct net_device *netdev,
>  {
>  	struct idpf_netdev_priv *np = netdev_priv(netdev);
>  	struct idpf_vport_config *vport_config;
> +	const struct idpf_vport *vport;
>  	u16 num_txq, num_rxq;
>  	u16 combined;
>  
> +	vport = idpf_netdev_to_vport(netdev);
>  	vport_config = np->adapter->vport_config[np->vport_idx];
>  
>  	num_txq = vport_config->user_config.num_req_tx_qs;
> @@ -202,8 +204,8 @@ static void idpf_get_channels(struct net_device *netdev,
>  	ch->max_rx = vport_config->max_q.max_rxq;
>  	ch->max_tx = vport_config->max_q.max_txq;
>  
> -	ch->max_other = IDPF_MAX_MBXQ;
> -	ch->other_count = IDPF_MAX_MBXQ;
> +	ch->max_other = IDPF_MAX_MBXQ + vport->num_xdp_txq;
> +	ch->other_count = IDPF_MAX_MBXQ + vport->num_xdp_txq;

That's new, I think. Do you explain somewhere that `other` will carry
the xdpq count? Otherwise, how would I know how to interpret this value?

Also, from what I see, num_txq carries the (txq + xdpq) count. How does
that affect `combined` from ethtool_channels?
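
To make that concrete, here is the arithmetic as I read it (the numbers
and the min() are my assumptions, not taken from the driver):

	/* hypothetical vport: 8 regular SQs + 8 XDPSQs, 8 RQs */
	u16 num_txq = 16;			/* txq + xdpq, per my reading */
	u16 num_rxq = 8;
	u16 combined = min(num_txq, num_rxq);	/* = 8 */

	ch->combined_count = combined;		/* 8 */
	ch->rx_count = num_rxq - combined;	/* 0 */
	ch->other_count = IDPF_MAX_MBXQ + 8;	/* mbxq + xdpqs */

so the user would see the XDPSQs twice: once inflating the tx side of
the channel counts and once more in `other`.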

>  
>  	ch->combined_count = combined;
>  	ch->rx_count = num_rxq - combined;
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
> index 2594ca38e8ca..0f4edc9cd1ad 100644

(...)

> +
> +/**
> + * __idpf_xdp_rxq_info_init - Setup XDP RxQ info for a given Rx queue
> + * @rxq: Rx queue for which the resources are setup
> + * @arg: flag indicating if the HW works in split queue mode
> + *
> + * Return: 0 on success, negative on failure.
> + */
> +static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq, void *arg)
> +{
> +	const struct idpf_vport *vport = rxq->q_vector->vport;
> +	bool split = idpf_is_queue_model_split(vport->rxq_model);
> +	const struct page_pool *pp;
> +	int err;
> +
> +	err = __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx,
> +				 rxq->q_vector->napi.napi_id,
> +				 rxq->rx_buf_size);
> +	if (err)
> +		return err;
> +
> +	pp = split ? rxq->bufq_sets[0].bufq.pp : rxq->pp;
> +	xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, pp);
> +
> +	if (!split)
> +		return 0;

why do you care about the queue model here if the next patch doesn't
allow XDP_SETUP_PROG for singleq anyway?
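
IOW, if XDP is splitq-only anyway, couldn't this collapse to something
like the below (an untested sketch, just your code with the singleq
branch dropped)?

	err = __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx,
				 rxq->q_vector->napi.napi_id,
				 rxq->rx_buf_size);
	if (err)
		return err;

	/* splitq is the only model that can have an XDP prog attached */
	xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq,
				      rxq->bufq_sets[0].bufq.pp);

	rxq->xdpqs = &vport->txqs[vport->xdp_txq_offset];
	rxq->num_xdp_txq = vport->num_xdp_txq;

	return 0;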

> +
> +	rxq->xdpqs = &vport->txqs[vport->xdp_txq_offset];
> +	rxq->num_xdp_txq = vport->num_xdp_txq;
> +
> +	return 0;
> +}
> +
> +/**
> + * idpf_xdp_rxq_info_init_all - initialize RxQ info for all Rx queues in vport
> + * @vport: vport to setup the info
> + *
> + * Return: 0 on success, negative on failure.
> + */
> +int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport)
> +{
> +	return idpf_rxq_for_each(vport, __idpf_xdp_rxq_info_init, NULL);
> +}
> +
> +/**
> + * __idpf_xdp_rxq_info_deinit - Deinit XDP RxQ info for a given Rx queue
> + * @rxq: Rx queue for which the resources are destroyed
> + * @arg: flag indicating if the HW works in split queue mode
> + *
> + * Return: always 0.
> + */
> +static int __idpf_xdp_rxq_info_deinit(struct idpf_rx_queue *rxq, void *arg)
> +{
> +	if (idpf_is_queue_model_split((size_t)arg)) {
> +		rxq->xdpqs = NULL;
> +		rxq->num_xdp_txq = 0;
> +	}
> +
> +	xdp_rxq_info_detach_mem_model(&rxq->xdp_rxq);
> +	xdp_rxq_info_unreg(&rxq->xdp_rxq);
> +
> +	return 0;
> +}
> +
> +/**
> + * idpf_xdp_rxq_info_deinit_all - deinit RxQ info for all Rx queues in vport
> + * @vport: vport to setup the info
> + */
> +void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport)
> +{
> +	idpf_rxq_for_each(vport, __idpf_xdp_rxq_info_deinit,
> +			  (void *)(size_t)vport->rxq_model);
> +}
> +
> +int idpf_vport_xdpq_get(const struct idpf_vport *vport)
> +{
> +	struct libeth_xdpsq_timer **timers __free(kvfree) = NULL;

please bear with me here - so this array will exist as long as there is
a single timers[i] allocated, even though it's a local var?

this way you avoid the need to store it in the vport?
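
For reference, my mental model of __free() here, based on what
DEFINE_FREE(kvfree, ...) in <linux/slab.h> generates - a rough sketch,
the names are mine:

	/* roughly what the DEFINE_FREE() wrapper looks like */
	static inline void sketch_free_kvfree(void *p)
	{
		kvfree(*(void **)p);	/* runs when the var leaves scope */
	}

	static int sketch(void)
	{
		/* stands in for: T **timers __free(kvfree) = NULL; */
		void **timers __attribute__((cleanup(sketch_free_kvfree))) =
			kvcalloc(4, sizeof(*timers), GFP_KERNEL);

		if (!timers)
			return -ENOMEM;

		/*
		 * the pointer array itself is kvfree()d automatically on
		 * every return path; whatever the entries point to (here,
		 * stored in xdpq->timer) is left alone
		 */
		return 0;
	}

so only the array of pointers dies with the function scope, not the
timers themselves - is that the behavior you're relying on?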

> +	struct net_device *dev;
> +	u32 sqs;
> +
> +	if (!idpf_xdp_is_prog_ena(vport))
> +		return 0;
> +
> +	timers = kvcalloc(vport->num_xdp_txq, sizeof(*timers), GFP_KERNEL);
> +	if (!timers)
> +		return -ENOMEM;
> +
> +	for (u32 i = 0; i < vport->num_xdp_txq; i++) {
> +		timers[i] = kzalloc_node(sizeof(*timers[i]), GFP_KERNEL,
> +					 cpu_to_mem(i));
> +		if (!timers[i]) {
> +			for (int j = i - 1; j >= 0; j--)
> +				kfree(timers[j]);
> +
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	dev = vport->netdev;
> +	sqs = vport->xdp_txq_offset;
> +
> +	for (u32 i = sqs; i < vport->num_txq; i++) {
> +		struct idpf_tx_queue *xdpq = vport->txqs[i];
> +
> +		xdpq->complq = xdpq->txq_grp->complq;
> +
> +		idpf_queue_clear(FLOW_SCH_EN, xdpq);
> +		idpf_queue_clear(FLOW_SCH_EN, xdpq->complq);
> +		idpf_queue_set(NOIRQ, xdpq);
> +		idpf_queue_set(XDP, xdpq);
> +		idpf_queue_set(XDP, xdpq->complq);
> +
> +		xdpq->timer = timers[i - sqs];
> +		libeth_xdpsq_get(&xdpq->xdp_lock, dev, vport->xdpq_share);
> +
> +		xdpq->pending = 0;
> +		xdpq->xdp_tx = 0;
> +		xdpq->thresh = libeth_xdp_queue_threshold(xdpq->desc_count);
> +	}
> +
> +	return 0;
> +}
> +
> +void idpf_vport_xdpq_put(const struct idpf_vport *vport)
> +{
> +	struct net_device *dev;
> +	u32 sqs;
> +
> +	if (!idpf_xdp_is_prog_ena(vport))
> +		return;
> +
> +	dev = vport->netdev;
> +	sqs = vport->xdp_txq_offset;
> +
> +	for (u32 i = sqs; i < vport->num_txq; i++) {
> +		struct idpf_tx_queue *xdpq = vport->txqs[i];
> +
> +		if (!idpf_queue_has_clear(XDP, xdpq))
> +			continue;
> +
> +		libeth_xdpsq_put(&xdpq->xdp_lock, dev);
> +
> +		kfree(xdpq->timer);
> +		idpf_queue_clear(NOIRQ, xdpq);
> +	}
> +}
> -- 
> 2.48.1
> 

Thread overview: 59+ messages
2025-03-05 16:21 [PATCH net-next 00/16] idpf: add XDP support Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 01/16] libeth: convert to netmem Alexander Lobakin
2025-03-06  0:13   ` Mina Almasry
2025-03-11 17:22     ` Alexander Lobakin
2025-03-11 17:43       ` Mina Almasry
2025-03-05 16:21 ` [PATCH net-next 02/16] libeth: support native XDP and register memory model Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 03/16] libeth: add a couple of XDP helpers (libeth_xdp) Alexander Lobakin
2025-03-11 14:05   ` Maciej Fijalkowski
2025-03-17 15:26     ` Alexander Lobakin
2025-03-19 16:19       ` Maciej Fijalkowski
2025-04-01 13:11         ` Alexander Lobakin
2025-04-08 13:38           ` Alexander Lobakin
2025-04-08 13:22     ` Alexander Lobakin
2025-04-08 13:51       ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 04/16] libeth: add XSk helpers Alexander Lobakin
2025-03-07 10:15   ` Maciej Fijalkowski
2025-03-12 17:03     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 05/16] idpf: fix Rx descriptor ready check barrier in splitq Alexander Lobakin
2025-03-07 10:17   ` Maciej Fijalkowski
2025-03-12 17:10     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 06/16] idpf: use a saner limit for default number of queues to allocate Alexander Lobakin
2025-03-07 10:32   ` Maciej Fijalkowski
2025-03-12 17:22     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 07/16] idpf: link NAPIs to queues Alexander Lobakin
2025-03-07 10:28   ` Eric Dumazet
2025-03-12 17:16     ` Alexander Lobakin
2025-03-18 17:10       ` Alexander Lobakin
2025-03-07 10:51   ` Maciej Fijalkowski
2025-03-12 17:25     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 08/16] idpf: make complq cleaning dependent on scheduling mode Alexander Lobakin
2025-03-07 11:11   ` Maciej Fijalkowski
2025-03-13 16:16     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 09/16] idpf: remove SW marker handling from NAPI Alexander Lobakin
2025-03-07 11:42   ` Maciej Fijalkowski
2025-03-13 16:50     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 10/16] idpf: add support for nointerrupt queues Alexander Lobakin
2025-03-07 12:10   ` Maciej Fijalkowski
2025-03-13 16:19     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 11/16] idpf: prepare structures to support XDP Alexander Lobakin
2025-03-07  1:12   ` Jakub Kicinski
2025-03-12 14:00     ` [Intel-wired-lan] " Alexander Lobakin
2025-03-07 13:27   ` Maciej Fijalkowski [this message]
2025-03-17 14:50     ` Alexander Lobakin
2025-03-19 16:29       ` Maciej Fijalkowski
2025-04-08 13:42         ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 12/16] idpf: implement XDP_SETUP_PROG in ndo_bpf for splitq Alexander Lobakin
2025-03-07 14:16   ` Maciej Fijalkowski
2025-03-17 14:58     ` Alexander Lobakin
2025-03-19 16:23       ` Maciej Fijalkowski
2025-03-05 16:21 ` [PATCH net-next 13/16] idpf: use generic functions to build xdp_buff and skb Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 14/16] idpf: add support for XDP on Rx Alexander Lobakin
2025-03-11 15:50   ` Maciej Fijalkowski
2025-04-08 13:28     ` Alexander Lobakin
2025-04-08 15:53       ` Maciej Fijalkowski
2025-03-05 16:21 ` [PATCH net-next 15/16] idpf: add support for .ndo_xdp_xmit() Alexander Lobakin
2025-03-11 16:08   ` Maciej Fijalkowski
2025-04-08 13:31     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 16/16] idpf: add XDP RSS hash hint Alexander Lobakin
2025-03-11 15:28 ` [PATCH net-next 00/16] idpf: add XDP support Alexander Lobakin
