From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: Alexander Lobakin <aleksander.lobakin@intel.com>
Cc: <intel-wired-lan@lists.osuosl.org>,
Michal Kubiak <michal.kubiak@intel.com>,
Tony Nguyen <anthony.l.nguyen@intel.com>,
"Przemek Kitszel" <przemyslaw.kitszel@intel.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
"Alexei Starovoitov" <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
"Jesper Dangaard Brouer" <hawk@kernel.org>,
John Fastabend <john.fastabend@gmail.com>,
Simon Horman <horms@kernel.org>, <bpf@vger.kernel.org>,
<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next 08/16] idpf: make complq cleaning dependent on scheduling mode
Date: Fri, 7 Mar 2025 12:11:05 +0100
Message-ID: <Z8rUSb+XocCGHSrt@boxer>
In-Reply-To: <20250305162132.1106080-9-aleksander.lobakin@intel.com>
On Wed, Mar 05, 2025 at 05:21:24PM +0100, Alexander Lobakin wrote:
> From: Michal Kubiak <michal.kubiak@intel.com>
>
> Extend completion queue cleaning function to support queue-based
> scheduling mode needed for XDP queues.
> Add 4-byte descriptor for queue-based scheduling mode and
> perform some refactoring to extract the common code for
> both scheduling modes.
>
> Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
> .../net/ethernet/intel/idpf/idpf_lan_txrx.h | 6 +-
> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 11 +-
> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 256 +++++++++++-------
> 3 files changed, 177 insertions(+), 96 deletions(-)
Some comments inline; I didn't trim the quoted patch, though.
>
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h
> index 8c7f8ef8f1a1..7f12c7f2e70e 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h
> +++ b/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h
> @@ -186,13 +186,17 @@ struct idpf_base_tx_desc {
> __le64 qw1; /* type_cmd_offset_bsz_l2tag1 */
> }; /* read used with buffer queues */
>
> -struct idpf_splitq_tx_compl_desc {
> +struct idpf_splitq_4b_tx_compl_desc {
> /* qid=[10:0] comptype=[13:11] rsvd=[14] gen=[15] */
> __le16 qid_comptype_gen;
> union {
> __le16 q_head; /* Queue head */
> __le16 compl_tag; /* Completion tag */
> } q_head_compl_tag;
> +}; /* writeback used with completion queues */
> +
> +struct idpf_splitq_tx_compl_desc {
> + struct idpf_splitq_4b_tx_compl_desc common;
> u8 ts[3];
> u8 rsvd; /* Reserved */
> }; /* writeback used with completion queues */
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> index b029f566e57c..9f938301b2c5 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> @@ -743,7 +743,9 @@ libeth_cacheline_set_assert(struct idpf_buf_queue, 64, 24, 32);
>
> /**
> * struct idpf_compl_queue - software structure representing a completion queue
> - * @comp: completion descriptor array
> + * @comp: 8-byte completion descriptor array
> + * @comp_4b: 4-byte completion descriptor array
> + * @desc_ring: virtual descriptor ring address
> * @txq_grp: See struct idpf_txq_group
> * @flags: See enum idpf_queue_flags_t
> * @desc_count: Number of descriptors
> @@ -763,7 +765,12 @@ libeth_cacheline_set_assert(struct idpf_buf_queue, 64, 24, 32);
> */
> struct idpf_compl_queue {
> __cacheline_group_begin_aligned(read_mostly);
> - struct idpf_splitq_tx_compl_desc *comp;
> + union {
> + struct idpf_splitq_tx_compl_desc *comp;
> + struct idpf_splitq_4b_tx_compl_desc *comp_4b;
> +
> + void *desc_ring;
> + };
> struct idpf_txq_group *txq_grp;
>
> DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index a3f6e8cff7a0..a240ed115e3e 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -156,8 +156,8 @@ static void idpf_compl_desc_rel(struct idpf_compl_queue *complq)
> return;
>
> dma_free_coherent(complq->netdev->dev.parent, complq->size,
> - complq->comp, complq->dma);
> - complq->comp = NULL;
> + complq->desc_ring, complq->dma);
> + complq->desc_ring = NULL;
> complq->next_to_use = 0;
> complq->next_to_clean = 0;
> }
> @@ -284,12 +284,16 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
> static int idpf_compl_desc_alloc(const struct idpf_vport *vport,
> struct idpf_compl_queue *complq)
> {
> - complq->size = array_size(complq->desc_count, sizeof(*complq->comp));
> + u32 desc_size;
>
> - complq->comp = dma_alloc_coherent(complq->netdev->dev.parent,
> - complq->size, &complq->dma,
> - GFP_KERNEL);
> - if (!complq->comp)
> + desc_size = idpf_queue_has(FLOW_SCH_EN, complq) ?
> + sizeof(*complq->comp) : sizeof(*complq->comp_4b);
> + complq->size = array_size(complq->desc_count, desc_size);
> +
> + complq->desc_ring = dma_alloc_coherent(complq->netdev->dev.parent,
> + complq->size, &complq->dma,
> + GFP_KERNEL);
> + if (!complq->desc_ring)
> return -ENOMEM;
>
> complq->next_to_use = 0;
> @@ -1921,8 +1925,46 @@ static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag,
> }
>
> /**
> - * idpf_tx_handle_rs_completion - clean a single packet and all of its buffers
> - * whether on the buffer ring or in the hash table
> + * idpf_parse_compl_desc - Parse the completion descriptor
> + * @desc: completion descriptor to be parsed
> + * @complq: completion queue containing the descriptor
> + * @txq: returns corresponding Tx queue for a given descriptor
> + * @gen_flag: current generation flag in the completion queue
> + *
> + * Return: completion type from descriptor or negative value in case of error:
> + * -ENODATA if there is no completion descriptor to be cleaned,
> + * -EINVAL if no Tx queue has been found for the completion queue.
> + */
> +static int
> +idpf_parse_compl_desc(const struct idpf_splitq_4b_tx_compl_desc *desc,
> + const struct idpf_compl_queue *complq,
> + struct idpf_tx_queue **txq, bool gen_flag)
> +{
> + struct idpf_tx_queue *target;
> + u32 rel_tx_qid, comptype;
> +
> + /* if the descriptor isn't done, no work yet to do */
> + comptype = le16_to_cpu(desc->qid_comptype_gen);
> + if (!!(comptype & IDPF_TXD_COMPLQ_GEN_M) != gen_flag)
> + return -ENODATA;
> +
> + /* Find necessary info of TX queue to clean buffers */
> + rel_tx_qid = FIELD_GET(IDPF_TXD_COMPLQ_QID_M, comptype);
> + target = likely(rel_tx_qid < complq->txq_grp->num_txq) ?
> + complq->txq_grp->txqs[rel_tx_qid] : NULL;
> +
> + if (!target)
> + return -EINVAL;
> +
> + *txq = target;
> +
> + /* Determine completion type */
> + return FIELD_GET(IDPF_TXD_COMPLQ_COMPL_TYPE_M, comptype);
> +}
> +
> +/**
> + * idpf_tx_handle_rs_cmpl_qb - clean a single packet and all of its buffers
> + * whether the Tx queue is working in queue-based scheduling
> * @txq: Tx ring to clean
> * @desc: pointer to completion queue descriptor to extract completion
> * information from
> @@ -1931,21 +1973,33 @@ static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag,
> *
> * Returns bytes/packets cleaned
> */
> -static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
> - struct idpf_splitq_tx_compl_desc *desc,
> - struct libeth_sq_napi_stats *cleaned,
> - int budget)
> +static void
> +idpf_tx_handle_rs_cmpl_qb(struct idpf_tx_queue *txq,
> + const struct idpf_splitq_4b_tx_compl_desc *desc,
> + struct libeth_sq_napi_stats *cleaned, int budget)
> {
> - u16 compl_tag;
> + u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head);
>
> - if (!idpf_queue_has(FLOW_SCH_EN, txq)) {
> - u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head);
> -
> - idpf_tx_splitq_clean(txq, head, budget, cleaned, false);
> - return;
> - }
> + idpf_tx_splitq_clean(txq, head, budget, cleaned, false);
> +}
>
> - compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag);
> +/**
> + * idpf_tx_handle_rs_cmpl_fb - clean a single packet and all of its buffers
> + * whether on the buffer ring or in the hash table (flow-based scheduling only)
> + * @txq: Tx ring to clean
> + * @desc: pointer to completion queue descriptor to extract completion
> + * information from
> + * @cleaned: pointer to stats struct to track cleaned packets/bytes
> + * @budget: Used to determine if we are in netpoll
> + *
> + * Returns bytes/packets cleaned
> + */
> +static void
> +idpf_tx_handle_rs_cmpl_fb(struct idpf_tx_queue *txq,
> + const struct idpf_splitq_4b_tx_compl_desc *desc,
> + struct libeth_sq_napi_stats *cleaned, int budget)
> +{
> + u16 compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag);
>
> /* If we didn't clean anything on the ring, this packet must be
> * in the hash table. Go clean it there.
> @@ -1954,6 +2008,61 @@ static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
> idpf_tx_clean_stashed_bufs(txq, compl_tag, cleaned, budget);
> }
>
> +/**
> + * idpf_tx_finalize_complq - Finalize completion queue cleaning
> + * @complq: completion queue to finalize
> + * @ntc: next to complete index
> + * @gen_flag: current state of generation flag
> + * @cleaned: returns number of packets cleaned
> + */
> +static void idpf_tx_finalize_complq(struct idpf_compl_queue *complq, int ntc,
> + bool gen_flag, int *cleaned)
> +{
> + struct idpf_netdev_priv *np;
> + bool complq_ok = true;
> + int i;
> +
> + /* Store the state of the complq to be used later in deciding if a
> + * TXQ can be started again
> + */
> + if (unlikely(IDPF_TX_COMPLQ_PENDING(complq->txq_grp) >
> + IDPF_TX_COMPLQ_OVERFLOW_THRESH(complq)))
> + complq_ok = false;
> +
> + np = netdev_priv(complq->netdev);
> + for (i = 0; i < complq->txq_grp->num_txq; ++i) {
All of your new code tends to scope the iterators within the loop; it
would be good to stay consistent here as well.
Also, a local

	struct idpf_txq_group *txq_grp = complq->txq_grp;

would be handy in this function.
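Something like this, as an untested sketch on top of your version (the
logic is yours, only the scoping and the txq_grp alias change):

	static void idpf_tx_finalize_complq(struct idpf_compl_queue *complq, int ntc,
					    bool gen_flag, int *cleaned)
	{
		struct idpf_txq_group *txq_grp = complq->txq_grp;
		struct idpf_netdev_priv *np = netdev_priv(complq->netdev);
		bool complq_ok = true;

		/* Store the state of the complq to be used later in deciding
		 * if a TXQ can be started again
		 */
		if (unlikely(IDPF_TX_COMPLQ_PENDING(txq_grp) >
			     IDPF_TX_COMPLQ_OVERFLOW_THRESH(complq)))
			complq_ok = false;

		for (u32 i = 0; i < txq_grp->num_txq; i++) {
			struct idpf_tx_queue *tx_q = txq_grp->txqs[i];

			/* ... BQL update and wake logic as in your patch ... */
		}

		complq->next_to_clean = ntc + complq->desc_count;
		idpf_queue_assign(GEN_CHK, complq, gen_flag);
	}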
> + struct idpf_tx_queue *tx_q = complq->txq_grp->txqs[i];
> + struct netdev_queue *nq;
> + bool dont_wake;
> +
> + /* We didn't clean anything on this queue, move along */
> + if (!tx_q->cleaned_bytes)
> + continue;
> +
> + *cleaned += tx_q->cleaned_pkts;
> +
> + /* Update BQL */
> + nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
> +
> + dont_wake = !complq_ok || IDPF_TX_BUF_RSV_LOW(tx_q) ||
> + np->state != __IDPF_VPORT_UP ||
> + !netif_carrier_ok(tx_q->netdev);
> + /* Check if the TXQ needs to and can be restarted */
> + __netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes,
> + IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH,
> + dont_wake);
> +
> + /* Reset cleaned stats for the next time this queue is
> + * cleaned
> + */
> + tx_q->cleaned_bytes = 0;
> + tx_q->cleaned_pkts = 0;
> + }
> +
> + complq->next_to_clean = ntc + complq->desc_count;
Don't you have to handle the ntc >= desc_count case here?
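I.e. if this helper can ever be called with a non-negative ntc, I'd
expect a wrap along these lines (untested, just to illustrate what I
mean):

	ntc += complq->desc_count;
	if (unlikely(ntc >= complq->desc_count))
		ntc -= complq->desc_count;
	complq->next_to_clean = ntc;

If the caller guarantees ntc is always rebased to negative before
getting here, a comment stating that would also do.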
> + idpf_queue_assign(GEN_CHK, complq, gen_flag);
> +}
> +
> /**
> * idpf_tx_clean_complq - Reclaim resources on completion queue
> * @complq: Tx ring to clean
> @@ -1965,60 +2074,56 @@ static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
> static bool idpf_tx_clean_complq(struct idpf_compl_queue *complq, int budget,
> int *cleaned)
> {
> - struct idpf_splitq_tx_compl_desc *tx_desc;
> + struct idpf_splitq_4b_tx_compl_desc *tx_desc;
> s16 ntc = complq->next_to_clean;
> - struct idpf_netdev_priv *np;
> unsigned int complq_budget;
> - bool complq_ok = true;
> - int i;
> + bool flow, gen_flag;
> + u32 pos = ntc;
> +
> + flow = idpf_queue_has(FLOW_SCH_EN, complq);
> + gen_flag = idpf_queue_has(GEN_CHK, complq);
>
> complq_budget = complq->clean_budget;
> - tx_desc = &complq->comp[ntc];
> + tx_desc = flow ? &complq->comp[pos].common : &complq->comp_4b[pos];
> ntc -= complq->desc_count;
>
> do {
> struct libeth_sq_napi_stats cleaned_stats = { };
> struct idpf_tx_queue *tx_q;
> - int rel_tx_qid;
> u16 hw_head;
> - u8 ctype; /* completion type */
> - u16 gen;
> -
> - /* if the descriptor isn't done, no work yet to do */
> - gen = le16_get_bits(tx_desc->qid_comptype_gen,
> - IDPF_TXD_COMPLQ_GEN_M);
> - if (idpf_queue_has(GEN_CHK, complq) != gen)
> - break;
> -
> - /* Find necessary info of TX queue to clean buffers */
> - rel_tx_qid = le16_get_bits(tx_desc->qid_comptype_gen,
> - IDPF_TXD_COMPLQ_QID_M);
> - if (rel_tx_qid >= complq->txq_grp->num_txq ||
> - !complq->txq_grp->txqs[rel_tx_qid]) {
> - netdev_err(complq->netdev, "TxQ not found\n");
> - goto fetch_next_desc;
> - }
> - tx_q = complq->txq_grp->txqs[rel_tx_qid];
> + int ctype;
>
> - /* Determine completion type */
> - ctype = le16_get_bits(tx_desc->qid_comptype_gen,
> - IDPF_TXD_COMPLQ_COMPL_TYPE_M);
> + ctype = idpf_parse_compl_desc(tx_desc, complq, &tx_q,
> + gen_flag);
> switch (ctype) {
> case IDPF_TXD_COMPLT_RE:
> + if (unlikely(!flow))
> + goto fetch_next_desc;
> +
> hw_head = le16_to_cpu(tx_desc->q_head_compl_tag.q_head);
>
> idpf_tx_splitq_clean(tx_q, hw_head, budget,
> &cleaned_stats, true);
> break;
> case IDPF_TXD_COMPLT_RS:
> - idpf_tx_handle_rs_completion(tx_q, tx_desc,
> - &cleaned_stats, budget);
> + if (flow)
> + idpf_tx_handle_rs_cmpl_fb(tx_q, tx_desc,
> + &cleaned_stats,
> + budget);
> + else
> + idpf_tx_handle_rs_cmpl_qb(tx_q, tx_desc,
I'd rather have 'queue' and 'flow' spelled out in these function names;
they differ by a single character and take the same arguments, so it's
an eye exercise to follow which one is called where. Nothing better
comes to my mind right now, though.
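For example (pure bikeshedding, feel free to ignore):

	idpf_tx_handle_rs_cmpl_queue_based()
	idpf_tx_handle_rs_cmpl_flow_based()

or, if those are too long, a one-line comment above each definition
spelling out "qb == queue-based scheduling" / "fb == flow-based
scheduling" would already help.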
> + &cleaned_stats,
> + budget);
> break;
> case IDPF_TXD_COMPLT_SW_MARKER:
> idpf_tx_handle_sw_marker(tx_q);
> break;
> + case -ENODATA:
> + goto exit_clean_complq;
> + case -EINVAL:
> + goto fetch_next_desc;
> default:
> - netdev_err(tx_q->netdev,
> + netdev_err(complq->netdev,
> "Unknown TX completion type: %d\n", ctype);
> goto fetch_next_desc;
> }
> @@ -2032,59 +2137,24 @@ static bool idpf_tx_clean_complq(struct idpf_compl_queue *complq, int budget,
> u64_stats_update_end(&tx_q->stats_sync);
>
> fetch_next_desc:
> - tx_desc++;
> + pos++;
> ntc++;
> if (unlikely(!ntc)) {
> ntc -= complq->desc_count;
> - tx_desc = &complq->comp[0];
> - idpf_queue_change(GEN_CHK, complq);
> + pos = 0;
> + gen_flag = !gen_flag;
> }
>
> + tx_desc = flow ? &complq->comp[pos].common :
> + &complq->comp_4b[pos];
> prefetch(tx_desc);
>
> /* update budget accounting */
> complq_budget--;
> } while (likely(complq_budget));
>
> - /* Store the state of the complq to be used later in deciding if a
> - * TXQ can be started again
> - */
> - if (unlikely(IDPF_TX_COMPLQ_PENDING(complq->txq_grp) >
> - IDPF_TX_COMPLQ_OVERFLOW_THRESH(complq)))
> - complq_ok = false;
> -
> - np = netdev_priv(complq->netdev);
> - for (i = 0; i < complq->txq_grp->num_txq; ++i) {
> - struct idpf_tx_queue *tx_q = complq->txq_grp->txqs[i];
> - struct netdev_queue *nq;
> - bool dont_wake;
> -
> - /* We didn't clean anything on this queue, move along */
> - if (!tx_q->cleaned_bytes)
> - continue;
> -
> - *cleaned += tx_q->cleaned_pkts;
> -
> - /* Update BQL */
> - nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
> -
> - dont_wake = !complq_ok || IDPF_TX_BUF_RSV_LOW(tx_q) ||
> - np->state != __IDPF_VPORT_UP ||
> - !netif_carrier_ok(tx_q->netdev);
> - /* Check if the TXQ needs to and can be restarted */
> - __netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes,
> - IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH,
> - dont_wake);
> -
> - /* Reset cleaned stats for the next time this queue is
> - * cleaned
> - */
> - tx_q->cleaned_bytes = 0;
> - tx_q->cleaned_pkts = 0;
> - }
> -
> - ntc += complq->desc_count;
> - complq->next_to_clean = ntc;
> +exit_clean_complq:
> + idpf_tx_finalize_complq(complq, ntc, gen_flag, cleaned);
>
> return !!complq_budget;
> }
> --
> 2.48.1
>