Intel-Wired-Lan Archive on lore.kernel.org
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: intel-wired-lan@osuosl.org
Subject: [Intel-wired-lan] [PATCH net-next 06/19] iecm: add virtchnl messages for queues
Date: Thu, 3 Feb 2022 11:08:42 +0100	[thread overview]
Message-ID: <Yfupqv3/adFjuI3G@boxer> (raw)
In-Reply-To: <CO1PR11MB518643B8F1CBFA248A9E45768F279@CO1PR11MB5186.namprd11.prod.outlook.com>

On Wed, Feb 02, 2022 at 10:48:48PM +0000, Brady, Alan wrote:
> > -----Original Message-----
> > From: Lobakin, Alexandr <alexandr.lobakin@intel.com>
> > Sent: Friday, January 28, 2022 5:03 AM
> > To: Brady, Alan <alan.brady@intel.com>
> > Cc: Lobakin, Alexandr <alexandr.lobakin@intel.com>;
> > intel-wired-lan@lists.osuosl.org; Linga, Pavan Kumar <pavan.kumar.linga@intel.com>;
> > Chittim, Madhu <madhu.chittim@intel.com>; Burra, Phani R
> > <phani.r.burra@intel.com>
> > Subject: Re: [Intel-wired-lan] [PATCH net-next 06/19] iecm: add virtchnl
> > messages for queues
> > 
> > From: Alan Brady <alan.brady@intel.com>
> > Date: Thu, 27 Jan 2022 16:09:56 -0800
> > 
> > > This continues adding virtchnl messages. This largely relates to adding
> > > messages needed to negotiate and setup traffic queues.
> > >
> > > Signed-off-by: Phani Burra <phani.r.burra@intel.com>
> > > Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> > > Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
> > > Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
> > > Signed-off-by: Alice Michael <alice.michael@intel.com>
> > > Signed-off-by: Alan Brady <alan.brady@intel.com>
> > > ---
> > >  drivers/net/ethernet/intel/iecm/iecm_lib.c    |   14 +
> > >  drivers/net/ethernet/intel/iecm/iecm_txrx.c   |  161 +++
> > >  .../net/ethernet/intel/iecm/iecm_virtchnl.c   | 1127 ++++++++++++++++-
> > >  drivers/net/ethernet/intel/include/iecm.h     |   22 +
> > >  .../net/ethernet/intel/include/iecm_txrx.h    |  196 +++
> > >  5 files changed, 1505 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c
> > > index e2e523f0700e..4e9cc7f2d138 100644
> > > --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c
> > > +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c
> > 
> > --- 8< ---

I think we could trim these responses a bit more, so they would be
easier for people to follow :)

> > 
> > > +void iecm_vport_calc_num_q_desc(struct iecm_vport *vport)
> > > +{
> > > +	int num_req_txq_desc = vport->adapter->config_data.num_req_txq_desc;
> > > +	int num_req_rxq_desc = vport->adapter->config_data.num_req_rxq_desc;
> > > +	int num_bufqs = vport->num_bufqs_per_qgrp;
> > > +	int i = 0;
> > > +
> > > +	vport->complq_desc_count = 0;
> > > +	if (num_req_txq_desc) {
> > > +		vport->txq_desc_count = num_req_txq_desc;
> > > +		if (iecm_is_queue_model_split(vport->txq_model)) {
> > > +			vport->complq_desc_count = num_req_txq_desc;
> > > +			if (vport->complq_desc_count < IECM_MIN_TXQ_COMPLQ_DESC)
> > > +				vport->complq_desc_count =
> > > +					IECM_MIN_TXQ_COMPLQ_DESC;
> > > +		}
> > > +	} else {
> > > +		vport->txq_desc_count =
> > > +			IECM_DFLT_TX_Q_DESC_COUNT;
> > > +		if (iecm_is_queue_model_split(vport->txq_model)) {
> > > +			vport->complq_desc_count =
> > > +				IECM_DFLT_TX_COMPLQ_DESC_COUNT;
> > > +		}
> > 
> > Braces are redundant here since the path is a one-liner.
> > 
> 
Correct me if I'm wrong, but I believe the guidance is that when a
statement wraps across multiple lines, braces are optional even though it
is logically one statement. We have generally preferred to keep braces on
multi-line statements. You do have a point that this function is not
consistent about it, though. Will fix.

That's the first time I've heard of something like that. Interesting that
checkpatch won't scream at you for it, but I'm with Alex on this one.

> 
> > > +	}
> > > +
> > > +	if (num_req_rxq_desc)
> > > +		vport->rxq_desc_count = num_req_rxq_desc;
> > > +	else
> > > +		vport->rxq_desc_count = IECM_DFLT_RX_Q_DESC_COUNT;
> > > +
> > > +	for (i = 0; i < num_bufqs; i++) {
> > > +		if (!vport->bufq_desc_count[i])
> > > +			vport->bufq_desc_count[i] =
> > > +				IECM_RX_BUFQ_DESC_COUNT(vport->rxq_desc_count,
> > > +							num_bufqs);
> > 
> > 		if (vport->bufq_desc_count[i])
> > 			continue;
> > 
> > 		vport-> ...
> > 
> > -1 indent level with that.
> > 
> > > +	}
> > > +}
> > > +

(...)

> > > +/**
> > > + * iecm_send_config_rx_queues_msg - Send virtchnl config rx queues
> > message
> > > + * @vport: virtual port data structure
> > > + *
> > > + * Send config rx queues virtchnl message.  Returns 0 on success, negative on
> > > + * failure.
> > > + */
> > > +int iecm_send_config_rx_queues_msg(struct iecm_vport *vport)
> > > +{
> > > +	struct virtchnl2_config_rx_queues *crq = NULL;
> > > +	int config_data_size, chunk_size, buf_size = 0;
> > > +	int totqs, num_msgs, num_chunks;
> > > +	struct virtchnl2_rxq_info *qi;
> > > +	int err = 0, i, k = 0;
> > > +	bool alloc = false;
> > > +
> > > +	totqs = vport->num_rxq + vport->num_bufq;
> > > +	qi = kcalloc(totqs, sizeof(struct virtchnl2_rxq_info), GFP_KERNEL);
> > > +	if (!qi)
> > > +		return -ENOMEM;
> > > +
> > > +	/* Populate the queue info buffer with all queue context info */
> > > +	for (i = 0; i < vport->num_rxq_grp; i++) {
> > > +		struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
> > > +		int num_rxq;
> > > +		int j;
> > > +
> > > +		if (iecm_is_queue_model_split(vport->rxq_model)) {
> > > +			for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) {
> > > +				struct iecm_queue *bufq =
> > > +					&rx_qgrp->splitq.bufq_sets[j].bufq;
> > > +
> > > +				qi[k].queue_id =
> > > +					cpu_to_le32(bufq->q_id);
> > > +				qi[k].model =
> > > +					cpu_to_le16(vport->rxq_model);
> > > +				qi[k].type =
> > > +					cpu_to_le32(bufq->q_type);
> > > +				qi[k].desc_ids =
> > > +					cpu_to_le64(VIRTCHNL2_RXDID_1_FLEX_SPLITQ_M);
> > > +				qi[k].ring_len =
> > > +					cpu_to_le16(bufq->desc_count);
> > > +				qi[k].dma_ring_addr =
> > > +					cpu_to_le64(bufq->dma);
> > > +				qi[k].data_buffer_size =
> > > +					cpu_to_le32(bufq->rx_buf_size);
> > > +				qi[k].buffer_notif_stride =
> > > +					bufq->rx_buf_stride;
> > > +				qi[k].rx_buffer_low_watermark =
> > > +					cpu_to_le16(bufq->rx_buffer_low_watermark);
> > > +			}
> > > +		}
> > 
> > 		if (iecm_is_queue_model_split(vport->rxq_model))
> > 			goto here;
> > 
> > -1 indent level for the for-loop.
> 
I'm afraid I'm not following, please elaborate. Where are we goto'ing?
The for loop below needs to be executed for both models, and tacking the
above for loop onto the bottom of the function, goto'ing in and out of it
just to save an indent, does not sound great and makes the code harder to
follow IMO.

Below or above? I think Alex meant to skip the for loop execution for the
non-splitq model. And by reducing the indent you could probably fit each
of the assignments on a single line.

> 
> > Braces for 'if' are not needed since the for-loop has their own.
> > 
> 
They're not required, but we have generally preferred to keep braces on
statements extending across more than one line.

Here I'm actually okay with keeping the braces: in the future you might
want to introduce something for the splitq case outside of the current
loop, and it could get tricky if you forget to supply the outer 'if' with
braces then. Just my $0.02.

> 
> > > +
> > > +		if (iecm_is_queue_model_split(vport->rxq_model))
> > > +			num_rxq = rx_qgrp->splitq.num_rxq_sets;
> > > +		else
> > > +			num_rxq = rx_qgrp->singleq.num_rxq;
> > > +
> > > +		for (j = 0; j < num_rxq; j++, k++) {
> > > +			struct iecm_queue *rxq;
> > > +
> > > +			if (iecm_is_queue_model_split(vport->rxq_model)) {
> > > +				rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq;
> > > +				qi[k].rx_bufq1_id =
> > > +				  cpu_to_le16(rxq->rxq_grp->splitq.bufq_sets[0].bufq.q_id);
> > > +				qi[k].rx_bufq2_id =
> > > +				  cpu_to_le16(rxq->rxq_grp->splitq.bufq_sets[1].bufq.q_id);
> > > +				qi[k].hdr_buffer_size =
> > > +					cpu_to_le16(rxq->rx_hbuf_size);
> > > +				qi[k].rx_buffer_low_watermark =
> > > +					cpu_to_le16(rxq->rx_buffer_low_watermark);
> > > +
> > > +				if (rxq->rx_hsplit_en) {
> > > +					qi[k].qflags =
> > > +						cpu_to_le16(VIRTCHNL2_RXQ_HDR_SPLIT);
> > > +					qi[k].hdr_buffer_size =
> > > +						cpu_to_le16(rxq->rx_hbuf_size);
> > > +				}
> > > +			} else {
> > > +				rxq = rx_qgrp->singleq.rxqs[j];
> > > +			}
> > 
> > Same here, but with rxq = ... + goto.
> > 
> 
> Please elaborate.

		if (!iecm_is_queue_model_split(vport->rxq_model)) {
			rxq = rx_qgrp->singleq.rxqs[j];
			goto skip_splitq_init;
		}
		rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq;
		qi[k].rx_bufq1_id = cpu_to_le16(rxq->rxq_grp->splitq.bufq_sets[0].bufq.q_id);
		qi[k].rx_bufq2_id = cpu_to_le16(rxq->rxq_grp->splitq.bufq_sets[1].bufq.q_id);
		qi[k].hdr_buffer_size = cpu_to_le16(rxq->rx_hbuf_size);
		qi[k].rx_buffer_low_watermark = cpu_to_le16(rxq->rx_buffer_low_watermark);

		if (rxq->rx_hsplit_en) {
			qi[k].qflags = cpu_to_le16(VIRTCHNL2_RXQ_HDR_SPLIT);
			qi[k].hdr_buffer_size = cpu_to_le16(rxq->rx_hbuf_size);
		}
skip_splitq_init:
		(...)

More readable to me.
What's 'k', though? Maybe we should think of better variable naming here?

> 
> > > +
> > > +			qi[k].queue_id =
> > > +				cpu_to_le32(rxq->q_id);
> > > +			qi[k].model =
> > > +				cpu_to_le16(vport->rxq_model);
> > > +			qi[k].type =
> > > +				cpu_to_le32(rxq->q_type);
> > > +			qi[k].ring_len =
> > > +				cpu_to_le16(rxq->desc_count);
> > > +			qi[k].dma_ring_addr =
> > > +				cpu_to_le64(rxq->dma);
> > > +			qi[k].max_pkt_size =
> > > +				cpu_to_le32(rxq->rx_max_pkt_size);
> > > +			qi[k].data_buffer_size =
> > > +				cpu_to_le32(rxq->rx_buf_size);
> > > +			qi[k].qflags |=
> > > +				cpu_to_le16(VIRTCHNL2_RX_DESC_SIZE_32BYTE);
> > > +			qi[k].desc_ids =
> > > +				cpu_to_le64(rxq->rxdids);
> > > +		}
> > > +	}
> > > +
> > > +	/* Make sure accounting agrees */
> > > +	if (k != totqs) {
> > > +		err = -EINVAL;
> > > +		goto error;
> > > +	}
> > > +
> > > +	/* Chunk up the queue contexts into multiple messages to avoid
> > > +	 * sending a control queue message buffer that is too large
> > > +	 */
> > > +	config_data_size = sizeof(struct virtchnl2_config_rx_queues);
> > > +	chunk_size = sizeof(struct virtchnl2_rxq_info);
> > > +
> > > +	num_chunks = IECM_NUM_CHUNKS_PER_MSG(config_data_size, chunk_size) + 1;
> > > +	if (totqs < num_chunks)
> > > +		num_chunks = totqs;
> > > +
> > > +	num_msgs = totqs / num_chunks;
> > > +	if (totqs % num_chunks)
> > > +		num_msgs++;
> > > +
> > > +	for (i = 0, k = 0; i < num_msgs; i++) {
> > > +		if (!crq || alloc) {
> > > +			buf_size = (chunk_size * (num_chunks - 1)) +
> > > +					config_data_size;
> > > +			kfree(crq);
> > > +			crq = kzalloc(buf_size, GFP_KERNEL);
> > > +			if (!crq) {
> > > +				err = -ENOMEM;
> > > +				goto error;
> > > +			}
> > > +		} else {
> > > +			memset(crq, 0, buf_size);
> > > +		}
> > > +
> > > +		crq->vport_id = cpu_to_le32(vport->vport_id);
> > > +		crq->num_qinfo = cpu_to_le16(num_chunks);
> > > +		memcpy(crq->qinfo, &qi[k], chunk_size * num_chunks);
> > > +
> > > +		err = iecm_send_mb_msg(vport->adapter,
> > > +				       VIRTCHNL2_OP_CONFIG_RX_QUEUES,
> > > +				       buf_size, (u8 *)crq);
> > > +		if (err)
> > > +			goto mbx_error;
> > > +
> > > +		err = iecm_wait_for_event(vport->adapter, IECM_VC_CONFIG_RXQ,
> > > +					  IECM_VC_CONFIG_RXQ_ERR);
> > > +		if (err)
> > > +			goto mbx_error;
> > > +
> > > +		k += num_chunks;
> > > +		totqs -= num_chunks;
> > > +		if (totqs < num_chunks) {
> > > +			num_chunks = totqs;
> > > +			alloc = true;
> > > +		}
> > > +	}
> > > +
> > > +mbx_error:
> > > +	kfree(crq);
> > > +error:
> > > +	kfree(qi);
> > > +	return err;
> > > +}
> > > +

(...)

> > > +/* queue associated with a vport */
> > > +struct iecm_queue {
> > > +	struct device *dev;		/* Used for DMA mapping */
> > > +	struct iecm_vport *vport;	/* Backreference to associated vport */
> > > +	union {
> > > +		struct iecm_txq_group *txq_grp;
> > > +		struct iecm_rxq_group *rxq_grp;
> > > +	};
> > > +	/* bufq: Used as group id, either 0 or 1, on clean Buf Q uses this
> > > +	 *       index to determine which group of refill queues to clean.
> > > +	 *       Bufqs are used in splitq only.
> > > +	 * txq: Index to map between Tx Q group and hot path Tx ptrs stored in
> > > +	 *      vport.  Used in both single Q/split Q
> > > +	 * rxq: Index to total rxq across groups, used for skb reporting
> > > +	 */
> > > +	u16 idx;
> > > +	/* Used for both Q models single and split. In split Q model relevant
> > > +	 * only to Tx Q and Rx Q
> > > +	 */
> > > +	u8 __iomem *tail;
> > > +	/* Used in both single and split Q.  In single Q, Tx Q uses tx_buf and
> > > +	 * Rx Q uses rx_buf.  In split Q, Tx Q uses tx_buf, Rx Q uses skb, and
> > > +	 * Buf Q uses rx_buf.
> > > +	 */
> > > +	union {
> > > +		struct iecm_tx_buf *tx_buf;
> > > +		struct {
> > > +			struct iecm_rx_buf *buf;
> > > +			struct iecm_dma_mem **hdr_buf;
> > > +		} rx_buf;
> > > +		struct sk_buff *skb;
> > > +	};
> > > +	u16 q_type;
> > > +	/* Queue id(Tx/Tx compl/Rx/Bufq) */
> > > +	u32 q_id;
> > > +	u16 desc_count;		/* Number of descriptors */
> > > +
> > > +	/* Relevant in both split & single Tx Q & Buf Q*/
> > > +	u16 next_to_use;
> > > +	/* In split q model only relevant for Tx Compl Q and Rx Q */
> > > +	u16 next_to_clean;	/* used in interrupt processing */
> > > +	/* Used only for Rx. In split Q model only relevant to Rx Q */
> > > +	u16 next_to_alloc;
> > > +	/* Generation bit check stored, as HW flips the bit at Queue end */
> > > +	DECLARE_BITMAP(flags, __IECM_Q_FLAGS_NBITS);
> > > +
> > > +	union iecm_queue_stats q_stats;
> > > +	struct u64_stats_sync stats_sync;
> > > +
> > > +	bool rx_hsplit_en;
> > > +
> > > +	u16 rx_hbuf_size;	/* Header buffer size */
> > > +	u16 rx_buf_size;
> > > +	u16 rx_max_pkt_size;
> > > +	u16 rx_buf_stride;
> > > +	u8 rx_buffer_low_watermark;
> > > +	u64 rxdids;
> > > +	/* Used for both Q models single and split. In split Q model relevant
> > > +	 * only to Tx compl Q and Rx compl Q
> > > +	 */
> > > +	struct iecm_q_vector *q_vector;	/* Backreference to associated vector */
> > > +	unsigned int size;		/* length of descriptor ring in bytes */
> > > +	dma_addr_t dma;			/* physical address of ring */
> > > +	void *desc_ring;		/* Descriptor ring memory */
> > > +
> > > +	u16 tx_buf_key;			/* 16 bit unique "identifier" (index)
> > > +					 * to be used as the completion tag when
> > > +					 * queue is using flow based scheduling
> > > +					 */
> > > +	u16 tx_max_bufs;		/* Max buffers that can be transmitted
> > > +					 * with scatter-gather
> > > +					 */
> > > +	DECLARE_HASHTABLE(sched_buf_hash, 12);
> > > +} ____cacheline_internodealigned_in_smp;
> > > +
> > > +/* Software queues are used in splitq mode to manage buffers between rxq
> > > + * producer and the bufq consumer.  These are required in order to maintain a
> > > + * lockless buffer management system and are strictly software only
> > constructs.
> > > + */
> > > +struct iecm_sw_queue {
> > > +	u16 next_to_clean ____cacheline_aligned_in_smp;
> > > +	u16 next_to_alloc ____cacheline_aligned_in_smp;
> > > +	u16 next_to_use ____cacheline_aligned_in_smp;
> > > +	DECLARE_BITMAP(flags, __IECM_Q_FLAGS_NBITS)
> > > +		____cacheline_aligned_in_smp;
> > > +	u16 *ring ____cacheline_aligned_in_smp;
> > 
> > This will result in this part being FIVE cachelines long for
> > 3 * 2 + 8 + 8 = 22 bytes, i.e. 320 bytes to store 22!
> > Just making the entire structure cacheline-aligned after its
> > declaration is enough; these per-field annotations are not merely
> > overkill, they're overslaughter.

Good catch!

> > 
> > > +	u16 desc_count;
> > > +	u16 buf_size;
> > > +	struct device *dev;
> > > +} ____cacheline_internodealigned_in_smp;
> > > +
> > > +/* Splitq only.  iecm_rxq_set associates an rxq with at an array of refillqs.
> > > + * Each rxq needs a refillq to return used buffers back to the respective bufq.
> > > + * Bufqs then clean these refillqs for buffers to give to hardware.
> > > + */
> > > +struct iecm_rxq_set {
> > > +	struct iecm_queue rxq;
> > > +	/* refillqs assoc with bufqX mapped to this rxq */
> > > +	struct iecm_sw_queue *refillq0;
> > > +	struct iecm_sw_queue *refillq1;
> > > +};
> > > +
> > > +/* Splitq only.  iecm_bufq_set associates a bufq to an array of refillqs.
> > > + * In this bufq_set, there will be one refillq for each rxq in this rxq_group.
> > > + * Used buffers received by rxqs will be put on refillqs which bufqs will
> > > + * clean to return new buffers back to hardware.
> > > + *
> > > + * Buffers needed by some number of rxqs associated in this rxq_group are
> > > + * managed by at most two bufqs (depending on performance configuration).
> > > + */
> > > +struct iecm_bufq_set {
> > > +	struct iecm_queue bufq;
> > > +	/* This is always equal to num_rxq_sets in iecm_rxq_group */
> > > +	int num_refillqs;
> > > +	struct iecm_sw_queue *refillqs;
> > > +};
> > > +
> > > +/* In singleq mode, an rxq_group is simply an array of rxqs.  In splitq, a
> > > + * rxq_group contains all the rxqs, bufqs and refillqs needed to
> > > + * manage buffers in splitq mode.
> > > + */
> > > +struct iecm_rxq_group {
> > > +	struct iecm_vport *vport; /* back pointer */
> > > +
> > > +	union {
> > > +		struct {
> > > +			int num_rxq;
> > > +			/* store queue pointers */
> > > +			struct iecm_queue *rxqs[IECM_LARGE_MAX_Q];
> > > +		} singleq;
> > > +		struct {
> > > +			int num_rxq_sets;
> > > +			/* store queue pointers */
> > > +			struct iecm_rxq_set *rxq_sets[IECM_LARGE_MAX_Q];
> > > +			struct iecm_bufq_set *bufq_sets;
> > > +		} splitq;
> > > +	};
> > > +};
> > > +
> > > +/* Between singleq and splitq, a txq_group is largely the same except for the
> > > + * complq.  In splitq a single complq is responsible for handling completions
> > > + * for some number of txqs associated in this txq_group.
> > > + */
> > > +struct iecm_txq_group {
> > > +	struct iecm_vport *vport; /* back pointer */
> > > +
> > > +	int num_txq;
> > > +	/* store queue pointers */
> > > +	struct iecm_queue *txqs[IECM_LARGE_MAX_Q];
> > > +
> > > +	/* splitq only */
> > > +	struct iecm_queue *complq;
> > > +};
> > > +
> > > +struct iecm_adapter;
> > > +
> > > +void iecm_vport_init_num_qs(struct iecm_vport *vport,
> > > +			    struct virtchnl2_create_vport *vport_msg);
> > > +void iecm_vport_calc_num_q_desc(struct iecm_vport *vport);
> > > +void iecm_vport_calc_total_qs(struct iecm_adapter *adapter,
> > > +			      struct virtchnl2_create_vport *vport_msg);
> > > +void iecm_vport_calc_num_q_groups(struct iecm_vport *vport);
> > > +void iecm_vport_calc_num_q_vec(struct iecm_vport *vport);
> > >  irqreturn_t
> > >  iecm_vport_intr_clean_queues(int __always_unused irq, void *data);
> > >  #endif /* !_IECM_TXRX_H_ */
> > > --
> > > 2.33.0
> > 
> > Thanks,
> > Al
> _______________________________________________
> Intel-wired-lan mailing list
> Intel-wired-lan@osuosl.org
> https://lists.osuosl.org/mailman/listinfo/intel-wired-lan
