From: Olivier Matz
Subject: [PATCH] igb: fix crash with offload on 82575 chipset
Date: Fri, 25 Mar 2016 11:32:00 +0100
Message-ID: <1458901920-21677-1-git-send-email-olivier.matz@6wind.com>
To: dev@dpdk.org
Cc: wenzhuo.lu@intel.com

On the 82575 chipset, there is a pool of global TX contexts instead of
2 per queue on the 82576. See Table A-1 "Changes in Programming Interface
Relative to 82575" of the Intel® 82576EB GbE Controller datasheet (*).

In the driver, the contexts are assigned to a TX queue: 0-1 for txq0,
2-3 for txq1, and so on.

In igbe_set_xmit_ctx(), the variable ctx_curr contains the index of the
per-queue context (0 or 1), and ctx_idx contains the index to be given
to the hardware (0 to 7). The size of txq->ctx_cache[] is 2, so it must
be indexed with ctx_curr to avoid an out-of-bounds access.

Also, the index returned by what_advctx_update() is the per-queue index
(0 or 1), so we need to add txq->ctx_start before sending it to the
hardware.

(*) The datasheet says 16 global contexts; however, the IDX fields in
TX descriptors are 3 bits wide, which gives a total of 8 contexts. The
driver assumes there are 8 contexts on the 82575: 2 per queue, 4 txqs.

Fixes: 4c8db5f09a ("igb: enable TSO support")
Fixes: af75078fec ("first public release")

Signed-off-by: Olivier Matz
---
 drivers/net/e1000/igb_rxtx.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index e527895..529dba4 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -325,9 +325,9 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	}
 
 	txq->ctx_cache[ctx_curr].flags = ol_flags;
-	txq->ctx_cache[ctx_idx].tx_offload.data =
+	txq->ctx_cache[ctx_curr].tx_offload.data =
 		tx_offload_mask.data & tx_offload.data;
-	txq->ctx_cache[ctx_idx].tx_offload_mask = tx_offload_mask;
+	txq->ctx_cache[ctx_curr].tx_offload_mask = tx_offload_mask;
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
 	vlan_macip_lens = (uint32_t)tx_offload.data;
@@ -450,7 +450,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			ctx = what_advctx_update(txq, tx_ol_req, tx_offload);
 			/* Only allocate context descriptor if required*/
 			new_ctx = (ctx == IGB_CTX_NUM);
-			ctx = txq->ctx_curr;
+			ctx = txq->ctx_curr + txq->ctx_start;
 			tx_last = (uint16_t) (tx_last + new_ctx);
 		}
 		if (tx_last >= txq->nb_tx_desc)
-- 
2.1.4
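
To make the indexing scheme above concrete, here is a minimal C sketch of the
relationship between the per-queue context index and the global hardware index
that the fix relies on. The structure and function names below are simplified
and hypothetical; only the field names ctx_cache, ctx_curr and ctx_start and
the constant IGB_CTX_NUM are taken from the commit message and diff above.

    #include <stdint.h>
    #include <stdio.h>

    #define IGB_CTX_NUM 2	/* per-queue contexts cached in software */

    /* Simplified, hypothetical stand-in for struct igb_tx_queue. */
    struct txq_sketch {
    	uint64_t ctx_cache[IGB_CTX_NUM];	/* only 2 entries per queue */
    	uint32_t ctx_curr;	/* per-queue context index: 0 or 1 */
    	uint32_t ctx_start;	/* first global context of this queue: 0, 2, 4 or 6 */
    };

    /* The software cache must be indexed with the per-queue index (0 or 1);
     * indexing it with the global index (0 to 7) overruns the 2-entry array,
     * which is the crash fixed in the first hunk. */
    static void cache_offload(struct txq_sketch *txq, uint64_t data)
    {
    	txq->ctx_cache[txq->ctx_curr] = data;
    }

    /* The hardware IDX field, however, takes the global index (0 to 7),
     * hence the addition of ctx_start in the second hunk. */
    static uint32_t hw_ctx_idx(const struct txq_sketch *txq)
    {
    	return txq->ctx_start + txq->ctx_curr;
    }

    int main(void)
    {
    	/* e.g. txq1 owns global contexts 2-3 */
    	struct txq_sketch txq = { .ctx_curr = 1, .ctx_start = 2 };

    	cache_offload(&txq, 0x42);
    	printf("cache slot %u, hardware IDX %u\n", txq.ctx_curr, hw_ctx_idx(&txq));
    	return 0;
    }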