From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ophir Munk
Subject: [RFC 1/2] net/tap: calculate checksum for multi segs packets
Date: Fri, 9 Mar 2018 21:10:25 +0000
Message-ID: <1520629826-23055-2-git-send-email-ophirmu@mellanox.com>
References: <1520629826-23055-1-git-send-email-ophirmu@mellanox.com>
In-Reply-To: <1520629826-23055-1-git-send-email-ophirmu@mellanox.com>
Mime-Version: 1.0
Content-Type: text/plain
To: dev@dpdk.org, Pascal Mazon
Cc: Thomas Monjalon, Olga Shern, Ophir Munk
List-Id: DPDK patches and discussions

In the previous TAP implementation, checksum offload calculations
(for IP/UDP/TCP) were skipped for multi-segment packets. This commit
improves TAP functionality by enabling checksum calculation in the
multi-segment case as well. The only remaining restriction is that the
first segment must contain the complete layer 2, 3 and 4 headers
(where the layer 4 header size is taken as that of TCP).
Signed-off-by: Ophir Munk
---
 drivers/net/tap/rte_eth_tap.c | 42 ++++++++++++++++++++++++++++++++----------
 1 file changed, 32 insertions(+), 10 deletions(-)

diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index f09db0e..f312084 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -496,6 +496,9 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		char m_copy[mbuf->data_len];
 		int n;
 		int j;
+		int k; /* first index in iovecs for copying segments */
+		uint16_t l234_len; /* length of layers 2, 3, 4 headers */
+		uint16_t seg_len; /* length of first segment */

 		/* stats.errs will be incremented */
 		if (rte_pktmbuf_pkt_len(mbuf) > max_size)
@@ -503,25 +506,44 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		iovecs[0].iov_base = &pi;
 		iovecs[0].iov_len = sizeof(pi);
-		for (j = 1; j <= mbuf->nb_segs; j++) {
-			iovecs[j].iov_len = rte_pktmbuf_data_len(seg);
-			iovecs[j].iov_base =
-				rte_pktmbuf_mtod(seg, void *);
-			seg = seg->next;
-		}
+		k = 1;
 		if (txq->csum &&
 		    ((mbuf->ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_IPV4) ||
 		     (mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM ||
 		     (mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM))) {
-			/* Support only packets with all data in the same seg */
-			if (mbuf->nb_segs > 1)
+			/* Only support packets with at least layer 4
+			 * header included in the first segment
+			 */
+			seg_len = rte_pktmbuf_data_len(mbuf);
+			l234_len = mbuf->l2_len + mbuf->l3_len +
+				sizeof(struct tcp_hdr);
+			if (seg_len < l234_len)
 				break;
-			/* To change checksums, work on a copy of data. */
+
+			/* To change checksums, work on a
+			 * copy of l2, l3, l4 headers.
+			 */
 			rte_memcpy(m_copy, rte_pktmbuf_mtod(mbuf, void *),
-					rte_pktmbuf_data_len(mbuf));
+					l234_len);
 			tap_tx_offload(m_copy, mbuf->ol_flags,
 					mbuf->l2_len, mbuf->l3_len);
 			iovecs[1].iov_base = m_copy;
+			iovecs[1].iov_len = l234_len;
+			k++;
+			/* Adjust data pointer beyond l2, l3, l4 headers.
+			 * If this segment becomes empty - skip it
+			 */
+			if (seg_len > l234_len) {
+				rte_pktmbuf_adj(mbuf, l234_len);
+			} else {
+				seg = seg->next;
+				mbuf->nb_segs--;
+			}
+		}
+		for (j = k; j <= mbuf->nb_segs; j++) {
+			iovecs[j].iov_len = rte_pktmbuf_data_len(seg);
+			iovecs[j].iov_base = rte_pktmbuf_mtod(seg, void *);
+			seg = seg->next;
 		}
 		/* copy the tx frame data */
 		n = writev(txq->fd, iovecs, mbuf->nb_segs + 1);
-- 
2.7.4