From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Subject: [PATCH v18 08/23] net/pcap: replace stack bounce buffer
Date: Sat, 28 Feb 2026 18:05:41 -0800
Message-ID: <20260301020726.852401-9-stephen@networkplumber.org>
In-Reply-To: <20260301020726.852401-1-stephen@networkplumber.org>
References: <20260106182823.192350-1-stephen@networkplumber.org>
 <20260301020726.852401-1-stephen@networkplumber.org>

Replace the 64K stack-allocated bounce buffer with a per-queue buffer
allocated from hugepages via rte_malloc at queue setup. This is
necessary because the transmit path may run in a secondary process,
which must use the buffer that the primary process allocated.
Signed-off-by: Stephen Hemminger
---
 drivers/net/pcap/pcap_ethdev.c | 60 +++++++++++++++++++++-------------
 1 file changed, 37 insertions(+), 23 deletions(-)

diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index fedf461be4..72a297d423 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -91,6 +91,9 @@ struct pcap_tx_queue {
 	struct queue_stat tx_stat;
 	char name[PATH_MAX];
 	char type[ETH_PCAP_ARG_MAXLEN];
+
+	/* Temp buffer used for non-linear packets */
+	uint8_t *bounce_buf;
 };
 
 struct pmd_internals {
@@ -385,18 +388,17 @@ static uint16_t
 eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 {
 	unsigned int i;
-	struct rte_mbuf *mbuf;
 	struct pmd_process_private *pp;
 	struct pcap_tx_queue *dumper_q = queue;
 	uint16_t num_tx = 0;
 	uint32_t tx_bytes = 0;
 	struct pcap_pkthdr header;
 	pcap_dumper_t *dumper;
-	unsigned char temp_data[RTE_ETH_PCAP_SNAPLEN];
-	size_t len, caplen;
+	unsigned char *temp_data;
 
 	pp = rte_eth_devices[dumper_q->port_id].process_private;
 	dumper = pp->tx_dumper[dumper_q->queue_id];
+	temp_data = dumper_q->bounce_buf;
 
 	if (dumper == NULL || nb_pkts == 0)
 		return 0;
@@ -404,12 +406,11 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	/* writes the nb_pkts packets to the previously opened pcap file
 	 * dumper */
 	for (i = 0; i < nb_pkts; i++) {
-		mbuf = bufs[i];
-		len = caplen = rte_pktmbuf_pkt_len(mbuf);
-		if (unlikely(!rte_pktmbuf_is_contiguous(mbuf) &&
-				len > sizeof(temp_data))) {
-			caplen = sizeof(temp_data);
-		}
+		struct rte_mbuf *mbuf = bufs[i];
+		size_t len, caplen;
+
+		len = rte_pktmbuf_pkt_len(mbuf);
+		caplen = RTE_MIN(len, RTE_ETH_PCAP_SNAPSHOT_LEN);
 
 		calculate_timestamp(&header.ts);
 		header.len = len;
@@ -449,9 +450,6 @@ eth_tx_drop(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint32_t tx_bytes = 0;
 	struct pcap_tx_queue *tx_queue = queue;
 
-	if (unlikely(nb_pkts == 0))
-		return 0;
-
 	for (i = 0; i < nb_pkts; i++) {
 		tx_bytes += bufs[i]->pkt_len;
 		rte_pktmbuf_free(bufs[i]);
@@ -460,7 +458,7 @@ eth_tx_drop(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	tx_queue->tx_stat.pkts += nb_pkts;
 	tx_queue->tx_stat.bytes += tx_bytes;
 
-	return i;
+	return nb_pkts;
 }
 
 /*
@@ -470,30 +468,30 @@ static uint16_t
 eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 {
 	unsigned int i;
-	int ret;
-	struct rte_mbuf *mbuf;
 	struct pmd_process_private *pp;
 	struct pcap_tx_queue *tx_queue = queue;
 	uint16_t num_tx = 0;
 	uint32_t tx_bytes = 0;
 	pcap_t *pcap;
-	unsigned char temp_data[RTE_ETH_PCAP_SNAPLEN];
-	size_t len;
+	unsigned char *temp_data;
 
 	pp = rte_eth_devices[tx_queue->port_id].process_private;
 	pcap = pp->tx_pcap[tx_queue->queue_id];
+	temp_data = tx_queue->bounce_buf;
 
 	if (unlikely(nb_pkts == 0 || pcap == NULL))
 		return 0;
 
 	for (i = 0; i < nb_pkts; i++) {
-		mbuf = bufs[i];
-		len = rte_pktmbuf_pkt_len(mbuf);
+		struct rte_mbuf *mbuf = bufs[i];
+		size_t len = rte_pktmbuf_pkt_len(mbuf);
+		int ret;
+
 		if (unlikely(!rte_pktmbuf_is_contiguous(mbuf) &&
-				len > sizeof(temp_data))) {
+				len > RTE_ETH_PCAP_SNAPSHOT_LEN)) {
 			PMD_LOG(ERR,
-				"Dropping multi segment PCAP packet. Size (%zd) > max size (%zd).",
-				len, sizeof(temp_data));
+				"Dropping multi segment PCAP packet. Size (%zd) > max size (%u).",
+				len, RTE_ETH_PCAP_SNAPSHOT_LEN);
 			rte_pktmbuf_free(mbuf);
 			continue;
 		}
@@ -966,7 +964,7 @@ static int
 eth_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t tx_queue_id,
 		uint16_t nb_tx_desc __rte_unused,
-		unsigned int socket_id __rte_unused,
+		unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf __rte_unused)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
@@ -974,11 +972,26 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
 	pcap_q->port_id = dev->data->port_id;
 	pcap_q->queue_id = tx_queue_id;
 
+	pcap_q->bounce_buf = rte_malloc_socket(NULL, RTE_ETH_PCAP_SNAPSHOT_LEN,
+			RTE_CACHE_LINE_SIZE, socket_id);
+	if (pcap_q->bounce_buf == NULL)
+		return -ENOMEM;
+
 	dev->data->tx_queues[tx_queue_id] = pcap_q;
 
 	return 0;
 }
 
+static void
+eth_tx_queue_release(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct pmd_internals *internals = dev->data->dev_private;
+	struct pcap_tx_queue *pcap_q = &internals->tx_queue[tx_queue_id];
+
+	rte_free(pcap_q->bounce_buf);
+	pcap_q->bounce_buf = NULL;
+}
+
 static int
 eth_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -1019,6 +1032,7 @@ static const struct eth_dev_ops ops = {
 	.dev_infos_get = eth_dev_info,
 	.rx_queue_setup = eth_rx_queue_setup,
 	.tx_queue_setup = eth_tx_queue_setup,
+	.tx_queue_release = eth_tx_queue_release,
 	.rx_queue_start = eth_rx_queue_start,
 	.tx_queue_start = eth_tx_queue_start,
 	.rx_queue_stop = eth_rx_queue_stop,
-- 
2.51.0