From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, Bruce Richardson
Subject: [PATCH v22 21/24] net/pcap: add Rx scatter offload
Date: Tue, 14 Apr 2026 09:08:12 -0700
Message-ID: <20260414161011.756101-22-stephen@networkplumber.org>
In-Reply-To: <20260414161011.756101-1-stephen@networkplumber.org>
References: <20260106182823.192350-1-stephen@networkplumber.org> <20260414161011.756101-1-stephen@networkplumber.org>
List-Id: DPDK patches and discussions <dev@dpdk.org>
Add RTE_ETH_RX_OFFLOAD_SCATTER to the advertised receive offload
capabilities if not using infinite_rx mode. Validate in rx_queue_setup
that the mbuf pool data room is large enough when scatter is not
enabled, following the same pattern as the virtio driver. Gate the
multi-segment receive path on the scatter offload flag and drop
oversized packets when scatter is disabled. Reject scatter with
infinite_rx mode since the ring-based replay path does not support
multi-segment mbufs.

Signed-off-by: Stephen Hemminger
Acked-by: Bruce Richardson
---
 drivers/net/pcap/pcap_ethdev.c | 47 ++++++++++++++++++++++++++++++++--
 1 file changed, 45 insertions(+), 2 deletions(-)

diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index 602b898a72..c2dec9167d 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -79,6 +79,7 @@ struct pcap_rx_queue {
 	uint16_t port_id;
 	uint16_t queue_id;
 	bool vlan_strip;
+	bool rx_scatter;
 	bool timestamp_offloading;
 	struct rte_mempool *mb_pool;
 	struct queue_stat rx_stat;
@@ -112,6 +113,7 @@ struct pmd_internals {
 	bool phy_mac;
 	bool infinite_rx;
 	bool vlan_strip;
+	bool rx_scatter;
 	bool timestamp_offloading;
 };

@@ -342,14 +344,19 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			/* pcap packet will fit in the mbuf, can copy it */
 			rte_memcpy(rte_pktmbuf_mtod(mbuf, void *), packet, len);
 			mbuf->data_len = len;
-		} else {
-			/* Try read jumbo frame into multi mbufs. */
+		} else if (pcap_q->rx_scatter) {
+			/* Scatter into multi-segment mbufs. */
 			if (unlikely(eth_pcap_rx_jumbo(pcap_q->mb_pool,
 						       mbuf, packet, len) == -1)) {
 				pcap_q->rx_stat.err_pkts++;
 				rte_pktmbuf_free(mbuf);
 				break;
 			}
+		} else {
+			/* Packet too large and scatter not enabled, drop it. */
+			pcap_q->rx_stat.err_pkts++;
+			rte_pktmbuf_free(mbuf);
+			continue;
 		}

 		mbuf->pkt_len = len;
@@ -926,6 +933,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
 	const struct rte_eth_rxmode *rxmode = &dev_conf->rxmode;

 	internals->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
+	internals->rx_scatter = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER);

 	internals->timestamp_offloading = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
 	if (internals->timestamp_offloading && timestamp_rx_dynflag == 0) {
@@ -958,6 +966,9 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
 				    RTE_ETH_RX_OFFLOAD_TIMESTAMP;

+	if (!internals->infinite_rx)
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
+
 	return 0;
 }

@@ -1117,11 +1128,37 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
 {
 	struct pmd_internals *internals = dev->data->dev_private;
 	struct pcap_rx_queue *pcap_q = &internals->rx_queue[rx_queue_id];
+	uint16_t buf_size;
+	bool rx_scatter;
+
+	buf_size = rte_pktmbuf_data_room_size(mb_pool) - RTE_PKTMBUF_HEADROOM;
+	rx_scatter = !!(dev->data->dev_conf.rxmode.offloads &
+			RTE_ETH_RX_OFFLOAD_SCATTER);
+
+	/*
+	 * If Rx scatter is not enabled, verify that the mbuf data room
+	 * can hold the largest received packet in a single segment.
+	 * Use the MTU-derived frame size as the expected maximum, not
+	 * snapshot_len which is a capture truncation limit rather than
+	 * an expected packet size.
+	 */
+	if (!rx_scatter) {
+		uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
+
+		if (max_rx_pktlen > buf_size) {
+			PMD_LOG(ERR,
+				"Rx scatter is disabled and RxQ mbuf pool object size is too small "
+				"(buf_size=%u, max_rx_pkt_len=%u)",
+				buf_size, max_rx_pktlen);
+			return -EINVAL;
+		}
+	}

 	pcap_q->mb_pool = mb_pool;
 	pcap_q->port_id = dev->data->port_id;
 	pcap_q->queue_id = rx_queue_id;
 	pcap_q->vlan_strip = internals->vlan_strip;
+	pcap_q->rx_scatter = rx_scatter;
 	dev->data->rx_queues[rx_queue_id] = pcap_q;
 	pcap_q->timestamp_offloading = internals->timestamp_offloading;

@@ -1134,6 +1171,12 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
 		pcap_t **pcap;
 		bool save_vlan_strip;

+		if (rx_scatter) {
+			PMD_LOG(ERR,
+				"Rx scatter is not supported with infinite_rx mode");
+			return -EINVAL;
+		}
+
 		pp = rte_eth_devices[pcap_q->port_id].process_private;
 		pcap = &pp->rx_pcap[pcap_q->queue_id];
-- 
2.53.0