From: Abdulrahman Alshawi <ashawi@wirefilter.com>
To: dev <dev@dpdk.org>
Cc: bharat <bharat@chelsio.com>, stable <stable@dpdk.org>
Date: Mon, 27 Apr 2026 19:42:52 +0300 (AST)
Message-ID: <48263833.15868423.1777308172969.JavaMail.zimbra@wirefilter.com>
In-Reply-To: <2136264989.15868282.1777307418083.JavaMail.zimbra@wirefilter.com>
References: <2136264989.15868282.1777307418083.JavaMail.zimbra@wirefilter.com>
Subject: [PATCH 2/2] net/cxgbe: restrict rte_flow rules to ingress port
Chelsio filters are programmed in adapter-wide LE/TCAM tables shared by
all ports. rte_flow rules, however, are created on a specific ethdev and
are expected to apply to traffic arriving on that port.

The PMD already supports ingress-port matching in the hardware filter
spec. The iport field is validated, used for hash-region selection when
tp.port_shift is available, and emitted in the firmware filter work
request. But the rte_flow parser never sets fs.val.iport/fs.mask.iport
for normal per-port rules.

As a result, a rule created on one port is installed as an adapter-wide
match and can steer traffic received on sibling ports of the same
adapter.

In practice this causes cross-port steering. For example, a rule like

  vlan 100 -> queue 3

created on port 0 can also match VLAN 100 traffic arriving on port 1 and
redirect it into port 0's queue 3.

Fix this by stamping the creating ethdev's physical ingress port into
the filter spec before filter placement is decided.

Only do this when the active filter mode includes the port field
(tp.port_shift >= 0). If port matching is not available in the current
filter mode, keep the existing adapter-wide behavior.
Reproduce (two ports of the same adapter bound to DPDK):

  dpdk-testpmd -l 1-9 -a 0000:18:00.4 -a 0000:18:00.5 \
      -- --rxq=4 --txq=4 --forward-mode=rxonly -i
  testpmd> flow create 0 ingress pattern eth \
           / vlan tci is 100 / end actions queue index 3 / end
  testpmd> start

Without this patch, VLAN 100 traffic received on port 1 can be steered
by the rule created on port 0. With the patch, the rule only matches
traffic arriving on port 0.

Signed-off-by: Abdulrahman Alshawi <ashawi@wirefilter.com>
---
 .mailmap                       |  1 +
 drivers/net/cxgbe/cxgbe_flow.c | 19 +++++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/.mailmap b/.mailmap
index 0e0d83e1c6..a6bcbd5756 100644
--- a/.mailmap
+++ b/.mailmap
@@ -4,6 +4,7 @@ Aaro Koskinen <aaro.koskinen@nsn.com>
 Aaron Campbell <aaron@arbor.net>
 Aaron Conole <aconole@redhat.com>
 Abdullah Ömer Yamaç <omer.yamac@ceng.metu.edu.tr> <aomeryamac@gmail.com>
+Abdulrahman Alshawi <ashawi@wirefilter.com>
 Abdullah Sevincer <abdullah.sevincer@intel.com>
 Abed Kamaluddin <akamaluddin@marvell.com>
 Abhijit Gangurde <abhijit.gangurde@amd.com>
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 14b9b49792..dd0634131e 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -172,6 +172,24 @@ cxgbe_fill_filter_region(struct adapter *adap,
 	fs->cap = 1; /* use hash region */
 }
 
+static void
+cxgbe_scope_flow_to_port(struct rte_flow *flow)
+{
+	struct adapter *adap = ethdev2adap(flow->dev);
+	struct port_info *pi = ethdev2pinfo(flow->dev);
+
+	/*
+	 * Chelsio filters are programmed in adapter-global tables. DPDK
+	 * ingress rte_flow rules are created on a specific ethdev, so include
+	 * the physical ingress port when the active filter mode supports it.
+	 */
+	if (adap->params.tp.port_shift < 0)
+		return;
+
+	flow->fs.val.iport = pi->port_id;
+	flow->fs.mask.iport = (1U << IPORT_BITWIDTH) - 1;
+}
+
 static int
 ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 		     struct ch_filter_specification *fs,
@@ -986,6 +1004,7 @@ cxgbe_rtef_parse_items(struct rte_flow *flow,
 	}
 
 	cxgbe_tweak_filter_spec(adap, &flow->fs);
+	cxgbe_scope_flow_to_port(flow);
 	cxgbe_fill_filter_region(adap, &flow->fs);
 
 	return 0;
-- 
2.39.5

-----Original Message-----
From: Abdulrahman <ashawi@wirefilter.com>
To: dev <dev@dpdk.org>
Cc: bharat <bharat@chelsio.com>; stable <stable@dpdk.org>
Date: Monday, 27 April 2026 7:30 PM +03
Subject: [PATCH 0/2] net/cxgbe: fix packed Rx handling and flow port scoping

This series fixes two correctness issues in the cxgbe PMD that can cause
traffic loss on Chelsio T6 adapters, especially when rte_flow QUEUE
rules concentrate ingress on a small set of RX queues.

Patch 1 fixes packed Rx response handling. The current PMD assumes every
response descriptor starts a new Free List buffer by requiring
F_RSPD_NEWBUF on each response. That assumption does not always hold for
packed ingress responses. Under sustained small-packet traffic to a
single ingress queue, the FL/IQ state goes out of sync and the affected
Rx path stops making forward progress.

Patch 2 scopes rte_flow rules to the ingress port they were created on.
Chelsio filters are programmed in adapter-wide tables, and the PMD
already supports the iport field in the hardware filter spec. However,
the flow parser never fills it for normal per-port rules, so a rule
created on one port can also match traffic arriving on sibling ports of
the same adapter.

Both issues reproduce with stock testpmd on T62100-LP-CR.
The per-patch commit messages include the details and reproducers.

Abdulrahman Alshawi (2):
  net/cxgbe: fix Rx handling for packed responses
  net/cxgbe: restrict rte_flow rules to ingress port

 .mailmap                         |   1 +
 drivers/net/cxgbe/base/adapter.h |   1 +
 drivers/net/cxgbe/cxgbe_flow.c   |  19 +++++
 drivers/net/cxgbe/sge.c          | 122 ++++++++++++++++++++++++-------
 4 files changed, 118 insertions(+), 25 deletions(-)

-- 
2.39.5