public inbox for netdev@vger.kernel.org
From: Mingi Cho <mgcho.minic@gmail.com>
To: Cong Wang <xiyou.wangcong@gmail.com>
Cc: netdev@vger.kernel.org, jhs@mojatatu.com, jiri@resnulli.us,
	mincho@theori.io, victor@mojatatu.com
Subject: Re: [RFC Patch net-next 0/2] net_sched: Move GSO segmentation to root qdisc
Date: Thu, 12 Mar 2026 09:57:57 -0700	[thread overview]
Message-ID: <20260312165757.GA3411905@mingi> (raw)
In-Reply-To: <20250701232915.377351-1-xiyou.wangcong@gmail.com>

On Tue, Jul 01, 2025 at 04:29:13PM -0700, Cong Wang wrote:
> This patchset attempts to move the GSO segmentation in Qdisc layer from
> child qdisc up to root qdisc. It fixes the complex handling of GSO
> segmentation logic and unifies the code in a generic way. The end result
> is cleaner (see the patch stat) and hopefully keeps the original logic
> of handling GSO.
> 
> This is an architectural change, hence I am sending it as an RFC. Please
> check each patch description for more details. Also note that although
> this patchset alone could fix the UAF reported by Mingi, the original
> UAF can also be fixed by Lion's patch [1], so this patchset is just an
> improvement for handling GSO segmentation.
> 
> TODO: Add some selftests.
> 
> 1. https://lore.kernel.org/netdev/d912cbd7-193b-4269-9857-525bee8bbb6a@gmail.com/
> 
> ---
> Cong Wang (2):
>   net_sched: Move GSO segmentation to root qdisc
>   net_sched: Propagate per-qdisc max_segment_size for GSO segmentation
> 
>  include/net/sch_generic.h |  4 +-
>  net/core/dev.c            | 52 +++++++++++++++++++---
>  net/sched/sch_api.c       | 14 ++++++
>  net/sched/sch_cake.c      | 93 +++++++++++++--------------------------
>  net/sched/sch_netem.c     | 32 +-------------
>  net/sched/sch_taprio.c    | 76 +++++++-------------------------
>  net/sched/sch_tbf.c       | 59 +++++--------------------
>  7 files changed, 123 insertions(+), 207 deletions(-)
> 
> -- 
> 2.34.1
> 

Hi Cong,

I tested the proposed patch and confirmed that it fixes the reported bug. A qlen mismatch between qdiscs can potentially lead to a use-after-free, so I believe this patch should be applied.

When the PoC is run on the latest kernel without the patch applied, drr_dequeue() emits the warning shown below.

Before applying the patch:

root@test:~# ./poc
qdisc drr 1: dev lo root refcnt 2
qdisc tbf 2: dev lo parent 1:1 rate 1Mbit burst 1514b lat 50.0ms
qdisc choke 3: dev lo parent 2:1 limit 2p min 1p max 2p
[    7.588847] drr_dequeue: tbf qdisc 2: is non-work-conserving?

After applying the patch on a v6.17 kernel, the warning no longer appears.

After applying the patch:

root@test:~# ./poc
qdisc drr 1: dev lo root refcnt 2
qdisc tbf 2: dev lo parent 1:1 rate 1Mbit burst 1514b lat 50.0ms
qdisc choke 3: dev lo parent 2:1 limit 2p min 1p max 2p

In my testing the patch resolves the bug, though I have not verified whether it introduces any other side effects.

The PoC used for testing is as follows.

#define _GNU_SOURCE

#include <arpa/inet.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/udp.h>

#ifndef SOL_UDP
#define SOL_UDP 17 // IPPROTO_UDP, used as the setsockopt() level
#endif

void loopback_send(uint64_t size) {
    struct sockaddr iaddr = { AF_INET };  /* 0.0.0.0:0, routed over lo */
    char data[0x1000] = {0,};

    int inet_sock_fd = socket(PF_INET, SOCK_DGRAM, 0);

    /* Enable UDP GSO so one large write becomes a single GSO skb
     * that is segmented into 1300-byte datagrams later. */
    int gso_size = 1300;
    setsockopt(inet_sock_fd, SOL_UDP, UDP_SEGMENT, &gso_size, sizeof(gso_size));

    connect(inet_sock_fd, &iaddr, sizeof(iaddr));
    write(inet_sock_fd, data, size);
    close(inet_sock_fd);
}

int main(int argc, char **argv) {
    system("ip link set dev lo up");
    system("ip link set dev lo mtu 1500");

    system("tc qdisc add dev lo root handle 1: drr");
    system("tc filter add dev lo parent 1: basic classid 1:1");
    system("tc class add dev lo parent 1: classid 1:1 drr");
    system("tc class add dev lo parent 1: classid 1:2 drr");

    system("tc qdisc add dev lo parent 1:1 handle 2: tbf rate 1Mbit "
           "burst 1514 latency 50ms");

    system("tc qdisc add dev lo parent 2:1 handle 3: choke limit 2 "
           "bandwidth 1kbit min 1 max 2 burst 1");

    system("tc qdisc show");

    loopback_send(4000);

    system("tc class del dev lo classid 1:1");

    system("timeout 0.1 ping -c 1 -W0.01 localhost > /dev/null");
}

Thanks,
Mingi


Thread overview: 9+ messages
2025-07-01 23:29 [RFC Patch net-next 0/2] net_sched: Move GSO segmentation to root qdisc Cong Wang
2025-07-01 23:29 ` [RFC Patch net-next 1/2] " Cong Wang
2025-07-01 23:29 ` [RFC Patch net-next 2/2] net_sched: Propagate per-qdisc max_segment_size for GSO segmentation Cong Wang
2026-03-12 16:57 ` Mingi Cho [this message]
2026-03-12 19:55   ` [RFC Patch net-next 0/2] net_sched: Move GSO segmentation to root qdisc Cong Wang
2026-03-12 20:21   ` Jamal Hadi Salim
2026-03-13 13:38     ` Mingi Cho
2026-03-13 19:23       ` Jamal Hadi Salim
2026-03-26 10:29         ` Mingi Cho
