netdev.vger.kernel.org archive mirror
* [PATCH 4.14,5.4] net/sched: sch_qfq: account for stab overhead in qfq_enqueue
From: Shaoying Xu @ 2023-07-27 20:41 UTC
  To: stable
  Cc: netdev, Shaoying Xu, Lion, Eric Dumazet, Jamal Hadi Salim,
	Pedro Tammela, Simon Horman, Paolo Abeni

[ Upstream commit 3e337087c3b5805fe0b8a46ba622a962880b5d64 ]

Lion says:
-------
In the QFQ scheduler, an issue similar to CVE-2023-31436
persists.

Consider the following code in net/sched/sch_qfq.c:

static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
		       struct sk_buff **to_free)
{
	unsigned int len = qdisc_pkt_len(skb), gso_segs;

	// ...

	if (unlikely(cl->agg->lmax < len)) {
		pr_debug("qfq: increasing maxpkt from %u to %u for class %u",
			 cl->agg->lmax, len, cl->common.classid);
		err = qfq_change_agg(sch, cl, cl->agg->class_weight, len);
		if (err) {
			cl->qstats.drops++;
			return qdisc_drop(skb, sch, to_free);
		}

		// ...
	}

As with CVE-2023-31436, "lmax" is increased to the packet length "len"
without any bounds checks. Usually this would not pose a problem because
packet sizes are naturally limited.

However, this is not the actual packet length but rather the value of
"qdisc_pkt_len(skb)", which may have size transformations applied to it
according to a "struct qdisc_size_table" as created by "qdisc_get_stab()"
in net/sched/sch_api.c if the TCA_STAB option was set when modifying the qdisc.

A user may choose virtually any size using such a table.

As a result, the same issue as in CVE-2023-31436 can occur, allowing heap
out-of-bounds reads/writes in the kmalloc-8192 cache.
-------

We can reproduce the issue with the following commands:

tc qdisc add dev $DEV root handle 1: stab mtu 2048 tsize 512 mpu 0 \
overhead 999999999 linklayer ethernet qfq
tc class add dev $DEV parent 1: classid 1:1 htb rate 6mbit burst 15k
tc filter add dev $DEV parent 1: matchall classid 1:1
ping -I $DEV 1.1.1.2

This is caused by incorrectly assuming that qdisc_pkt_len() returns a
length within the range QFQ_MIN_LMAX < len < QFQ_MAX_LMAX.
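
To make the reproducer's numbers concrete, here is a minimal, hypothetical
userspace sketch (not kernel code) of the first step of the stab length
computation: the user-supplied overhead is added to skb->len before the table
lookup (see __qdisc_calculate_pkt_len() in net/sched/sch_api.c for the real
logic, which also applies the cell_log/size_log mapping). The 98-byte echo
size is only a ballpark figure; the point is that the overhead of 999999999
above already yields a "length" far beyond anything QFQ can represent.

/* Hypothetical, simplified userspace model -- NOT kernel code. */
#include <stdio.h>

#define QFQ_MTU_SHIFT 16	/* as in net/sched/sch_qfq.c */

int main(void)
{
	unsigned int skb_len  = 98;		/* a plain ICMP echo, roughly */
	unsigned int overhead = 999999999;	/* from the stab in the reproducer */
	unsigned int pkt_len  = skb_len + overhead;

	printf("pkt_len=%u, qfq limit=%lu\n", pkt_len, 1UL << QFQ_MTU_SHIFT);
	/* pkt_len is ~1e9, far above the 64 KiB QFQ assumes, so the unchecked
	 * qfq_change_agg(..., len) call ends up with a bogus lmax. */
	return 0;
}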

Fixes: 462dbc9101ac ("pkt_sched: QFQ Plus: fair-queueing service at DRR cost")
Reported-by: Lion <nnamrec@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
[Backport for stable kernels 4.14 and 5.4. Since QFQ_MAX_LMAX is not
defined in these trees, it is replaced with 1UL << QFQ_MTU_SHIFT.]
Cc: <stable@vger.kernel.org> # 4.14, 5.4
Signed-off-by: Shaoying Xu <shaoyi@amazon.com>

---
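Note for reviewers of this backport: in the upstream commit the check reads
"if (lmax > QFQ_MAX_LMAX)"; that macro does not exist in 4.14/5.4, so the hunk
below open-codes the same value. For reference, how the constants read in
mainline net/sched/sch_qfq.c, as I recall them (please verify against the trees):

#define QFQ_MTU_SHIFT	16			/* max packet size, 64 KiB (TSO/GSO) */
#define QFQ_MAX_LMAX	(1UL << QFQ_MTU_SHIFT)	/* not defined in 4.14/5.4 */
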
 net/sched/sch_qfq.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index 603bd3097bd8..34a54dcd95f2 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -375,8 +375,13 @@ static int qfq_change_agg(struct Qdisc *sch, struct qfq_class *cl, u32 weight,
 			   u32 lmax)
 {
 	struct qfq_sched *q = qdisc_priv(sch);
-	struct qfq_aggregate *new_agg = qfq_find_agg(q, lmax, weight);
+	struct qfq_aggregate *new_agg;
 
+	/* 'lmax' can range from [QFQ_MIN_LMAX, pktlen + stab overhead] */
+	if (lmax > (1UL << QFQ_MTU_SHIFT))
+		return -EINVAL;
+
+	new_agg = qfq_find_agg(q, lmax, weight);
 	if (new_agg == NULL) { /* create new aggregate */
 		new_agg = kzalloc(sizeof(*new_agg), GFP_ATOMIC);
 		if (new_agg == NULL)
-- 
2.40.1



* Re: [PATCH 4.14,5.4] net/sched: sch_qfq: account for stab overhead in qfq_enqueue
From: Greg KH @ 2023-08-01  8:22 UTC
  To: Shaoying Xu
  Cc: stable, netdev, Lion, Eric Dumazet, Jamal Hadi Salim,
	Pedro Tammela, Simon Horman, Paolo Abeni

On Thu, Jul 27, 2023 at 08:41:49PM +0000, Shaoying Xu wrote:
> [ Upstream commit 3e337087c3b5805fe0b8a46ba622a962880b5d64 ]
> 
> [...]

Now queued up, thanks.

greg k-h
