public inbox for stable@vger.kernel.org
* [PATCH] net: stmmac: fix integer underflow in chain mode jumbo_frm
@ 2026-03-21  4:10 Tyllis Xu
  2026-03-23 14:18 ` Simon Horman
  0 siblings, 1 reply; 4+ messages in thread
From: Tyllis Xu @ 2026-03-21  4:10 UTC (permalink / raw)
  To: netdev
  Cc: linux-kernel, andrew+netdev, davem, edumazet, kuba, pabeni,
	rmk+kernel, maxime.chevallier, peppe.cavallaro, rayagond, stable,
	danisjiang, ychen, Tyllis Xu

The jumbo_frm() chain-mode implementation unconditionally computes

    len = nopaged_len - bmax;

where nopaged_len = skb_headlen(skb) (linear bytes only) and bmax is
BUF_SIZE_8KiB or BUF_SIZE_2KiB.  However, the caller stmmac_xmit()
decides to invoke jumbo_frm() based on skb->len (total length including
page fragments):

    is_jumbo = stmmac_is_jumbo_frm(priv, skb->len, enh_desc);

When a packet has a small linear portion (nopaged_len <= bmax) but a
large total length due to page fragments (skb->len > bmax), the
subtraction wraps as an unsigned integer, producing a huge len value
(~0xFFFFxxxx).  This causes the while (len != 0) loop to execute
hundreds of thousands of iterations, passing skb->data + bmax * i
pointers far beyond the skb buffer to dma_map_single().  On IOMMU-less
SoCs (the typical deployment for stmmac), this maps arbitrary kernel
memory to the DMA engine, constituting a kernel memory disclosure and
potential memory corruption from hardware.

The ring-mode counterpart already guards against this with:

    if (nopaged_len > BUF_SIZE_8KiB) { ... use len ... }
    else { ... map nopaged_len directly ... }

Apply the same pattern to chain mode: guard the chunked-DMA path with
if (nopaged_len > bmax), and add an else branch that maps the entire
linear portion as a single descriptor when it fits within bmax.  The
fragment loop in stmmac_xmit() handles page fragments afterward.

Fixes: 286a83721720 ("stmmac: add CHAINED descriptor mode support (V4)")
Cc: stable@vger.kernel.org
Signed-off-by: Tyllis Xu <LivelyCarpet87@gmail.com>
---
 drivers/net/ethernet/stmicro/stmmac/chain_mode.c | 71 ++++++++++++++---------
 1 file changed, 44 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/chain_mode.c b/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
index bf351bbec57f..c8980482dea2 100644
--- a/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
+++ b/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
@@ -31,52 +31,65 @@ static int jumbo_frm(struct stmmac_tx_queue *tx_q, struct sk_buff *skb,
 	else
 		bmax = BUF_SIZE_2KiB;

-	len = nopaged_len - bmax;
-
-	des2 = dma_map_single(priv->device, skb->data,
-			      bmax, DMA_TO_DEVICE);
-	desc->des2 = cpu_to_le32(des2);
-	if (dma_mapping_error(priv->device, des2))
-		return -1;
-	tx_q->tx_skbuff_dma[entry].buf = des2;
-	tx_q->tx_skbuff_dma[entry].len = bmax;
-	/* do not close the descriptor and do not set own bit */
-	stmmac_prepare_tx_desc(priv, desc, 1, bmax, csum, STMMAC_CHAIN_MODE,
-			0, false, skb->len);
-
-	while (len != 0) {
-		tx_q->tx_skbuff[entry] = NULL;
-		entry = STMMAC_GET_ENTRY(entry, priv->dma_conf.dma_tx_size);
-		desc = tx_q->dma_tx + entry;
-
-		if (len > bmax) {
-			des2 = dma_map_single(priv->device,
-					      (skb->data + bmax * i),
-					      bmax, DMA_TO_DEVICE);
-			desc->des2 = cpu_to_le32(des2);
-			if (dma_mapping_error(priv->device, des2))
-				return -1;
-			tx_q->tx_skbuff_dma[entry].buf = des2;
-			tx_q->tx_skbuff_dma[entry].len = bmax;
-			stmmac_prepare_tx_desc(priv, desc, 0, bmax, csum,
-					STMMAC_CHAIN_MODE, 1, false, skb->len);
-			len -= bmax;
-			i++;
-		} else {
-			des2 = dma_map_single(priv->device,
-					      (skb->data + bmax * i), len,
-					      DMA_TO_DEVICE);
-			desc->des2 = cpu_to_le32(des2);
-			if (dma_mapping_error(priv->device, des2))
-				return -1;
-			tx_q->tx_skbuff_dma[entry].buf = des2;
-			tx_q->tx_skbuff_dma[entry].len = len;
-			/* last descriptor can be set now */
-			stmmac_prepare_tx_desc(priv, desc, 0, len, csum,
-					STMMAC_CHAIN_MODE, 1, true, skb->len);
-			len = 0;
+	if (nopaged_len > bmax) {
+		len = nopaged_len - bmax;
+
+		des2 = dma_map_single(priv->device, skb->data,
+				      bmax, DMA_TO_DEVICE);
+		desc->des2 = cpu_to_le32(des2);
+		if (dma_mapping_error(priv->device, des2))
+			return -1;
+		tx_q->tx_skbuff_dma[entry].buf = des2;
+		tx_q->tx_skbuff_dma[entry].len = bmax;
+		/* do not close the descriptor and do not set own bit */
+		stmmac_prepare_tx_desc(priv, desc, 1, bmax, csum, STMMAC_CHAIN_MODE,
+				0, false, skb->len);
+
+		while (len != 0) {
+			tx_q->tx_skbuff[entry] = NULL;
+			entry = STMMAC_GET_ENTRY(entry, priv->dma_conf.dma_tx_size);
+			desc = tx_q->dma_tx + entry;
+
+			if (len > bmax) {
+				des2 = dma_map_single(priv->device,
+						      (skb->data + bmax * i),
+						      bmax, DMA_TO_DEVICE);
+				desc->des2 = cpu_to_le32(des2);
+				if (dma_mapping_error(priv->device, des2))
+					return -1;
+				tx_q->tx_skbuff_dma[entry].buf = des2;
+				tx_q->tx_skbuff_dma[entry].len = bmax;
+				stmmac_prepare_tx_desc(priv, desc, 0, bmax, csum,
+						STMMAC_CHAIN_MODE, 1, false, skb->len);
+				len -= bmax;
+				i++;
+			} else {
+				des2 = dma_map_single(priv->device,
+						      (skb->data + bmax * i), len,
+						      DMA_TO_DEVICE);
+				desc->des2 = cpu_to_le32(des2);
+				if (dma_mapping_error(priv->device, des2))
+					return -1;
+				tx_q->tx_skbuff_dma[entry].buf = des2;
+				tx_q->tx_skbuff_dma[entry].len = len;
+				/* last descriptor can be set now */
+				stmmac_prepare_tx_desc(priv, desc, 0, len, csum,
+						STMMAC_CHAIN_MODE, 1, true, skb->len);
+				len = 0;
+			}
 		}
-	}
+	} else {
+		des2 = dma_map_single(priv->device, skb->data,
+				      nopaged_len, DMA_TO_DEVICE);
+		desc->des2 = cpu_to_le32(des2);
+		if (dma_mapping_error(priv->device, des2))
+			return -1;
+		tx_q->tx_skbuff_dma[entry].buf = des2;
+		tx_q->tx_skbuff_dma[entry].len = nopaged_len;
+		stmmac_prepare_tx_desc(priv, desc, 1, nopaged_len, csum,
+				STMMAC_CHAIN_MODE, 0, !skb_is_nonlinear(skb),
+				skb->len);
+	}

 	tx_q->cur_tx = entry;

--
2.39.5

^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH] net: stmmac: fix integer underflow in chain mode jumbo_frm
  2026-03-21  4:10 [PATCH] net: stmmac: fix integer underflow in chain mode jumbo_frm Tyllis Xu
@ 2026-03-23 14:18 ` Simon Horman
  2026-03-24  6:07   ` Tyllis Xu
  0 siblings, 1 reply; 4+ messages in thread
From: Simon Horman @ 2026-03-23 14:18 UTC (permalink / raw)
  To: Tyllis Xu
  Cc: netdev, linux-kernel, andrew+netdev, davem, edumazet, kuba,
	pabeni, rmk+kernel, maxime.chevallier, peppe.cavallaro, rayagond,
	stable, danisjiang, ychen

On Fri, Mar 20, 2026 at 11:10:58PM -0500, Tyllis Xu wrote:
> The jumbo_frm() chain-mode implementation unconditionally computes
> 
>     len = nopaged_len - bmax;
> 
> where nopaged_len = skb_headlen(skb) (linear bytes only) and bmax is
> BUF_SIZE_8KiB or BUF_SIZE_2KiB.  However, the caller stmmac_xmit()
> decides to invoke jumbo_frm() based on skb->len (total length including
> page fragments):
> 
>     is_jumbo = stmmac_is_jumbo_frm(priv, skb->len, enh_desc);
> 
> When a packet has a small linear portion (nopaged_len <= bmax) but a
> large total length due to page fragments (skb->len > bmax), the
> subtraction wraps as an unsigned integer, producing a huge len value
> (~0xFFFFxxxx).  This causes the while (len != 0) loop to execute
> hundreds of thousands of iterations, passing skb->data + bmax * i
> pointers far beyond the skb buffer to dma_map_single().  On IOMMU-less
> SoCs (the typical deployment for stmmac), this maps arbitrary kernel
> memory to the DMA engine, constituting a kernel memory disclosure and
> potential memory corruption from hardware.
> 
> The ring-mode counterpart already guards against this with:
> 
>     if (nopaged_len > BUF_SIZE_8KiB) { ... use len ... }
>     else { ... map nopaged_len directly ... }
> 
> Apply the same pattern to chain mode: guard the chunked-DMA path with
> if (nopaged_len > bmax), and add an else branch that maps the entire
> linear portion as a single descriptor when it fits within bmax.  The
> fragment loop in stmmac_xmit() handles page fragments afterward.
> 
> Fixes: 286a83721720 ("stmmac: add CHAINED descriptor mode support (V4)")
> Cc: stable@vger.kernel.org
> Signed-off-by: Tyllis Xu <LivelyCarpet87@gmail.com>

As a fix for code present in net this patch should be targeted at the net
tree like this:

Subject: [PATCH net] net: stmmac: fix integer underflow in chain mode

As is, our CI tries to apply this patch to the default tree, net-next.
Which fails due to a conflict with commit 6b4286e05508 ("net: stmmac:
rename STMMAC_GET_ENTRY() -> STMMAC_NEXT_ENTRY()"). So no CI tests were
run.

> ---
>  drivers/net/ethernet/stmicro/stmmac/chain_mode.c | 71 ++++++++++++++---------
>  1 file changed, 44 insertions(+), 27 deletions(-)

The bulk of this patch is whitespace change (indentation).
So seems useful to examine this patch with whitespace changes ignored.

git diff -w yields:

diff --git a/drivers/net/ethernet/stmicro/stmmac/chain_mode.c b/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
index 120a009c9992..c8980482dea2 100644
--- a/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
+++ b/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
@@ -31,6 +31,7 @@ static int jumbo_frm(struct stmmac_tx_queue *tx_q, struct sk_buff *skb,
 	else
 		bmax = BUF_SIZE_2KiB;
 
+	if (nopaged_len > bmax) {
 		len = nopaged_len - bmax;
 
 		des2 = dma_map_single(priv->device, skb->data,
@@ -77,6 +78,18 @@ static int jumbo_frm(struct stmmac_tx_queue *tx_q, struct sk_buff *skb,
 				len = 0;
 			}
 		}
+	} else {
+		des2 = dma_map_single(priv->device, skb->data,
+				      nopaged_len, DMA_TO_DEVICE);
+		desc->des2 = cpu_to_le32(des2);
+		if (dma_mapping_error(priv->device, des2))
+			return -1;
+		tx_q->tx_skbuff_dma[entry].buf = des2;
+		tx_q->tx_skbuff_dma[entry].len = nopaged_len;
+		stmmac_prepare_tx_desc(priv, desc, 1, nopaged_len, csum,
+				STMMAC_CHAIN_MODE, 0, !skb_is_nonlinear(skb),
+				skb->len);
+	}
 
 	tx_q->cur_tx = entry;

The code in the else arm of the new condition is quite similar to
the (not visible in the diff above) code at the top of the non-else
arm of the condition.

I do see this is consistent with the ring-mode code.  So perhaps it is
appropriate as a fix. But I do wonder if this could be consolidated - e.g.
by setting up some local variables rather than moving the mapping logic
into a condition.


* Re: [PATCH] net: stmmac: fix integer underflow in chain mode jumbo_frm
  2026-03-23 14:18 ` Simon Horman
@ 2026-03-24  6:07   ` Tyllis Xu
  2026-03-25 17:09     ` Russell King (Oracle)
  0 siblings, 1 reply; 4+ messages in thread
From: Tyllis Xu @ 2026-03-24  6:07 UTC (permalink / raw)
  To: Simon Horman
  Cc: netdev, linux-kernel, andrew+netdev, davem, edumazet, kuba,
	pabeni, rmk+kernel, maxime.chevallier, peppe.cavallaro, rayagond,
	stable, danisjiang, ychen

I will try to change the code to consolidate with the ring-mode
code, avoid whitespace diff, and resubmit the patch with a new
correct subject line. Thank you for your feedback!

On Mon, Mar 23, 2026 at 9:18 AM Simon Horman <horms@kernel.org> wrote:
>
> On Fri, Mar 20, 2026 at 11:10:58PM -0500, Tyllis Xu wrote:
> > The jumbo_frm() chain-mode implementation unconditionally computes
> >
> >     len = nopaged_len - bmax;
> >
> > where nopaged_len = skb_headlen(skb) (linear bytes only) and bmax is
> > BUF_SIZE_8KiB or BUF_SIZE_2KiB.  However, the caller stmmac_xmit()
> > decides to invoke jumbo_frm() based on skb->len (total length including
> > page fragments):
> >
> >     is_jumbo = stmmac_is_jumbo_frm(priv, skb->len, enh_desc);
> >
> > When a packet has a small linear portion (nopaged_len <= bmax) but a
> > large total length due to page fragments (skb->len > bmax), the
> > subtraction wraps as an unsigned integer, producing a huge len value
> > (~0xFFFFxxxx).  This causes the while (len != 0) loop to execute
> > hundreds of thousands of iterations, passing skb->data + bmax * i
> > pointers far beyond the skb buffer to dma_map_single().  On IOMMU-less
> > SoCs (the typical deployment for stmmac), this maps arbitrary kernel
> > memory to the DMA engine, constituting a kernel memory disclosure and
> > potential memory corruption from hardware.
> >
> > The ring-mode counterpart already guards against this with:
> >
> >     if (nopaged_len > BUF_SIZE_8KiB) { ... use len ... }
> >     else { ... map nopaged_len directly ... }
> >
> > Apply the same pattern to chain mode: guard the chunked-DMA path with
> > if (nopaged_len > bmax), and add an else branch that maps the entire
> > linear portion as a single descriptor when it fits within bmax.  The
> > fragment loop in stmmac_xmit() handles page fragments afterward.
> >
> > Fixes: 286a83721720 ("stmmac: add CHAINED descriptor mode support (V4)")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Tyllis Xu <LivelyCarpet87@gmail.com>
>
> As a fix for code present in net this patch should be targeted at the net
> tree like this:
>
> Subject: [PATCH net] net: stmmac: fix integer underflow in chain mode
>
> As is, our CI tries to apply this patch to the default tree, net-next.
> Which fails due to a conflict with commit 6b4286e05508 ("net: stmmac:
> rename STMMAC_GET_ENTRY() -> STMMAC_NEXT_ENTRY()"). So no CI tests were
> run.
>
> > ---
> >  drivers/net/ethernet/stmicro/stmmac/chain_mode.c | 71 ++++++++++++++---------
> >  1 file changed, 44 insertions(+), 27 deletions(-)
>
> The bulk of this patch is whitespace change (indentation).
> So seems useful to examine this patch with whitespace changes ignored.
>
> git diff -w yields:
>
> diff --git a/drivers/net/ethernet/stmicro/stmmac/chain_mode.c b/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
> index 120a009c9992..c8980482dea2 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/chain_mode.c
> @@ -31,6 +31,7 @@ static int jumbo_frm(struct stmmac_tx_queue *tx_q, struct sk_buff *skb,
>         else
>                 bmax = BUF_SIZE_2KiB;
>
> +       if (nopaged_len > bmax) {
>                 len = nopaged_len - bmax;
>
>                 des2 = dma_map_single(priv->device, skb->data,
> @@ -77,6 +78,18 @@ static int jumbo_frm(struct stmmac_tx_queue *tx_q, struct sk_buff *skb,
>                                 len = 0;
>                         }
>                 }
> +       } else {
> +               des2 = dma_map_single(priv->device, skb->data,
> +                                     nopaged_len, DMA_TO_DEVICE);
> +               desc->des2 = cpu_to_le32(des2);
> +               if (dma_mapping_error(priv->device, des2))
> +                       return -1;
> +               tx_q->tx_skbuff_dma[entry].buf = des2;
> +               tx_q->tx_skbuff_dma[entry].len = nopaged_len;
> +               stmmac_prepare_tx_desc(priv, desc, 1, nopaged_len, csum,
> +                               STMMAC_CHAIN_MODE, 0, !skb_is_nonlinear(skb),
> +                               skb->len);
> +       }
>
>         tx_q->cur_tx = entry;
>
> The code in the else arm of the new condition is quite similar to
> the (not visible in the diff above) code at the top of the non-else
> arm of the condition.
>
> I do see this is consistent with the ring-mode code.  So perhaps it is
> appropriate as a fix. But I do wonder if this could be consolidated - e.g.
> by setting up some local variables rather than moving the mapping logic
> into a condition.


* Re: [PATCH] net: stmmac: fix integer underflow in chain mode jumbo_frm
  2026-03-24  6:07   ` Tyllis Xu
@ 2026-03-25 17:09     ` Russell King (Oracle)
  0 siblings, 0 replies; 4+ messages in thread
From: Russell King (Oracle) @ 2026-03-25 17:09 UTC (permalink / raw)
  To: Tyllis Xu
  Cc: Simon Horman, netdev, linux-kernel, andrew+netdev, davem,
	edumazet, kuba, pabeni, maxime.chevallier, peppe.cavallaro,
	rayagond, stable, danisjiang, ychen

On Tue, Mar 24, 2026 at 01:07:47AM -0500, Tyllis Xu wrote:
> I will try to change the code to consolidate with the ring-mode
> code, avoid whitespace diff, and resubmit the patch with a new
> correct subject line. Thank you for your feedback!

I mentioned the possibilities of other problems. How about this:

static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
{
        unsigned int nopaged_len = skb_headlen(skb);
...
        enh_desc = priv->plat->enh_desc;
        /* To program the descriptors according to the size of the frame */
        if (enh_desc)
                is_jumbo = stmmac_is_jumbo_frm(priv, skb->len, enh_desc);

1. enh_desc is set if priv->dma_cap.enh_desc is set and DMA capabilities
   exist (dwmac1000 and more recent).

2. in chain mode, is_jumbo will be true if skb->len > 8KiB.
   in ring mode, is_jumbo will be true if skb->len >= 4KiB
   (note the >= vs > there which is suspicious.)

Let's consider the case where is_jumbo is false. So this could be a
packet with skb->len up to 4KiB-1 or 8KiB.

        if (unlikely(is_jumbo)) {
        } else {
                dma_addr = dma_map_single(priv->device, skb->data,
                                          nopaged_len, DMA_TO_DEVICE);

                stmmac_set_desc_addr(priv, first_desc, dma_addr);

                /* Prepare the first descriptor without setting the OWN bit */
                stmmac_prepare_tx_desc(priv, first_desc, 1, nopaged_len,
                                       csum_insertion, priv->descriptor_mode,
                                       0, last_segment, skb->len);

This can call one of several functions.

ndesc_prepare_tx_desc():
        if (descriptor_mode == STMMAC_CHAIN_MODE)
                norm_set_tx_desc_len_on_chain(p, len);
        else
                norm_set_tx_desc_len_on_ring(p, len);

static inline void norm_set_tx_desc_len_on_chain(struct dma_desc *p, int len)
{
        p->des1 |= cpu_to_le32(len & TDES1_BUFFER1_SIZE_MASK);
}

#define TDES1_BUFFER1_SIZE_MASK         GENMASK(10, 0)

So, this masks the length with 0x7ff, meaning this can represent
buffers up to 2047 bytes _max_ for normal descriptors in chain
mode.

static inline void norm_set_tx_desc_len_on_ring(struct dma_desc *p, int len)
{
        unsigned int buffer1_max_length = BUF_SIZE_2KiB - 1;

        if (unlikely(len > buffer1_max_length)) {
                p->des1 |= cpu_to_le32(FIELD_PREP(TDES1_BUFFER2_SIZE_MASK,
                                                  len - buffer1_max_length) |
                                       FIELD_PREP(TDES1_BUFFER1_SIZE_MASK,
                                                  buffer1_max_length));
        } else {
                p->des1 |= cpu_to_le32(FIELD_PREP(TDES1_BUFFER1_SIZE_MASK,
                                                  len));
        }
}

#define TDES1_BUFFER2_SIZE_MASK         GENMASK(21, 11)

This works around the 2KiB limitation, and fills buffer 1 with
2047 bytes and buffer 2 with the remainder... but there is _no_ code
that writes buffer 2's address in this case. This means we will
transmit garbage - and unknowingly if transmit COE is enabled (because
the checksums will be based on the data the NIC read from memory.)

However, normal descriptors in ring mode can correctly transmit up
to 2047 bytes, or garbage after that up to 4094 bytes.

Now, this in the probe function:

                ndev->max_mtu = SKB_MAX_HEAD(NET_SKB_PAD + NET_IP_ALIGN);

equates to:

   PAGE_SIZE - NET_SKB_PAD - NET_IP_ALIGN - aligned skb_shared_info.

Notice that this doesn't limit by what the hardware can actually do.

Consider the implications where PAGE_SIZE is 16KiB or 64KiB and the
stmmac hardware only supports normal descriptors. max_mtu gets set
to something close to PAGE_SIZE. Even for 4KiB, it'll be close to that.
If skb_headlen(skb) can approach max_mtu, then we have the very real
possibility of overflowing the first descriptor - which in normal
mode can only really handle up to 2047 bytes.

This has been on my list of issues to try and sort out at some point;
at the moment I don't have any patches, but I'm instead trying to
clean up the code to make it more understandable so I can start
thinking about possible solutions. The code here has been a total
trainwreck, and some of those cleanups have recently gone in to
net-next.

I think the driver has only really been tested with the standard
1500 byte MTU, and maybe briefly with the 9KiB jumbo MTU, but honestly
I feel like saying that the transmit paths need to be thrown away and
rewritten - including that absolutely wrong max_mtu setting in the
probe function.

However, the problem is... I don't have hardware that uses normal or
enhanced descriptors - my hardware is a nVidia Xavier, which is a
dwmac4/5 implementation using a different descriptor format, so I
can only make changes through sets of simple transformations.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!


end of thread, other threads:[~2026-03-25 17:09 UTC | newest]

Thread overview: 4+ messages
2026-03-21  4:10 [PATCH] net: stmmac: fix integer underflow in chain mode jumbo_frm Tyllis Xu
2026-03-23 14:18 ` Simon Horman
2026-03-24  6:07   ` Tyllis Xu
2026-03-25 17:09     ` Russell King (Oracle)
