public inbox for netdev@vger.kernel.org
 help / color / mirror / Atom feed
* [RFC PATCHv2 0/1] idpf: IDPF + SWIOTLB Bug
@ 2026-02-27 20:34 Steve Rutherford
  2026-02-27 20:34 ` [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled Steve Rutherford
  0 siblings, 1 reply; 12+ messages in thread
From: Steve Rutherford @ 2026-02-27 20:34 UTC (permalink / raw)
  To: Tony Nguyen, Przemek Kitszel, aleksander.lobakin, David S. Miller,
	Jakub Kicinski, Eric Dumazet, intel-wired-lan
  Cc: netdev, linux-kernel, David Decotigny, Anjali Singhai,
	Sridhar Samudrala, Brian Vazquez, Li Li, emil.s.tantilov,
	Steve Rutherford

Found an issue with the IDPF driver when SWIOTLB is enabled. The issue
results in empty headers for packets that hit the split queue workaround
path. It's caused by a spurious sync in that path. The header is synced
from the SWIOTLB even when the header was shoved into the payload.

I cooked up a sample patch, but I'm not an expert in this driver, so I have
no idea if it's the right solution. It did allow my QEMU VM to boot with a
superficially functional passed-through IDPF NIC and swiotlb=force.

The patch was written against COS's 6.12, so I assume that it will not
apply cleanly elsewhere, but I figured a wrong sample patch was better than
a long paragraph describing the same thing. My read of more recent kernels
is that this problem is still present, but I could be mistaken.

v2 - Updated title and tags based on feedback.

Steve Rutherford (1):
  idpf: Fix header clobber in IDPF with SWIOTLB enabled

 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

-- 
2.53.0.473.g4a7958ca14-goog


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-02-27 20:34 [RFC PATCHv2 0/1] idpf: IDPF + SWIOTLB Bug Steve Rutherford
@ 2026-02-27 20:34 ` Steve Rutherford
  2026-03-02  7:17   ` [Intel-wired-lan] " Loktionov, Aleksandr
  2026-03-03 15:31   ` Alexander Lobakin
  0 siblings, 2 replies; 12+ messages in thread
From: Steve Rutherford @ 2026-02-27 20:34 UTC (permalink / raw)
  To: Tony Nguyen, Przemek Kitszel, aleksander.lobakin, David S. Miller,
	Jakub Kicinski, Eric Dumazet, intel-wired-lan
  Cc: netdev, linux-kernel, David Decotigny, Anjali Singhai,
	Sridhar Samudrala, Brian Vazquez, Li Li, emil.s.tantilov,
	Steve Rutherford

When SWIOTLB and header split are enabled, IDPF sees empty packets in the
Rx queue.

This is caused by libeth_rx_sync_for_cpu clobbering the synthesized header
in the workaround (i.e. overflow) path. After the header is synthesized by
idpf_rx_hsplit_wa, the sync call pulls from the empty SWIOTLB buffer,
effectively zeroing out the synthesized header.

This skips the extra sync in the workaround path in most cases. The one
exception is that it still calls sync to trigger a recycle of the header
buffer when it fails to find a header in the payload.

Fixes: 90912f9f4f2d1 ("idpf: convert header split mode to libeth + napi_build_skb()")
Signed-off-by: Steve Rutherford <srutherford@google.com>
---
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 3ddf7b1e85ef..946203a6bd86 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -3007,9 +3007,14 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
 			u64_stats_update_begin(&rxq->stats_sync);
 			u64_stats_inc(&rxq->q_stats.hsplit_buf_ovf);
 			u64_stats_update_end(&rxq->stats_sync);
-		}
 
-		if (libeth_rx_sync_for_cpu(hdr, hdr_len)) {
+			/* Recycle the hdr buffer if unused.*/
+			if (!hdr_len)
+				libeth_rx_sync_for_cpu(hdr, 0);
+		} else if (!libeth_rx_sync_for_cpu(hdr, hdr_len))
+			hdr_len = 0;
+
+		if (hdr_len) {
 			skb = idpf_rx_build_skb(hdr, hdr_len);
 			if (!skb)
 				break;
-- 
2.53.0.473.g4a7958ca14-goog


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* RE: [Intel-wired-lan] [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-02-27 20:34 ` [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled Steve Rutherford
@ 2026-03-02  7:17   ` Loktionov, Aleksandr
  2026-03-03 15:31   ` Alexander Lobakin
  1 sibling, 0 replies; 12+ messages in thread
From: Loktionov, Aleksandr @ 2026-03-02  7:17 UTC (permalink / raw)
  To: Steve Rutherford, Nguyen, Anthony L, Kitszel, Przemyslaw,
	Lobakin, Aleksander, David S. Miller, Jakub Kicinski,
	Eric Dumazet, intel-wired-lan@lists.osuosl.org
  Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	David Decotigny, Singhai, Anjali, Samudrala, Sridhar,
	Brian Vazquez, Li Li, Tantilov, Emil S



> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf
> Of Steve Rutherford
> Sent: Friday, February 27, 2026 9:35 PM
> To: Nguyen, Anthony L <anthony.l.nguyen@intel.com>; Kitszel,
> Przemyslaw <przemyslaw.kitszel@intel.com>; Lobakin, Aleksander
> <aleksander.lobakin@intel.com>; David S. Miller <davem@davemloft.net>;
> Jakub Kicinski <kuba@kernel.org>; Eric Dumazet <edumazet@google.com>;
> intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; linux-kernel@vger.kernel.org; David
> Decotigny <decot@google.com>; Singhai, Anjali
> <anjali.singhai@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>; Brian Vazquez <brianvv@google.com>; Li
> Li <boolli@google.com>; Tantilov, Emil S <emil.s.tantilov@intel.com>;
> Steve Rutherford <srutherford@google.com>
> Subject: [Intel-wired-lan] [RFC PATCHv2 1/1] idpf: Fix header clobber
> in IDPF with SWIOTLB enabled
> 
> When SWIOTLB and header split are enabled, IDPF sees empty packets in
> the rx queue.
> 
> This is caused by libeth_rx_sync_for_cpu clobbering the synthesized
> header in the workaround (i.e. overflow) path. After the header is
> synthesized by idpf_rx_hsplit_wa, the sync call pulls from the empty
> SWIOTLB buffer, effectively zeroing out the buffer.
> 
> This skips the extra sync in the workaround path in most cases. The
> one exception is that it calls sync to trigger a recycle the header
> buffer when it fails to find a header in the payload.
> 
> Fixes: 90912f9f4f2d1 ("idpf: convert header split mode to libeth +
> napi_build_skb()")
> Signed-off-by: Steve Rutherford <srutherford@google.com>
> ---
>  drivers/net/ethernet/intel/idpf/idpf_txrx.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 3ddf7b1e85ef..946203a6bd86 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -3007,9 +3007,14 @@ static int idpf_rx_splitq_clean(struct
> idpf_rx_queue *rxq, int budget)
>  			u64_stats_update_begin(&rxq->stats_sync);
>  			u64_stats_inc(&rxq->q_stats.hsplit_buf_ovf);
>  			u64_stats_update_end(&rxq->stats_sync);
> -		}
> 
> -		if (libeth_rx_sync_for_cpu(hdr, hdr_len)) {
> +			/* Recycle the hdr buffer if unused.*/
Just a nit - please add space before */

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>

> +			if (!hdr_len)
> +				libeth_rx_sync_for_cpu(hdr, 0);
> +		} else if (!libeth_rx_sync_for_cpu(hdr, hdr_len))
> +			hdr_len = 0;
> +
> +		if (hdr_len) {
>  			skb = idpf_rx_build_skb(hdr, hdr_len);
>  			if (!skb)
>  				break;
> --
> 2.53.0.473.g4a7958ca14-goog


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-02-27 20:34 ` [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled Steve Rutherford
  2026-03-02  7:17   ` [Intel-wired-lan] " Loktionov, Aleksandr
@ 2026-03-03 15:31   ` Alexander Lobakin
  2026-03-03 19:44     ` Steve Rutherford
  1 sibling, 1 reply; 12+ messages in thread
From: Alexander Lobakin @ 2026-03-03 15:31 UTC (permalink / raw)
  To: Steve Rutherford, Tony Nguyen, Przemek Kitszel, David S. Miller,
	Jakub Kicinski, Eric Dumazet, intel-wired-lan
  Cc: netdev, linux-kernel, David Decotigny, Anjali Singhai,
	Sridhar Samudrala, Brian Vazquez, Li Li, emil.s.tantilov

From: Steve Rutherford <srutherford@google.com>
Date: Fri, 27 Feb 2026 20:34:57 +0000

> When SWIOTLB and header split are enabled, IDPF sees empty packets in the
> rx queue.
> 
> This is caused by libeth_rx_sync_for_cpu clobbering the synthesized header
> in the workaround (i.e. overflow) path. After the header is synthesized by
> idpf_rx_hsplit_wa, the sync call pulls from the empty SWIOTLB buffer,
> effectively zeroing out the buffer.
> 
> This skips the extra sync in the workaround path in most cases. The one
> exception is that it calls sync to trigger a recycle the header buffer when
> it fails to find a header in the payload.
> 
> Fixes: 90912f9f4f2d1 ("idpf: convert header split mode to libeth + napi_build_skb()")
> Signed-off-by: Steve Rutherford <srutherford@google.com>
> ---
>  drivers/net/ethernet/intel/idpf/idpf_txrx.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 3ddf7b1e85ef..946203a6bd86 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -3007,9 +3007,14 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
>  			u64_stats_update_begin(&rxq->stats_sync);
>  			u64_stats_inc(&rxq->q_stats.hsplit_buf_ovf);
>  			u64_stats_update_end(&rxq->stats_sync);
> -		}
>  
> -		if (libeth_rx_sync_for_cpu(hdr, hdr_len)) {
> +			/* Recycle the hdr buffer if unused.*/
> +			if (!hdr_len)
> +				libeth_rx_sync_for_cpu(hdr, 0);
> +		} else if (!libeth_rx_sync_for_cpu(hdr, hdr_len))
> +			hdr_len = 0;
> +
> +		if (hdr_len) {

This is for a very old tree, I believe? We've had
libeth_xdp_process_buff() there for quite some time now.

>  			skb = idpf_rx_build_skb(hdr, hdr_len);
>  			if (!skb)
>  				break;

Thanks,
Olek

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-03-03 15:31   ` Alexander Lobakin
@ 2026-03-03 19:44     ` Steve Rutherford
  2026-03-04 15:11       ` Alexander Lobakin
  0 siblings, 1 reply; 12+ messages in thread
From: Steve Rutherford @ 2026-03-03 19:44 UTC (permalink / raw)
  To: Alexander Lobakin
  Cc: Tony Nguyen, Przemek Kitszel, David S. Miller, Jakub Kicinski,
	Eric Dumazet, intel-wired-lan, netdev, linux-kernel,
	David Decotigny, Anjali Singhai, Sridhar Samudrala, Brian Vazquez,
	Li Li, emil.s.tantilov

On Tue, Mar 3, 2026 at 7:34 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Steve Rutherford <srutherford@google.com>
> Date: Fri, 27 Feb 2026 20:34:57 +0000
>
> > When SWIOTLB and header split are enabled, IDPF sees empty packets in the
> > rx queue.
> >
> > This is caused by libeth_rx_sync_for_cpu clobbering the synthesized header
> > in the workaround (i.e. overflow) path. After the header is synthesized by
> > idpf_rx_hsplit_wa, the sync call pulls from the empty SWIOTLB buffer,
> > effectively zeroing out the buffer.
> >
> > This skips the extra sync in the workaround path in most cases. The one
> > exception is that it calls sync to trigger a recycle the header buffer when
> > it fails to find a header in the payload.
> >
> > Fixes: 90912f9f4f2d1 ("idpf: convert header split mode to libeth + napi_build_skb()")
> > Signed-off-by: Steve Rutherford <srutherford@google.com>
> > ---
> >  drivers/net/ethernet/intel/idpf/idpf_txrx.c | 9 +++++++--
> >  1 file changed, 7 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > index 3ddf7b1e85ef..946203a6bd86 100644
> > --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > @@ -3007,9 +3007,14 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
> >                       u64_stats_update_begin(&rxq->stats_sync);
> >                       u64_stats_inc(&rxq->q_stats.hsplit_buf_ovf);
> >                       u64_stats_update_end(&rxq->stats_sync);
> > -             }
> >
> > -             if (libeth_rx_sync_for_cpu(hdr, hdr_len)) {
> > +                     /* Recycle the hdr buffer if unused.*/
> > +                     if (!hdr_len)
> > +                             libeth_rx_sync_for_cpu(hdr, 0);
> > +             } else if (!libeth_rx_sync_for_cpu(hdr, hdr_len))
> > +                     hdr_len = 0;
> > +
> > +             if (hdr_len) {
>
> This is for a very old tree I believe? We now have
> libeth_xdp_process_buff() there for quite some time already.

It is, yeah. I thought I posted a cover letter with more of a description, but,
frankly, I may have messed up the process of posting.

From the cover letter -
Found an issue with the IDPF driver when SWIOTLB is enabled. The issue
results in empty headers for packets that hit the split queue workaround
path. It's caused by a spurious sync in that path. The header is synced
from the SWIOTLB even when the header was shoved into the payload.

I cooked up a sample patch, but I'm not an expert in this driver, so I have
no idea if it's the right solution. It did allow my QEMU VM to boot with a
superficially functional passed-through IDPF NIC and SWIOTLB=force.

The patch was written against COS's 6.12, so I assume that it will not
apply cleanly elsewhere, but I figured a wrong sample patch was better than
a long paragraph describing the same thing. My read of more recent kernels
is that this problem is still present, but could be mistaken.

Thanks,
Steve

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-03-03 19:44     ` Steve Rutherford
@ 2026-03-04 15:11       ` Alexander Lobakin
  2026-03-04 22:01         ` Steve Rutherford
  0 siblings, 1 reply; 12+ messages in thread
From: Alexander Lobakin @ 2026-03-04 15:11 UTC (permalink / raw)
  To: Steve Rutherford
  Cc: Tony Nguyen, Przemek Kitszel, David S. Miller, Jakub Kicinski,
	Eric Dumazet, intel-wired-lan, netdev, linux-kernel,
	David Decotigny, Anjali Singhai, Sridhar Samudrala, Brian Vazquez,
	Li Li, emil.s.tantilov

From: Steve Rutherford <srutherford@google.com>
Date: Tue, 3 Mar 2026 11:44:19 -0800

> On Tue, Mar 3, 2026 at 7:34 AM Alexander Lobakin
> <aleksander.lobakin@intel.com> wrote:
>>
>> From: Steve Rutherford <srutherford@google.com>
>> Date: Fri, 27 Feb 2026 20:34:57 +0000
>>
>>> When SWIOTLB and header split are enabled, IDPF sees empty packets in the
>>> rx queue.
>>>
>>> This is caused by libeth_rx_sync_for_cpu clobbering the synthesized header
>>> in the workaround (i.e. overflow) path. After the header is synthesized by
>>> idpf_rx_hsplit_wa, the sync call pulls from the empty SWIOTLB buffer,
>>> effectively zeroing out the buffer.
>>>
>>> This skips the extra sync in the workaround path in most cases. The one
>>> exception is that it calls sync to trigger a recycle the header buffer when
>>> it fails to find a header in the payload.
>>>
>>> Fixes: 90912f9f4f2d1 ("idpf: convert header split mode to libeth + napi_build_skb()")
>>> Signed-off-by: Steve Rutherford <srutherford@google.com>
>>> ---
>>>  drivers/net/ethernet/intel/idpf/idpf_txrx.c | 9 +++++++--
>>>  1 file changed, 7 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
>>> index 3ddf7b1e85ef..946203a6bd86 100644
>>> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
>>> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
>>> @@ -3007,9 +3007,14 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
>>>                       u64_stats_update_begin(&rxq->stats_sync);
>>>                       u64_stats_inc(&rxq->q_stats.hsplit_buf_ovf);
>>>                       u64_stats_update_end(&rxq->stats_sync);
>>> -             }
>>>
>>> -             if (libeth_rx_sync_for_cpu(hdr, hdr_len)) {
>>> +                     /* Recycle the hdr buffer if unused.*/
>>> +                     if (!hdr_len)
>>> +                             libeth_rx_sync_for_cpu(hdr, 0);
>>> +             } else if (!libeth_rx_sync_for_cpu(hdr, hdr_len))
>>> +                     hdr_len = 0;
>>> +
>>> +             if (hdr_len) {
>>
>> This is for a very old tree I believe? We now have
>> libeth_xdp_process_buff() there for quite some time already.
> 
> It is, yeah. I thought I posted a cover letter with more of a description, but,
> frankly, I may have messed up the process of posting.
> 
> From the cover letter -
> Found an issue with the IDPF driver when SWIOTLB is enabled. The issue
> results in empty headers for packets that hit the split queue workaround
> path. It's caused by a spurious sync in that path. The header is synced
> from the SWIOTLB even when the header was shoved into the payload.
> 
> I cooked up a sample patch, but I'm not an expert in this driver, so I have
> no idea if it's the right solution. It did allow my QEMU VM to boot with a
> superficially functional passed-through IDPF NIC and SWIOTLB=force.
> 
> The patch was written against COS's 6.12, so I assume that it will not
> apply cleanly elsewhere, but I figured a wrong sample patch was better than
> a long paragraph describing the same thing. My read of more recent kernels
> is that this problem is still present, but could be mistaken.

Ooops, sorry, I haven't read the cover letter =\

Did I get it correctly that in case of SWIOTLB, we can't sync the same
buffer two times? But if the hsplit W/A was applied, then this double
sync corrupts the data?

I'll prepare a patch for the latest net (with you as Co-developed-by or
any other tag you prefer) once I find a way to make this play nicely with
libeth_xdp_process_buff(), which performs an unconditional sync and bails
out if it returns false.

> 
> Thanks,
> Steve

Thanks,
Olek

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-03-04 15:11       ` Alexander Lobakin
@ 2026-03-04 22:01         ` Steve Rutherford
  2026-03-06 14:50           ` Alexander Lobakin
  0 siblings, 1 reply; 12+ messages in thread
From: Steve Rutherford @ 2026-03-04 22:01 UTC (permalink / raw)
  To: Alexander Lobakin
  Cc: Tony Nguyen, Przemek Kitszel, David S. Miller, Jakub Kicinski,
	Eric Dumazet, intel-wired-lan, netdev, linux-kernel,
	David Decotigny, Anjali Singhai, Sridhar Samudrala, Brian Vazquez,
	Li Li, emil.s.tantilov

I believe syncing twice isn't inherently wrong - it's more that you
can't synthesize the header via the workaround and then sync, since it
will pull the uninitialized header buffer from the SWIOTLB. Outside of
SWIOTLB, dma syncs are more or less no-ops, while (with SWIOTLB) they
are copies from/to the bounce buffers.

On Wed, Mar 4, 2026 at 7:13 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Steve Rutherford <srutherford@google.com>
> Date: Tue, 3 Mar 2026 11:44:19 -0800
>
> > On Tue, Mar 3, 2026 at 7:34 AM Alexander Lobakin
> > <aleksander.lobakin@intel.com> wrote:
> >>
> >> From: Steve Rutherford <srutherford@google.com>
> >> Date: Fri, 27 Feb 2026 20:34:57 +0000
> >>
> >>> When SWIOTLB and header split are enabled, IDPF sees empty packets in the
> >>> rx queue.
> >>>
> >>> This is caused by libeth_rx_sync_for_cpu clobbering the synthesized header
> >>> in the workaround (i.e. overflow) path. After the header is synthesized by
> >>> idpf_rx_hsplit_wa, the sync call pulls from the empty SWIOTLB buffer,
> >>> effectively zeroing out the buffer.
> >>>
> >>> This skips the extra sync in the workaround path in most cases. The one
> >>> exception is that it calls sync to trigger a recycle the header buffer when
> >>> it fails to find a header in the payload.
> >>>
> >>> Fixes: 90912f9f4f2d1 ("idpf: convert header split mode to libeth + napi_build_skb()")
> >>> Signed-off-by: Steve Rutherford <srutherford@google.com>
> >>> ---
> >>>  drivers/net/ethernet/intel/idpf/idpf_txrx.c | 9 +++++++--
> >>>  1 file changed, 7 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> >>> index 3ddf7b1e85ef..946203a6bd86 100644
> >>> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> >>> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> >>> @@ -3007,9 +3007,14 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
> >>>                       u64_stats_update_begin(&rxq->stats_sync);
> >>>                       u64_stats_inc(&rxq->q_stats.hsplit_buf_ovf);
> >>>                       u64_stats_update_end(&rxq->stats_sync);
> >>> -             }
> >>>
> >>> -             if (libeth_rx_sync_for_cpu(hdr, hdr_len)) {
> >>> +                     /* Recycle the hdr buffer if unused.*/
> >>> +                     if (!hdr_len)
> >>> +                             libeth_rx_sync_for_cpu(hdr, 0);
> >>> +             } else if (!libeth_rx_sync_for_cpu(hdr, hdr_len))
> >>> +                     hdr_len = 0;
> >>> +
> >>> +             if (hdr_len) {
> >>
> >> This is for a very old tree I believe? We now have
> >> libeth_xdp_process_buff() there for quite some time already.
> >
> > It is, yeah. I thought I posted a cover letter with more of a description, but,
> > frankly, I may have messed up the process of posting.
> >
> > From the cover letter -
> > Found an issue with the IDPF driver when SWIOTLB is enabled. The issue
> > results in empty headers for packets that hit the split queue workaround
> > path. It's caused by a spurious sync in that path. The header is synced
> > from the SWIOTLB even when the header was shoved into the payload.
> >
> > I cooked up a sample patch, but I'm not an expert in this driver, so I have
> > no idea if it's the right solution. It did allow my QEMU VM to boot with a
> > superficially functional passed-through IDPF NIC and SWIOTLB=force.
> >
> > The patch was written against COS's 6.12, so I assume that it will not
> > apply cleanly elsewhere, but I figured a wrong sample patch was better than
> > a long paragraph describing the same thing. My read of more recent kernels
> > is that this problem is still present, but could be mistaken.
>
> Ooops, sorry, I haven't read the cover letter =\
>
> Did I get it correctly that in case of SWIOTLB, we can't sync the same
> buffer two times? But if the hsplit W/A was applied, then this double
> sync corrupts the data?
>
> I'll prepare a patch for the latest net (with you as Co-developed-by or
> any other tag you prefer) once I find a way how to play this nicely with
> libeth_xdp_process_buff(). It performs an unconditional sync and bails
> out if it returned false.
>
> >
> > Thanks,
> > Steve
>
> Thanks,
> Olek

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-03-04 22:01         ` Steve Rutherford
@ 2026-03-06 14:50           ` Alexander Lobakin
  2026-03-06 19:35             ` Steve Rutherford
  0 siblings, 1 reply; 12+ messages in thread
From: Alexander Lobakin @ 2026-03-06 14:50 UTC (permalink / raw)
  To: Steve Rutherford
  Cc: Tony Nguyen, Przemek Kitszel, David S. Miller, Jakub Kicinski,
	Eric Dumazet, intel-wired-lan, netdev, linux-kernel,
	David Decotigny, Anjali Singhai, Sridhar Samudrala, Brian Vazquez,
	Li Li, emil.s.tantilov

From: Steve Rutherford <srutherford@google.com>
Date: Wed, 4 Mar 2026 14:01:46 -0800

> I believe syncing twice isn't inherently wrong - it's more that you
> can't synthesize the header via the workaround and then sync, since it
> will pull the uninitialized header buffer from the SWIOTLB. Outside of
> SWIOTLB, dma syncs are more or less no-ops, while (with SWIOTLB) they
> are copies from/to the bounce buffers.

Ah I see.

What if I add sync_for_device after copying the header? This should
synchronize the bounce buffer with the copied data, I guess? A bit of
overhead, but this W/A mostly triggers on stuff like ARP/ICMP; "hotpath"
L4 protos are fortunately not affected.

> 
> On Wed, Mar 4, 2026 at 7:13 AM Alexander Lobakin
> <aleksander.lobakin@intel.com> wrote:
>>
>> From: Steve Rutherford <srutherford@google.com>
>> Date: Tue, 3 Mar 2026 11:44:19 -0800
>>
>>> On Tue, Mar 3, 2026 at 7:34 AM Alexander Lobakin
>>> <aleksander.lobakin@intel.com> wrote:
>>>>
>>>> From: Steve Rutherford <srutherford@google.com>
>>>> Date: Fri, 27 Feb 2026 20:34:57 +0000
>>>>
>>>>> When SWIOTLB and header split are enabled, IDPF sees empty packets in the
>>>>> rx queue.
>>>>>
>>>>> This is caused by libeth_rx_sync_for_cpu clobbering the synthesized header
>>>>> in the workaround (i.e. overflow) path. After the header is synthesized by
>>>>> idpf_rx_hsplit_wa, the sync call pulls from the empty SWIOTLB buffer,
>>>>> effectively zeroing out the buffer.
>>>>>
>>>>> This skips the extra sync in the workaround path in most cases. The one
>>>>> exception is that it calls sync to trigger a recycle the header buffer when
>>>>> it fails to find a header in the payload.
>>>>>
>>>>> Fixes: 90912f9f4f2d1 ("idpf: convert header split mode to libeth + napi_build_skb()")
>>>>> Signed-off-by: Steve Rutherford <srutherford@google.com>
Thanks,
Olek

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-03-06 14:50           ` Alexander Lobakin
@ 2026-03-06 19:35             ` Steve Rutherford
  2026-03-12 16:30               ` Alexander Lobakin
  0 siblings, 1 reply; 12+ messages in thread
From: Steve Rutherford @ 2026-03-06 19:35 UTC (permalink / raw)
  To: Alexander Lobakin
  Cc: Tony Nguyen, Przemek Kitszel, David S. Miller, Jakub Kicinski,
	Eric Dumazet, intel-wired-lan, netdev, linux-kernel,
	David Decotigny, Anjali Singhai, Sridhar Samudrala, Brian Vazquez,
	Li Li, emil.s.tantilov

On Fri, Mar 6, 2026 at 6:52 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Steve Rutherford <srutherford@google.com>
> Date: Wed, 4 Mar 2026 14:01:46 -0800
>
> > I believe syncing twice isn't inherently wrong - it's more that you
> > can't synthesize the header via the workaround and then sync, since it
> > will pull the uninitialized header buffer from the SWIOTLB. Outside of
> > SWIOTLB, dma syncs are more or less no-ops, while (with SWIOTLB) they
> > are copies from/to the bounce buffers.
>
> Ah I see.
>
> What if I add sync_for_device after copying the header? This should
> synchronize the bounce buffer with the copied data I guess? A bit of
> overhead, but this W/A triggers mostly on stuff like ARP/ICMP, "hotpath"
> L4 protos are fortunately not affected.

That should work fine as well. I'm not certain I have strong
preferences on the right answer here, other than "does it work and,
ideally, is it less confusing?" The patch I posted is a bit
unintuitive. I think what you are describing might make the workaround
self-contained.

thanks,
Steve
 [And sorry for my gmail-driven top posting crimes D: ]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-03-06 19:35             ` Steve Rutherford
@ 2026-03-12 16:30               ` Alexander Lobakin
  2026-03-23 13:31                 ` Alexander Lobakin
  0 siblings, 1 reply; 12+ messages in thread
From: Alexander Lobakin @ 2026-03-12 16:30 UTC (permalink / raw)
  To: Steve Rutherford
  Cc: Alexander Lobakin, Tony Nguyen, Przemek Kitszel, David S. Miller,
	Jakub Kicinski, Eric Dumazet, intel-wired-lan, netdev,
	linux-kernel, David Decotigny, Anjali Singhai, Sridhar Samudrala,
	Brian Vazquez, Li Li, emil.s.tantilov

Hey,

From: Steve Rutherford via Intel-wired-lan <intel-wired-lan@osuosl.org>
Date: Fri, 6 Mar 2026 11:35:27 -0800

> On Fri, Mar 6, 2026 at 6:52 AM Alexander Lobakin
> <aleksander.lobakin@intel.com> wrote:
> >
> > From: Steve Rutherford <srutherford@google.com>
> > Date: Wed, 4 Mar 2026 14:01:46 -0800
> >
> > > I believe syncing twice isn't inherently wrong - it's more that you
> > > can't synthesize the header via the workaround and then sync, since it
> > > will pull the uninitialized header buffer from the SWIOTLB. Outside of
> > > SWIOTLB, dma syncs are more or less no-ops, while (with SWIOTLB) they
> > > are copies from/to the bounce buffers.
> >
> > Ah I see.
> >
> > What if I add sync_for_device after copying the header? This should
> > synchronize the bounce buffer with the copied data I guess? A bit of
> > overhead, but this W/A triggers mostly on stuff like ARP/ICMP, "hotpath"
> > L4 protos are fortunately not affected.
> 
> That should work fine as well. I'm not certain I have strong
> preferences on the right answer here, other than "does it work and,
> ideally, is it less confusing?" The patch I posted is a bit
> unintuitive. I think what you are describing might make the workaround
> self-contained.

Could you please test this patch with SWIOTLB? If it doesn't fix
the issue, you can try changing `page_pool_get_dma_dir(hdr_pp)`
to `DMA_TO_DEVICE` and/or `DMA_BIDIRECTIONAL`.
Currently, I don't have any machines with SWIOTLB unfortunately =\
Let me know if any of these works. I'll submit it properly when we
have a solution.

(the patch applies cleanly to the latest net-next and should apply
 to a couple older kernel releases as well)

> 
> thanks,
> Steve
>  [And sorry for my gmail-driven top posting crimes D: ]

Thanks,
Olek
---
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 45ee5b80479a..42111d56d66f 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -3475,7 +3475,8 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
 			     struct libeth_fqe *buf, u32 data_len)
 {
 	u32 copy = data_len <= L1_CACHE_BYTES ? data_len : ETH_HLEN;
-	struct page *hdr_page, *buf_page;
+	const struct page_pool *hdr_pp;
+	dma_addr_t hdr_addr;
 	const void *src;
 	void *dst;
 
@@ -3483,16 +3484,20 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
 	    !libeth_rx_sync_for_cpu(buf, copy))
 		return 0;
 
-	hdr_page = __netmem_to_page(hdr->netmem);
-	buf_page = __netmem_to_page(buf->netmem);
-	dst = page_address(hdr_page) + hdr->offset +
-		pp_page_to_nmdesc(hdr_page)->pp->p.offset;
-	src = page_address(buf_page) + buf->offset +
-		pp_page_to_nmdesc(buf_page)->pp->p.offset;
+	hdr_pp = __netmem_get_pp(hdr->netmem);
+	dst = __netmem_address(hdr->netmem) + hdr->offset + hdr_pp->p.offset;
+	src = __netmem_address(buf->netmem) + buf->offset +
+	      __netmem_get_pp(buf->netmem)->p.offset;
 
 	memcpy(dst, src, LARGEST_ALIGN(copy));
 	buf->offset += copy;
 
+	/* Make sure SWIOTLB is synced */
+	hdr_addr = page_pool_get_dma_addr_netmem(hdr->netmem);
+	dma_sync_single_range_for_device(hdr_pp->p.dev, hdr_addr,
+					 hdr->offset + hdr_pp->p.offset,
+					 copy, page_pool_get_dma_dir(hdr_pp));
+
 	return copy;
 }
 

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-03-12 16:30               ` Alexander Lobakin
@ 2026-03-23 13:31                 ` Alexander Lobakin
  2026-03-25  0:44                   ` Steve Rutherford
  0 siblings, 1 reply; 12+ messages in thread
From: Alexander Lobakin @ 2026-03-23 13:31 UTC (permalink / raw)
  To: Steve Rutherford
  Cc: Tony Nguyen, Przemek Kitszel, David S. Miller, Jakub Kicinski,
	Eric Dumazet, intel-wired-lan, netdev, linux-kernel,
	David Decotigny, Anjali Singhai, Sridhar Samudrala, Brian Vazquez,
	Li Li, emil.s.tantilov

From: Alexander Lobakin <aleksander.lobakin@intel.com>
Date: Thu, 12 Mar 2026 17:30:24 +0100

> Hey,
> 
> From: Steve Rutherford via Intel-wired-lan <intel-wired-lan@osuosl.org>
> Date: Fri, 6 Mar 2026 11:35:27 -0800
> 
>> On Fri, Mar 6, 2026 at 6:52 AM Alexander Lobakin
>> <aleksander.lobakin@intel.com> wrote:
>>>
>>> From: Steve Rutherford <srutherford@google.com>
>>> Date: Wed, 4 Mar 2026 14:01:46 -0800
>>>
>>>> I believe syncing twice isn't inherently wrong - it's more that you
>>>> can't synthesize the header via the workaround and then sync, since it
>>>> will pull the uninitialized header buffer from the SWIOTLB. Outside of
>>>> SWIOTLB, dma syncs are more or less no-ops, while (with SWIOTLB) they
>>>> are copies from/to the bounce buffers.
>>>
>>> Ah I see.
>>>
>>> What if I add sync_for_device after copying the header? This should
>>> synchronize the bounce buffer with the copied data I guess? A bit of
>>> overhead, but this W/A triggers mostly on stuff like ARP/ICMP, "hotpath"
>>> L4 protos are fortunately not affected.
>>
>> That should work fine as well. I'm not certain I have strong
>> preferences on the right answer here, other than "does it work and,
>> ideally, is it less confusing?" The patch I posted is a bit
>> unintuitive. I think what you are describing might make the workaround
>> self-contained.
> 
> Could you please test this patch with SWIOTLB? If it doesn't fix
> the issue, you can try changing `page_pool_get_dma_dir(hdr_pp)`
> to `DMA_TO_DEVICE` and/or `DMA_BIDIRECTIONAL`.
> Currently, I don't have any machines with SWIOTLB unfortunately =\
> Let me know if any of these works. I'll submit it properly when we
> have a solution.

Any updates? I need your Tested-by in order to send this.

> 
> (the patch applies cleanly to the latest net-next and should apply
>  to a couple older kernel releases as well)
> 
>>
>> thanks,
>> Steve
>>  [And sorry for my gmail-driven top posting crimes D: ]
> 
> Thanks,
> Olek
> ---
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 45ee5b80479a..42111d56d66f 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -3475,7 +3475,8 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
>  			     struct libeth_fqe *buf, u32 data_len)
>  {
>  	u32 copy = data_len <= L1_CACHE_BYTES ? data_len : ETH_HLEN;
> -	struct page *hdr_page, *buf_page;
> +	const struct page_pool *hdr_pp;
> +	dma_addr_t hdr_addr;
>  	const void *src;
>  	void *dst;
>  
> @@ -3483,16 +3484,20 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
>  	    !libeth_rx_sync_for_cpu(buf, copy))
>  		return 0;
>  
> -	hdr_page = __netmem_to_page(hdr->netmem);
> -	buf_page = __netmem_to_page(buf->netmem);
> -	dst = page_address(hdr_page) + hdr->offset +
> -		pp_page_to_nmdesc(hdr_page)->pp->p.offset;
> -	src = page_address(buf_page) + buf->offset +
> -		pp_page_to_nmdesc(buf_page)->pp->p.offset;
> +	hdr_pp = __netmem_get_pp(hdr->netmem);
> +	dst = __netmem_address(hdr->netmem) + hdr->offset + hdr_pp->p.offset;
> +	src = __netmem_address(buf->netmem) + buf->offset +
> +	      __netmem_get_pp(buf->netmem)->p.offset;
>  
>  	memcpy(dst, src, LARGEST_ALIGN(copy));
>  	buf->offset += copy;
>  
> +	/* Make sure SWIOTLB is synced */
> +	hdr_addr = page_pool_get_dma_addr_netmem(hdr->netmem);
> +	dma_sync_single_range_for_device(hdr_pp->p.dev, hdr_addr,
> +					 hdr->offset + hdr_pp->p.offset,
> +					 copy, page_pool_get_dma_dir(hdr_pp));
> +
>  	return copy;
>  }

Thanks,
Olek


* Re: [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled
  2026-03-23 13:31                 ` Alexander Lobakin
@ 2026-03-25  0:44                   ` Steve Rutherford
  0 siblings, 0 replies; 12+ messages in thread
From: Steve Rutherford @ 2026-03-25  0:44 UTC (permalink / raw)
  To: Alexander Lobakin
  Cc: Tony Nguyen, Przemek Kitszel, David S. Miller, Jakub Kicinski,
	Eric Dumazet, intel-wired-lan, netdev, linux-kernel,
	David Decotigny, Anjali Singhai, Sridhar Samudrala, Brian Vazquez,
	Li Li, emil.s.tantilov

On Mon, Mar 23, 2026 at 6:33 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Alexander Lobakin <aleksander.lobakin@intel.com>
> Date: Thu, 12 Mar 2026 17:30:24 +0100
>
> > Hey,
> >
> > From: Steve Rutherford via Intel-wired-lan <intel-wired-lan@osuosl.org>
> > Date: Fri, 6 Mar 2026 11:35:27 -0800
> >
> >> On Fri, Mar 6, 2026 at 6:52 AM Alexander Lobakin
> >> <aleksander.lobakin@intel.com> wrote:
> >>>
> >>> From: Steve Rutherford <srutherford@google.com>
> >>> Date: Wed, 4 Mar 2026 14:01:46 -0800
> >>>
> >>>> I believe syncing twice isn't inherently wrong - it's more that you
> >>>> can't synthesize the header via the workaround and then sync, since it
> >>>> will pull the uninitialized header buffer from the SWIOTLB. Outside of
> >>>> SWIOTLB, dma syncs are more or less no-ops, while (with SWIOTLB) they
> >>>> are copies from/to the bounce buffers.
> >>>
> >>> Ah I see.
> >>>
> >>> What if I add sync_for_device after copying the header? This should
> >>> synchronize the bounce buffer with the copied data I guess? A bit of
> >>> overhead, but this W/A triggers mostly on stuff like ARP/ICMP, "hotpath"
> >>> L4 protos are fortunately not affected.
> >>
> >> That should work fine as well. I'm not certain I have strong
> >> preferences on the right answer here, other than "does it work and,
> >> ideally, is it less confusing?" The patch I posted is a bit
> >> unintuitive. I think what you are describing might make the workaround
> >> self-contained.
> >
> > Could you please test this patch with SWIOTLB? If it doesn't fix
> > the issue, you can try changing `page_pool_get_dma_dir(hdr_pp)`
> > to `DMA_TO_DEVICE` and/or `DMA_BIDIRECTIONAL`.
> > Currently, I don't have any machines with SWIOTLB unfortunately =\
> > Let me know if any of these works. I'll submit it properly when we
> > have a solution.
>
> Any updates? I need your Tested-by in order to send this.

Sorry for the delay, tried to reproduce this against a 6.18 kernel and
ran into environment-specific issues with 6.18. I'll take another stab
sometime this week.

thanks,
Steve
>
> >
> > (the patch applies cleanly to the latest net-next and should apply
> >  to a couple older kernel releases as well)
> >
> >>
> >> thanks,
> >> Steve
> >>  [And sorry for my gmail-driven top posting crimes D: ]
> >
> > Thanks,
> > Olek
> > ---
> > diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > index 45ee5b80479a..42111d56d66f 100644
> > --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > @@ -3475,7 +3475,8 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
> >                            struct libeth_fqe *buf, u32 data_len)
> >  {
> >       u32 copy = data_len <= L1_CACHE_BYTES ? data_len : ETH_HLEN;
> > -     struct page *hdr_page, *buf_page;
> > +     const struct page_pool *hdr_pp;
> > +     dma_addr_t hdr_addr;
> >       const void *src;
> >       void *dst;
> >
> > @@ -3483,16 +3484,20 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
> >           !libeth_rx_sync_for_cpu(buf, copy))
> >               return 0;
> >
> > -     hdr_page = __netmem_to_page(hdr->netmem);
> > -     buf_page = __netmem_to_page(buf->netmem);
> > -     dst = page_address(hdr_page) + hdr->offset +
> > -             pp_page_to_nmdesc(hdr_page)->pp->p.offset;
> > -     src = page_address(buf_page) + buf->offset +
> > -             pp_page_to_nmdesc(buf_page)->pp->p.offset;
> > +     hdr_pp = __netmem_get_pp(hdr->netmem);
> > +     dst = __netmem_address(hdr->netmem) + hdr->offset + hdr_pp->p.offset;
> > +     src = __netmem_address(buf->netmem) + buf->offset +
> > +           __netmem_get_pp(buf->netmem)->p.offset;
> >
> >       memcpy(dst, src, LARGEST_ALIGN(copy));
> >       buf->offset += copy;
> >
> > +     /* Make sure SWIOTLB is synced */
> > +     hdr_addr = page_pool_get_dma_addr_netmem(hdr->netmem);
> > +     dma_sync_single_range_for_device(hdr_pp->p.dev, hdr_addr,
> > +                                      hdr->offset + hdr_pp->p.offset,
> > +                                      copy, page_pool_get_dma_dir(hdr_pp));
> > +
> >       return copy;
> >  }
>
> Thanks,
> Olek



Thread overview: 12+ messages
2026-02-27 20:34 [RFC PATCHv2 0/1] idpf: IDPF + SWIOTLB Bug Steve Rutherford
2026-02-27 20:34 ` [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled Steve Rutherford
2026-03-02  7:17   ` [Intel-wired-lan] " Loktionov, Aleksandr
2026-03-03 15:31   ` Alexander Lobakin
2026-03-03 19:44     ` Steve Rutherford
2026-03-04 15:11       ` Alexander Lobakin
2026-03-04 22:01         ` Steve Rutherford
2026-03-06 14:50           ` Alexander Lobakin
2026-03-06 19:35             ` Steve Rutherford
2026-03-12 16:30               ` Alexander Lobakin
2026-03-23 13:31                 ` Alexander Lobakin
2026-03-25  0:44                   ` Steve Rutherford
