DPDK-dev Archive on lore.kernel.org
* Re: [PATCH v5 1/1] examples/l2fwd-jobstats: fix lock availability
       [not found] ` <20240811155957.576645-1-rkudurumalla@marvell.com>
@ 2025-05-24 15:39   ` Stephen Hemminger
  2026-01-14  6:24   ` Stephen Hemminger
  2026-03-02  6:18   ` [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer stats display with lock contention rkudurumalla
  2 siblings, 0 replies; 8+ messages in thread
From: Stephen Hemminger @ 2025-05-24 15:39 UTC (permalink / raw)
  To: Rakesh Kudurumalla
  Cc: ferruh.yigit, andrew.rybchenko, orika, thomas, dev, jerinj,
	ndabilpuram, stable

On Sun, 11 Aug 2024 21:29:57 +0530
Rakesh Kudurumalla <rkudurumalla@marvell.com> wrote:

> Race condition between jobstats and time metrics
> for forwarding and flushing is maintained using spinlock.
> Timer metrics are not displayed properly due to the
> frequent unavailability of the lock.This patch fixes
> the issue by introducing a delay before acquiring
> the lock in the loop. This delay allows for betteravailability
> of the lock, ensuring that show_lcore_stats() can
> periodically update the statistics even when forwarding
> jobs are running.
> 
> Fixes: 204896f8d66c ("examples/l2fwd-jobstats: add new example")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>

The original code is a mess. The whole idle-job loop is the problem.
It should use rte_timer_next_ticks() to find out when the next timer
is about to expire and sleep until then.

And instead of fighting over a spin lock, use a ticket lock, which
enforces in-order (FIFO) waiting.

Something like the following (untested):

diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index 308b8edd20..9586d90ab6 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -27,7 +27,7 @@
 #include <rte_ethdev.h>
 #include <rte_mempool.h>
 #include <rte_mbuf.h>
-#include <rte_spinlock.h>
+#include <rte_ticketlock.h>
 
 #include <rte_errno.h>
 #include <rte_jobstats.h>
@@ -80,8 +80,7 @@ struct __rte_cache_aligned lcore_queue_conf {
 	struct rte_jobstats idle_job;
 	struct rte_jobstats_context jobs_context;
 
-	RTE_ATOMIC(uint16_t) stats_read_pending;
-	rte_spinlock_t lock;
+	rte_ticketlock_t lock;
 };
 /* >8 End of list of queues to be polled for given lcore. */
 struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
@@ -151,9 +150,7 @@ show_lcore_stats(unsigned lcore_id)
 	uint64_t collection_time = rte_get_timer_cycles();
 
 	/* Ask forwarding thread to give us stats. */
-	rte_atomic_store_explicit(&qconf->stats_read_pending, 1, rte_memory_order_relaxed);
-	rte_spinlock_lock(&qconf->lock);
-	rte_atomic_store_explicit(&qconf->stats_read_pending, 0, rte_memory_order_relaxed);
+	rte_ticketlock_lock(&qconf->lock);
 
 	/* Collect context statistics. */
 	stats_period = ctx->state_time - ctx->start_time;
@@ -195,7 +192,7 @@ show_lcore_stats(unsigned lcore_id)
 	idle_exec_max = qconf->idle_job.max_exec_time;
 	rte_jobstats_reset(&qconf->idle_job);
 
-	rte_spinlock_unlock(&qconf->lock);
+	rte_ticketlock_unlock(&qconf->lock);
 
 	exec -= idle_exec;
 	busy = exec + management;
@@ -478,11 +475,11 @@ l2fwd_main_loop(void)
 	unsigned lcore_id;
 	unsigned i, portid;
 	struct lcore_queue_conf *qconf;
-	uint8_t stats_read_pending = 0;
-	uint8_t need_manage;
+	uint64_t hz;
 
 	lcore_id = rte_lcore_id();
 	qconf = &lcore_queue_conf[lcore_id];
+	hz = rte_get_timer_hz();
 
 	if (qconf->n_rx_port == 0) {
 		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
@@ -502,47 +499,22 @@ l2fwd_main_loop(void)
 
 	/* Minimize impact of stats reading. 8< */
 	for (;;) {
-		rte_spinlock_lock(&qconf->lock);
-
-		do {
-			rte_jobstats_context_start(&qconf->jobs_context);
-
-			/* Do the Idle job:
-			 * - Read stats_read_pending flag
-			 * - check if some real job need to be executed
-			 */
-			rte_jobstats_start(&qconf->jobs_context, &qconf->idle_job);
-
-			uint64_t repeats = 0;
-
-			do {
-				uint8_t i;
-				uint64_t now = rte_get_timer_cycles();
-
-				repeats++;
-				need_manage = qconf->flush_timer.expire < now;
-				/* Check if we was esked to give a stats. */
-				stats_read_pending = rte_atomic_load_explicit(
-					&qconf->stats_read_pending,
-					rte_memory_order_relaxed);
-				need_manage |= stats_read_pending;
 
-				for (i = 0; i < qconf->n_rx_port && !need_manage; i++)
-					need_manage = qconf->rx_timers[i].expire < now;
+		rte_ticketlock_lock(&qconf->lock);
 
-			} while (!need_manage);
+		rte_jobstats_context_start(&qconf->jobs_context);
+		rte_jobstats_start(&qconf->jobs_context, &qconf->idle_job);
+		rte_timer_manage();
+		rte_jobstats_context_finish(&qconf->jobs_context);
 
-			if (likely(repeats != 1))
-				rte_jobstats_finish(&qconf->idle_job, qconf->idle_job.target);
-			else
-				rte_jobstats_abort(&qconf->idle_job);
+		int64_t next_ticks = rte_timer_next_ticks();
 
-			rte_timer_manage();
-			rte_jobstats_context_finish(&qconf->jobs_context);
-		} while (likely(stats_read_pending == 0));
+		rte_ticketlock_unlock(&qconf->lock);
 
-		rte_spinlock_unlock(&qconf->lock);
-		rte_pause();
+		if (next_ticks > 0)
+			rte_delay_us((1000000 * next_ticks) / hz);
+		else
+			rte_pause();
 	}
 	/* >8 End of minimize impact of stats reading. */
 }
@@ -972,7 +944,7 @@ main(int argc, char **argv)
 	RTE_LCORE_FOREACH(lcore_id) {
 		qconf = &lcore_queue_conf[lcore_id];
 
-		rte_spinlock_init(&qconf->lock);
+		rte_ticketlock_init(&qconf->lock);
 
 		if (rte_jobstats_context_init(&qconf->jobs_context) != 0)
 			rte_panic("Jobs stats context for core %u init failed\n", lcore_id);


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v5 1/1] examples/l2fwd-jobstats: fix lock availability
       [not found] ` <20240811155957.576645-1-rkudurumalla@marvell.com>
  2025-05-24 15:39   ` [PATCH v5 1/1] examples/l2fwd-jobstats: fix lock availability Stephen Hemminger
@ 2026-01-14  6:24   ` Stephen Hemminger
  2026-03-02  6:18   ` [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer stats display with lock contention rkudurumalla
  2 siblings, 0 replies; 8+ messages in thread
From: Stephen Hemminger @ 2026-01-14  6:24 UTC (permalink / raw)
  To: Rakesh Kudurumalla
  Cc: ferruh.yigit, andrew.rybchenko, orika, thomas, dev, jerinj,
	ndabilpuram, stable

On Sun, 11 Aug 2024 21:29:57 +0530
Rakesh Kudurumalla <rkudurumalla@marvell.com> wrote:

> Race condition between jobstats and time metrics
> for forwarding and flushing is maintained using spinlock.
> Timer metrics are not displayed properly due to the
> frequent unavailability of the lock.This patch fixes
> the issue by introducing a delay before acquiring
> the lock in the loop. This delay allows for betteravailability
> of the lock, ensuring that show_lcore_stats() can
> periodically update the statistics even when forwarding
> jobs are running.
> 
> Fixes: 204896f8d66c ("examples/l2fwd-jobstats: add new example")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
> ---

I prodded an AI to review this patch and it had some good observations.
You could go further with rte_wait_until_equal(), which is even better
for power efficiency.

Let me see the critical `l2fwd_main_loop` function to understand the full locking context. Now I have a complete picture. Let me provide my review.

---

## Patch Review: examples/l2fwd-jobstats: fix lock availability

### Commit Message Issues

**Errors:**
- Line 84: Missing space: "betteravailability" → should be "better availability"
- Line 82: Missing space after period: "lock.This" → should be "lock. This"

**Style (minor):** The commit message is a bit verbose for what it's doing.

### Technical Analysis

The patch addresses a real problem. Looking at the code flow:

```c
/* Main loop holds lock almost continuously */
for (;;) {
    rte_spinlock_lock(&qconf->lock);      // Acquire
    
    do {
        /* ... lots of work ... */
        stats_read_pending = rte_atomic_load_explicit(...);
    } while (likely(stats_read_pending == 0));
    
    rte_spinlock_unlock(&qconf->lock);    // Release
    rte_pause();                          // <-- This is too short!
    // Immediately re-acquires lock...
}
```

Meanwhile, `show_lcore_stats()` sets the flag and waits on the lock:
```c
rte_atomic_store_explicit(&qconf->stats_read_pending, 1, ...);
rte_spinlock_lock(&qconf->lock);  // Starved - main loop re-acquires too fast
```

**The fix works**, but it's suboptimal:
- The 10µs delay is an arbitrary magic number
- It delays **unconditionally**, even when no stats reader is waiting
- Wastes cycles when the system is idle

### Better Option: Wait for stats completion

The `stats_read_pending` flag already exists to coordinate this handshake but isn't used fully. The cleaner fix is to make the main loop wait until the stats reader has finished:

```c
rte_spinlock_unlock(&qconf->lock);
/* Wait for stats reader to complete before reacquiring lock */
while (rte_atomic_load_explicit(&qconf->stats_read_pending, 
                                 rte_memory_order_relaxed) != 0)
    rte_pause();
```

This approach:
- Uses the existing synchronization mechanism as intended
- Only yields when stats are actually being read
- Has zero overhead when no stats read is pending
- Doesn't introduce arbitrary magic constants
- Guarantees the stats reader gets the lock (not probabilistic)

### Alternative: Conditional delay

If you want to keep the delay approach, at least make it conditional:

```c
rte_spinlock_unlock(&qconf->lock);
if (stats_read_pending)
    rte_delay_us(10);
else
    rte_pause();
```

This avoids the unconditional 10µs penalty on every iteration.

### Verdict

**NAK as-is.** The fix works but the better solution is to properly use the existing `stats_read_pending` handshake. Would suggest the author revise to use the wait-for-completion pattern instead of an arbitrary delay.


* [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer stats display with lock contention
       [not found] ` <20240811155957.576645-1-rkudurumalla@marvell.com>
  2025-05-24 15:39   ` [PATCH v5 1/1] examples/l2fwd-jobstats: fix lock availability Stephen Hemminger
  2026-01-14  6:24   ` Stephen Hemminger
@ 2026-03-02  6:18   ` rkudurumalla
  2026-03-02 15:48     ` Stephen Hemminger
  2 siblings, 1 reply; 8+ messages in thread
From: rkudurumalla @ 2026-03-02  6:18 UTC (permalink / raw)
  To: Pawel Wodkowski, Pablo de Lara
  Cc: dev, jerinj, ndabilpuram, Rakesh Kudurumalla, stable

From: Rakesh Kudurumalla <rkudurumalla@marvell.com>

Race condition between jobstats and time metrics
for forwarding and flushing is maintained using spinlock.
Timer metrics are not displayed properly due to the
frequent unavailability of the lock.

This patch fixes the issue by introducing a delay before
acquiring the lock in the loop. This delay allows for better
availability of the lock, ensuring that show_lcore_stats() can
periodically update the statistics even when forwarding
jobs are running.

Fixes: 204896f8d66c ("examples/l2fwd-jobstats: add new example")
Cc: stable@dpdk.org

Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
 examples/l2fwd-jobstats/main.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index a7cd5b4840..036bd4f951 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -541,7 +541,9 @@ l2fwd_main_loop(void)
 		} while (likely(stats_read_pending == 0));
 
 		rte_spinlock_unlock(&qconf->lock);
-		rte_pause();
+		while (rte_atomic_load_explicit(&qconf->stats_read_pending,
+					rte_memory_order_relaxed) != 0)
+			rte_pause();
 	}
 	/* >8 End of minimize impact of stats reading. */
 }
-- 
2.25.1



* Re: [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer stats display with lock contention
  2026-03-02  6:18   ` [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer stats display with lock contention rkudurumalla
@ 2026-03-02 15:48     ` Stephen Hemminger
  2026-03-16 13:36       ` [EXTERNAL] " Rakesh Kudurumalla
  0 siblings, 1 reply; 8+ messages in thread
From: Stephen Hemminger @ 2026-03-02 15:48 UTC (permalink / raw)
  To: rkudurumalla
  Cc: Pawel Wodkowski, Pablo de Lara, dev, jerinj, ndabilpuram, stable

On Mon, 2 Mar 2026 11:48:05 +0530
rkudurumalla <rkudurumalla@marvell.com> wrote:

> From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
> 
> Race condition between jobstats and time metrics
> for forwarding and flushing is maintained using spinlock.
> Timer metrics are not displayed properly due to the
> frequent unavailability of the lock.
> 
> This patch fixes the issue by introducing a delay before
> acquiring the lock in the loop. This delay allows for better
> availability of the lock, ensuring that show_lcore_stats() can
> periodically update the statistics even when forwarding
> jobs are running.
> 
> Fixes: 204896f8d66c ("examples/l2fwd-jobstats: add new example")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>

This does fix the problem, but this whole application seems like
it is using an unusual architecture. The main loop per thread
spends most of its time updating job stats and only periodically
calls the actual rx/tx burst forwarding. That is upside down??


* RE: [EXTERNAL] Re: [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer stats display with lock contention
  2026-03-02 15:48     ` Stephen Hemminger
@ 2026-03-16 13:36       ` Rakesh Kudurumalla
  2026-03-16 15:20         ` Stephen Hemminger
  2026-03-17 16:40         ` Thomas Monjalon
  0 siblings, 2 replies; 8+ messages in thread
From: Rakesh Kudurumalla @ 2026-03-16 13:36 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Pawel Wodkowski, Pablo de Lara, dev@dpdk.org, Jerin Jacob,
	Nithin Kumar Dabilpuram, stable@dpdk.org

Hi @Stephen Hemminger,

Yes, you're correct that the architecture looks unusual. But since the primary goal here is to stress-test the stats/timer subsystem under contention rather than packet forwarding, I believe the current structure serves its purpose well.

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Monday, March 2, 2026 9:18 PM
> To: Rakesh Kudurumalla <rkudurumalla@marvell.com>
> Cc: Pawel Wodkowski <pawelwod@gmail.com>; Pablo de Lara
> <pablo.de.lara.guarch@intel.com>; dev@dpdk.org; Jerin Jacob
> <jerinj@marvell.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>; stable@dpdk.org
> Subject: [EXTERNAL] Re: [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer
> stats display with lock contention
> 
> On Mon, 2 Mar 2026 11:48:05 +0530
> rkudurumalla <rkudurumalla@marvell.com> wrote:
> 
> > From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
> >
> > Race condition between jobstats and time metrics for forwarding and
> > flushing is maintained using spinlock.
> > Timer metrics are not displayed properly due to the frequent
> > unavailability of the lock.
> >
> > This patch fixes the issue by introducing a delay before acquiring the
> > lock in the loop. This delay allows for better availability of the
> > lock, ensuring that show_lcore_stats() can periodically update the
> > statistics even when forwarding jobs are running.
> >
> > Fixes: 204896f8d66c ("examples/l2fwd-jobstats: add new example")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
> 
> This does fix the problem, but this whole application seems like it is using an
> unusual architecture. The main loop per thread spends most of its time
> updating job stats and only periodically calls the actual rx/tx burst forwarding.
> That is upside down??


* Re: [EXTERNAL] Re: [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer stats display with lock contention
  2026-03-16 13:36       ` [EXTERNAL] " Rakesh Kudurumalla
@ 2026-03-16 15:20         ` Stephen Hemminger
  2026-03-17 16:39           ` Thomas Monjalon
  2026-03-17 16:40         ` Thomas Monjalon
  1 sibling, 1 reply; 8+ messages in thread
From: Stephen Hemminger @ 2026-03-16 15:20 UTC (permalink / raw)
  To: Rakesh Kudurumalla
  Cc: Pawel Wodkowski, Pablo de Lara, dev@dpdk.org, Jerin Jacob,
	Nithin Kumar Dabilpuram, stable@dpdk.org

On Mon, 16 Mar 2026 13:36:30 +0000
Rakesh Kudurumalla <rkudurumalla@marvell.com> wrote:

> Hi @Stephen Hemminger,
> 
> Yes, you're correct that the architecture looks unusual. But since the primary goal here is to stress-test the stats/timer subsystem under contention rather than packet forwarding, I believe the current structure serves its purpose well.


But it also exposes lots of issues. I would argue that if it is a stress test it should be in tests/
not examples. Examples are intended as guidance for how users should use the API.

This is not the example that we should guide users towards.


* Re: [EXTERNAL] Re: [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer stats display with lock contention
  2026-03-16 15:20         ` Stephen Hemminger
@ 2026-03-17 16:39           ` Thomas Monjalon
  0 siblings, 0 replies; 8+ messages in thread
From: Thomas Monjalon @ 2026-03-17 16:39 UTC (permalink / raw)
  To: Rakesh Kudurumalla, Stephen Hemminger
  Cc: Pawel Wodkowski, Pablo de Lara, dev@dpdk.org, Jerin Jacob,
	Nithin Kumar Dabilpuram, techboard

16/03/2026 16:20, Stephen Hemminger:
> On Mon, 16 Mar 2026 13:36:30 +0000
> Rakesh Kudurumalla <rkudurumalla@marvell.com> wrote:
> 
> > Hi @Stephen Hemminger,
> > 
> > Yes, you're correct that the architecture looks unusual. But since the primary goal here is to stress-test the stats/timer subsystem under contention rather than packet forwarding, I believe the current structure serves its purpose well.
> 
> 
> But it also exposes lots of issues. I would argue that if it is a stress test it should be in tests/
> not examples. Examples are intended as guidance for how users should use the API.
> 
> This is not the example that we should guide users towards.

I agree.
It could be discussed in the techboard.





* Re: [EXTERNAL] Re: [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer stats display with lock contention
  2026-03-16 13:36       ` [EXTERNAL] " Rakesh Kudurumalla
  2026-03-16 15:20         ` Stephen Hemminger
@ 2026-03-17 16:40         ` Thomas Monjalon
  1 sibling, 0 replies; 8+ messages in thread
From: Thomas Monjalon @ 2026-03-17 16:40 UTC (permalink / raw)
  To: Rakesh Kudurumalla
  Cc: Stephen Hemminger, dev, Pawel Wodkowski, Pablo de Lara,
	dev@dpdk.org, Jerin Jacob, Nithin Kumar Dabilpuram,
	stable@dpdk.org

16/03/2026 14:36, Rakesh Kudurumalla:
> Hi @Stephen Hemminger,
> 
> Yes, you're correct that the architecture looks unusual. But since the primary goal here is to stress-test the stats/timer subsystem under contention rather than packet forwarding, I believe the current structure serves its purpose well.
> 
> From: Stephen Hemminger <stephen@networkplumber.org>
> > > From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
> > >
> > > Race condition between jobstats and time metrics for forwarding and
> > > flushing is maintained using spinlock.
> > > Timer metrics are not displayed properly due to the frequent
> > > unavailability of the lock.
> > >
> > > This patch fixes the issue by introducing a delay before acquiring the
> > > lock in the loop. This delay allows for better availability of the
> > > lock, ensuring that show_lcore_stats() can periodically update the
> > > statistics even when forwarding jobs are running.
> > >
> > > Fixes: 204896f8d66c ("examples/l2fwd-jobstats: add new example")
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
> > 
> > This does fix the problem, but this whole application seems like it is using an
> > unusual architecture. The main loop per thread spends most of its time
> > updating job stats and only periodically calls the actual rx/tx burst forwarding.
> > That is upside down??

We can discuss the future of this example.
For now, the fix is applied, thanks.





end of thread, other threads:[~2026-03-17 16:40 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
     [not found] <20240811065819.575759-1-rkudurumalla@marvell.com>
     [not found] ` <20240811155957.576645-1-rkudurumalla@marvell.com>
2025-05-24 15:39   ` [PATCH v5 1/1] examples/l2fwd-jobstats: fix lock availability Stephen Hemminger
2026-01-14  6:24   ` Stephen Hemminger
2026-03-02  6:18   ` [PATCH v6 1/1] examples/l2fwd-jobstats: fix timer stats display with lock contention rkudurumalla
2026-03-02 15:48     ` Stephen Hemminger
2026-03-16 13:36       ` [EXTERNAL] " Rakesh Kudurumalla
2026-03-16 15:20         ` Stephen Hemminger
2026-03-17 16:39           ` Thomas Monjalon
2026-03-17 16:40         ` Thomas Monjalon
