linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] vmstat bug fixes for nohz_full CPUs
@ 2023-05-30 14:52 Marcelo Tosatti
  2023-05-30 14:52 ` [PATCH 1/4] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Marcelo Tosatti @ 2023-05-30 14:52 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
	linux-mm, Vlastimil Babka, Michal Hocko

This patch series addresses the following two problems:

1. A customer provided evidence indicating that a process
   was stalled in direct reclaim:

 - The process was trapped in throttle_direct_reclaim().
   The function wait_event_killable() was called to wait for the
   condition allow_direct_reclaim(pgdat) to become true for the
   current node. allow_direct_reclaim(pgdat) examined the number
   of free pages on the node via zone_page_state(), which just
   returns the value in zone->vm_stat[NR_FREE_PAGES].

 - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
   However, the freelist on this node was not empty.

 - This inconsistency in the vmstat value was caused by the per-CPU
   vmstat counters on nohz_full cpus. Every increment/decrement of a
   vmstat counter is performed on the per-CPU counter first, and the
   pooled diffs are then folded into the zone's vmstat counter in a
   timely manner. However, on nohz_full cpus (48 of the 52 cpus on
   this customer's system) these pooled diffs were no longer folded
   in once a cpu had no more events and went to sleep indefinitely.
   I checked the per-CPU vmstat counters and found a total of 69
   counts not yet folded into the zone's vmstat counter.

 - In this situation, kswapd did not help the trapped process.
   In pgdat_balanced(), zone_watermark_ok_safe() examined the number
   of free pages on the node via zone_page_state_snapshot(), which
   also accounts for the pending per-CPU vmstat counts.
   Therefore kswapd correctly saw that there were 69 free pages.
   Since zone->_watermark = {8, 20, 32}, kswapd did not run because
   69 was above the high watermark of 32.

 2. With a task that busy-loops on a given CPU,
    the kworker interruption to execute vmstat_update
    is undesirable and may exceed latency thresholds
    for certain applications.

The first issue is solved by using the _snapshot version of
the counters in allow_direct_reclaim().
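
For reference, the relevant difference between the two accessors is
roughly the following (a paraphrased sketch of include/linux/vmstat.h,
not the exact kernel source; the per-CPU field names are approximate):

static inline unsigned long zone_page_state(struct zone *zone,
					    enum zone_stat_item item)
{
	/* Global counter only; pending per-CPU diffs are not included. */
	long x = atomic_long_read(&zone->vm_stat[item]);

	return x < 0 ? 0 : x;
}

static inline unsigned long zone_page_state_snapshot(struct zone *zone,
						     enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
	int cpu;

	/* Also fold in the not-yet-propagated per-CPU diffs. */
	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];

	return x < 0 ? 0 : x;
}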

The second issue is fixed by disabling periodic vmstat
updates for nohz_full CPUs.

Thanks to Aaron Tomlin for diagnosing issue 1 and writing
the initial patch series.

Performance details for the kworker interruption:

oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
 
The example above shows an additional 7us for the

        oslat -> kworker -> oslat

switches. In the case of a virtualized CPU, and the vmstat_update
interruption in the host (of a qemu-kvm vcpu), the latency penalty
observed in the guest is higher than 50us, violating the acceptable
latency threshold for certain applications.

 include/linux/workqueue.h |    1 +
 kernel/workqueue.c        |   50 ++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/vmscan.c               |    2 +-
 mm/vmstat.c               |   11 +++++++++++
 4 files changed, 63 insertions(+), 1 deletion(-)






^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/4] vmstat: allow_direct_reclaim should use zone_page_state_snapshot
  2023-05-30 14:52 [PATCH 0/4] vmstat bug fixes for nohz_full CPUs Marcelo Tosatti
@ 2023-05-30 14:52 ` Marcelo Tosatti
  2023-06-02 10:53   ` Michal Hocko
  2023-05-30 14:52 ` [PATCH 2/4] vmstat: skip periodic vmstat update for nohz full CPUs Marcelo Tosatti
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 14+ messages in thread
From: Marcelo Tosatti @ 2023-05-30 14:52 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
	linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti

A customer provided evidence indicating that a process
was stalled in direct reclaim:

 - The process was trapped in throttle_direct_reclaim().
   The function wait_event_killable() was called to wait for the
   condition allow_direct_reclaim(pgdat) to become true for the
   current node. allow_direct_reclaim(pgdat) examined the number
   of free pages on the node via zone_page_state(), which just
   returns the value in zone->vm_stat[NR_FREE_PAGES].

 - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
   However, the freelist on this node was not empty.

 - This inconsistency in the vmstat value was caused by the per-CPU
   vmstat counters on nohz_full cpus. Every increment/decrement of a
   vmstat counter is performed on the per-CPU counter first, and the
   pooled diffs are then folded into the zone's vmstat counter in a
   timely manner. However, on nohz_full cpus (48 of the 52 cpus on
   this customer's system) these pooled diffs were no longer folded
   in once a cpu had no more events and went to sleep indefinitely.
   I checked the per-CPU vmstat counters and found a total of 69
   counts not yet folded into the zone's vmstat counter.

 - In this situation, kswapd did not help the trapped process.
   In pgdat_balanced(), zone_watermark_ok_safe() examined the number
   of free pages on the node via zone_page_state_snapshot(), which
   also accounts for the pending per-CPU vmstat counts.
   Therefore kswapd correctly saw that there were 69 free pages.
   Since zone->_watermark = {8, 20, 32}, kswapd did not run because
   69 was above the high watermark of 32.

Change allow_direct_reclaim to use zone_page_state_snapshot, which
provides a more precise view of the vmstat counters.

allow_direct_reclaim is only called from try_to_free_pages,
which is not a hot path.

Testing: Due to difficulties accessing the system, it has not been
possible to validate the patch against the original reproducer
(however it is clear from the available data and analysis that it
should fix the problem).

Reviewed-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Aaron Tomlin <atomlin@atomlin.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---

Index: linux-vmstat-remote/mm/vmscan.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmscan.c
+++ linux-vmstat-remote/mm/vmscan.c
@@ -6887,7 +6887,7 @@ static bool allow_direct_reclaim(pg_data
 			continue;
 
 		pfmemalloc_reserve += min_wmark_pages(zone);
-		free_pages += zone_page_state(zone, NR_FREE_PAGES);
+		free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
 	}
 
 	/* If there are no reserves (unexpected config) then do not throttle */



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 2/4] vmstat: skip periodic vmstat update for nohz full CPUs
  2023-05-30 14:52 [PATCH 0/4] vmstat bug fixes for nohz_full CPUs Marcelo Tosatti
  2023-05-30 14:52 ` [PATCH 1/4] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
@ 2023-05-30 14:52 ` Marcelo Tosatti
  2023-06-02 10:39   ` Michal Hocko
  2023-05-30 14:52 ` [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper Marcelo Tosatti
  2023-05-30 14:52 ` [PATCH 4/4] mm/vmstat: do not refresh stats for nohz_full CPUs Marcelo Tosatti
  3 siblings, 1 reply; 14+ messages in thread
From: Marcelo Tosatti @ 2023-05-30 14:52 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
	linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti

The interruption caused by vmstat_update is undesirable
for certain applications:

oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...

The example above shows an additional 7us for the

       	oslat -> kworker -> oslat

switches. In the case of a virtualized CPU, and the vmstat_update  
interruption in the host (of a qemu-kvm vcpu), the latency penalty
observed in the guest is higher than 50us, violating the acceptable
latency threshold.

Skip periodic updates for nohz_full CPUs. Callers that need
precise values should use a snapshot of the per-CPU counters,
or use the global counters with measures in place to handle
errors up to thresholds (see calculate_normal_threshold).
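
(For context, a CPU is nohz_full when it is listed in the nohz_full=
boot parameter. The CPU list below is purely illustrative, not the
customer's actual configuration:)

	nohz_full=4-51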

Suggested by Michal Hocko.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -28,6 +28,7 @@
 #include <linux/mm_inline.h>
 #include <linux/page_ext.h>
 #include <linux/page_owner.h>
+#include <linux/tick.h>
 
 #include "internal.h"
 
@@ -2022,6 +2023,16 @@ static void vmstat_shepherd(struct work_
 	for_each_online_cpu(cpu) {
 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
 
+		/*
+		 * Skip periodic updates for nohz full CPUs.
+		 * Any callers who need precise values should use
+		 * a snapshot of the per-CPU counters, or use the global
+		 * counters with measures to handle errors up to
+		 * thresholds (see calculate_normal_threshold).
+		 */
+		if (tick_nohz_full_cpu(cpu))
+			continue;
+
 		if (!delayed_work_pending(dw) && need_update(cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
 



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper
  2023-05-30 14:52 [PATCH 0/4] vmstat bug fixes for nohz_full CPUs Marcelo Tosatti
  2023-05-30 14:52 ` [PATCH 1/4] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
  2023-05-30 14:52 ` [PATCH 2/4] vmstat: skip periodic vmstat update for nohz full CPUs Marcelo Tosatti
@ 2023-05-30 14:52 ` Marcelo Tosatti
  2023-05-30 20:09   ` Andrew Morton
  2023-06-02 10:48   ` Michal Hocko
  2023-05-30 14:52 ` [PATCH 4/4] mm/vmstat: do not refresh stats for nohz_full CPUs Marcelo Tosatti
  3 siblings, 2 replies; 14+ messages in thread
From: Marcelo Tosatti @ 2023-05-30 14:52 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
	linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti

Add a schedule_on_each_cpumask function, equivalent to
schedule_on_each_cpu but accepting a cpumask to operate on.
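
A usage sketch (this mirrors how patch 4 of this series ends up using
the helper; error checking of the return value and the
CONFIG_NO_HZ_FULL=n case are omitted):

	cpumask_var_t mask;
	int err;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	/* Run refresh_vm_stats on every possible CPU except nohz_full ones. */
	cpumask_andnot(mask, cpu_possible_mask, tick_nohz_full_mask);
	err = schedule_on_each_cpumask(refresh_vm_stats, mask);
	free_cpumask_var(mask);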

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---

Index: linux-vmstat-remote/kernel/workqueue.c
===================================================================
--- linux-vmstat-remote.orig/kernel/workqueue.c
+++ linux-vmstat-remote/kernel/workqueue.c
@@ -3455,6 +3455,56 @@ int schedule_on_each_cpu(work_func_t fun
 	return 0;
 }
 
+
+/**
+ * schedule_on_each_cpumask - execute a function synchronously on each
+ * CPU in "cpumask", for those which are online.
+ *
+ * @func: the function to call
+ * @mask: the CPUs which to call function on
+ *
+ * schedule_on_each_cpu() executes @func on each specified CPU that is online,
+ * using the system workqueue and blocks until all such CPUs have completed.
+ * schedule_on_each_cpu() is very slow.
+ *
+ * Return:
+ * 0 on success, -errno on failure.
+ */
+int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
+{
+	int cpu;
+	struct work_struct __percpu *works;
+	cpumask_var_t effmask;
+
+	works = alloc_percpu(struct work_struct);
+	if (!works)
+		return -ENOMEM;
+
+	if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
+		free_percpu(works);
+		return -ENOMEM;
+	}
+
+	cpumask_and(effmask, cpumask, cpu_online_mask);
+
+	cpus_read_lock();
+
+	for_each_cpu(cpu, effmask) {
+		struct work_struct *work = per_cpu_ptr(works, cpu);
+
+		INIT_WORK(work, func);
+		schedule_work_on(cpu, work);
+	}
+
+	for_each_cpu(cpu, effmask)
+		flush_work(per_cpu_ptr(works, cpu));
+
+	cpus_read_unlock();
+	free_percpu(works);
+	free_cpumask_var(effmask);
+	return 0;
+}
+
 /**
  * execute_in_process_context - reliably execute the routine with user context
  * @fn:		the function to execute
Index: linux-vmstat-remote/include/linux/workqueue.h
===================================================================
--- linux-vmstat-remote.orig/include/linux/workqueue.h
+++ linux-vmstat-remote/include/linux/workqueue.h
@@ -450,6 +450,7 @@ extern void __flush_workqueue(struct wor
 extern void drain_workqueue(struct workqueue_struct *wq);
 
 extern int schedule_on_each_cpu(work_func_t func);
+extern int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask);
 
 int execute_in_process_context(work_func_t fn, struct execute_work *);
 



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 4/4] mm/vmstat: do not refresh stats for nohz_full CPUs
  2023-05-30 14:52 [PATCH 0/4] vmstat bug fixes for nohz_full CPUs Marcelo Tosatti
                   ` (2 preceding siblings ...)
  2023-05-30 14:52 ` [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper Marcelo Tosatti
@ 2023-05-30 14:52 ` Marcelo Tosatti
  2023-06-02 10:50   ` Michal Hocko
  3 siblings, 1 reply; 14+ messages in thread
From: Marcelo Tosatti @ 2023-05-30 14:52 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
	linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti

The interruption caused by queueing work on nohz_full CPUs
is undesirable for certain applications.

Fix by not refreshing per-CPU stats of nohz_full CPUs. 
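
(vmstat_refresh is the handler behind the vm.stat_refresh sysctl, so
the refresh in question is what runs when userspace pokes it, e.g.:)

	# with this patch, nohz_full CPUs are left out of the refresh
	echo 1 > /proc/sys/vm/stat_refresh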

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -1877,12 +1877,31 @@ static void refresh_vm_stats(struct work
 	refresh_cpu_vm_stats(true);
 }
 
+#ifdef CONFIG_NO_HZ_FULL
+static inline const cpumask_t *tickless_cpumask(void)
+{
+	return tick_nohz_full_mask;
+}
+#else
+static cpumask_t empty_cpumask;
+static inline const cpumask_t *tickless_cpumask(void)
+{
+	return &empty_cpumask;
+}
+#endif
+
 int vmstat_refresh(struct ctl_table *table, int write,
 		   void *buffer, size_t *lenp, loff_t *ppos)
 {
 	long val;
 	int err;
 	int i;
+	cpumask_var_t dstmask;
+
+	if (!alloc_cpumask_var(&dstmask, GFP_KERNEL))
+		return -ENOMEM;
+
+	cpumask_andnot(dstmask, cpu_possible_mask, tickless_cpumask());
 
 	/*
 	 * The regular update, every sysctl_stat_interval, may come later
@@ -1896,7 +1915,9 @@ int vmstat_refresh(struct ctl_table *tab
 	 * transiently negative values, report an error here if any of
 	 * the stats is negative, so we know to go looking for imbalance.
 	 */
-	err = schedule_on_each_cpu(refresh_vm_stats);
+	err = schedule_on_each_cpumask(refresh_vm_stats, dstmask);
+	free_cpumask_var(dstmask);
+
 	if (err)
 		return err;
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper
  2023-05-30 14:52 ` [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper Marcelo Tosatti
@ 2023-05-30 20:09   ` Andrew Morton
  2023-06-01 18:13     ` Marcelo Tosatti
  2023-06-02 10:48   ` Michal Hocko
  1 sibling, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2023-05-30 20:09 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	linux-kernel, linux-mm, Vlastimil Babka, Michal Hocko

On Tue, 30 May 2023 11:52:37 -0300 Marcelo Tosatti <mtosatti@redhat.com> wrote:

> Add a schedule_on_each_cpumask function, equivalent to
> schedule_on_each_cpu but accepting a cpumask to operate.
> 
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> 
> ---
> 
> Index: linux-vmstat-remote/kernel/workqueue.c
> ===================================================================
> --- linux-vmstat-remote.orig/kernel/workqueue.c
> +++ linux-vmstat-remote/kernel/workqueue.c
> @@ -3455,6 +3455,56 @@ int schedule_on_each_cpu(work_func_t fun
>  	return 0;
>  }
>  
> +
> +/**
> + * schedule_on_each_cpumask - execute a function synchronously on each
> + * CPU in "cpumask", for those which are online.
> + *
> + * @func: the function to call
> + * @mask: the CPUs which to call function on
> + *
> + * schedule_on_each_cpu() executes @func on each specified CPU that is online,
> + * using the system workqueue and blocks until all such CPUs have completed.
> + * schedule_on_each_cpu() is very slow.
> + *
> + * Return:
> + * 0 on success, -errno on failure.
> + */
> +int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
> +{
> +	int cpu;
> +	struct work_struct __percpu *works;
> +	cpumask_var_t effmask;
> +
> +	works = alloc_percpu(struct work_struct);
> +	if (!works)
> +		return -ENOMEM;
> +
> +	if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
> +		free_percpu(works);
> +		return -ENOMEM;
> +	}
> +
> +	cpumask_and(effmask, cpumask, cpu_online_mask);
> +
> +	cpus_read_lock();
> +
> +	for_each_cpu(cpu, effmask) {

Should we check here that the cpu is still online?

> +		struct work_struct *work = per_cpu_ptr(works, cpu);
> +
> +		INIT_WORK(work, func);
> +		schedule_work_on(cpu, work);
> +	}
> +
> +	for_each_cpu(cpu, effmask)
> +		flush_work(per_cpu_ptr(works, cpu));
> +
> +	cpus_read_unlock();
> +	free_percpu(works);
> +	free_cpumask_var(effmask);
> +	return 0;
> +}
> +
>  /**
>   * execute_in_process_context - reliably execute the routine with user context
>   * @fn:		the function to execute
> --- linux-vmstat-remote.orig/include/linux/workqueue.h
> +++ linux-vmstat-remote/include/linux/workqueue.h
> @@ -450,6 +450,7 @@ extern void __flush_workqueue(struct wor
>  extern void drain_workqueue(struct workqueue_struct *wq);
>  
>  extern int schedule_on_each_cpu(work_func_t func);
> +extern int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask);

May as well make schedule_on_each_cpu() call
schedule_on_each_cpumask()?  Save a bit of text, and they're hardly
performance-critical to that extent.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper
  2023-05-30 20:09   ` Andrew Morton
@ 2023-06-01 18:13     ` Marcelo Tosatti
  0 siblings, 0 replies; 14+ messages in thread
From: Marcelo Tosatti @ 2023-06-01 18:13 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	linux-kernel, linux-mm, Vlastimil Babka, Michal Hocko

On Tue, May 30, 2023 at 01:09:47PM -0700, Andrew Morton wrote:
> On Tue, 30 May 2023 11:52:37 -0300 Marcelo Tosatti <mtosatti@redhat.com> wrote:
> 
> > Add a schedule_on_each_cpumask function, equivalent to
> > schedule_on_each_cpu but accepting a cpumask to operate.
> > 
> > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> > 
> > ---
> > 
> > Index: linux-vmstat-remote/kernel/workqueue.c
> > ===================================================================
> > --- linux-vmstat-remote.orig/kernel/workqueue.c
> > +++ linux-vmstat-remote/kernel/workqueue.c
> > @@ -3455,6 +3455,56 @@ int schedule_on_each_cpu(work_func_t fun
> >  	return 0;
> >  }
> >  
> > +
> > +/**
> > + * schedule_on_each_cpumask - execute a function synchronously on each
> > + * CPU in "cpumask", for those which are online.
> > + *
> > + * @func: the function to call
> > + * @mask: the CPUs which to call function on
> > + *
> > + * schedule_on_each_cpu() executes @func on each specified CPU that is online,
> > + * using the system workqueue and blocks until all such CPUs have completed.
> > + * schedule_on_each_cpu() is very slow.
> > + *
> > + * Return:
> > + * 0 on success, -errno on failure.
> > + */
> > +int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
> > +{
> > +	int cpu;
> > +	struct work_struct __percpu *works;
> > +	cpumask_var_t effmask;
> > +
> > +	works = alloc_percpu(struct work_struct);
> > +	if (!works)
> > +		return -ENOMEM;
> > +
> > +	if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
> > +		free_percpu(works);
> > +		return -ENOMEM;
> > +	}
> > +
> > +	cpumask_and(effmask, cpumask, cpu_online_mask);
> > +
> > +	cpus_read_lock();
> > +
> > +	for_each_cpu(cpu, effmask) {
> 
> Should we check here that the cpu is still online?
> 
> > +		struct work_struct *work = per_cpu_ptr(works, cpu);
> > +
> > +		INIT_WORK(work, func);
> > +		schedule_work_on(cpu, work);
> > +	}
> > +
> > +	for_each_cpu(cpu, effmask)
> > +		flush_work(per_cpu_ptr(works, cpu));
> > +
> > +	cpus_read_unlock();
> > +	free_percpu(works);
> > +	free_cpumask_var(effmask);
> > +	return 0;
> > +}
> > +
> >  /**
> >   * execute_in_process_context - reliably execute the routine with user context
> >   * @fn:		the function to execute
> > --- linux-vmstat-remote.orig/include/linux/workqueue.h
> > +++ linux-vmstat-remote/include/linux/workqueue.h
> > @@ -450,6 +450,7 @@ extern void __flush_workqueue(struct wor
> >  extern void drain_workqueue(struct workqueue_struct *wq);
> >  
> >  extern int schedule_on_each_cpu(work_func_t func);
> > +extern int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask);
> 
> May as well make schedule_on_each_cpu() call
> schedule_on_each_cpumask()?  Save a bit of text, and they're hardly
> performance-critical to that extent.

Agree, will wait for Michal's review before resending -v2.


workqueue: add schedule_on_each_cpumask helper

Add a schedule_on_each_cpumask function, equivalent to
schedule_on_each_cpu but accepting a cpumask to operate on.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---

v2: - cpu_online_mask reference should happen with cpus_read_lock held	(Andrew Morton)
    - have schedule_on_each_cpu call _cpumask variant			(Andrew Morton)

Index: linux-vmstat-remote/kernel/workqueue.c
===================================================================
--- linux-vmstat-remote.orig/kernel/workqueue.c
+++ linux-vmstat-remote/kernel/workqueue.c
@@ -3431,27 +3431,56 @@ EXPORT_SYMBOL(cancel_delayed_work_sync);
  */
 int schedule_on_each_cpu(work_func_t func)
 {
+	return schedule_on_each_cpumask(func, cpu_possible_mask);
+}
+
+
+/**
+ * schedule_on_each_cpumask - execute a function synchronously on each
+ * CPU in "cpumask", for those which are online.
+ *
+ * @func: the function to call
+ * @mask: the CPUs which to call function on
+ *
+ * schedule_on_each_cpu() executes @func on each specified CPU that is online,
+ * using the system workqueue and blocks until all such CPUs have completed.
+ * schedule_on_each_cpu() is very slow.
+ *
+ * Return:
+ * 0 on success, -errno on failure.
+ */
+int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
+{
 	int cpu;
 	struct work_struct __percpu *works;
+	cpumask_var_t effmask;
 
 	works = alloc_percpu(struct work_struct);
 	if (!works)
 		return -ENOMEM;
 
+	if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
+		free_percpu(works);
+		return -ENOMEM;
+	}
+
 	cpus_read_lock();
 
-	for_each_online_cpu(cpu) {
+	cpumask_and(effmask, cpumask, cpu_online_mask);
+
+	for_each_cpu(cpu, effmask) {
 		struct work_struct *work = per_cpu_ptr(works, cpu);
 
 		INIT_WORK(work, func);
 		schedule_work_on(cpu, work);
 	}
 
-	for_each_online_cpu(cpu)
+	for_each_cpu(cpu, effmask)
 		flush_work(per_cpu_ptr(works, cpu));
 
 	cpus_read_unlock();
 	free_percpu(works);
+	free_cpumask_var(effmask);
 	return 0;
 }
 
Index: linux-vmstat-remote/include/linux/workqueue.h
===================================================================
--- linux-vmstat-remote.orig/include/linux/workqueue.h
+++ linux-vmstat-remote/include/linux/workqueue.h
@@ -450,6 +450,7 @@ extern void __flush_workqueue(struct wor
 extern void drain_workqueue(struct workqueue_struct *wq);
 
 extern int schedule_on_each_cpu(work_func_t func);
+extern int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask);
 
 int execute_in_process_context(work_func_t fn, struct execute_work *);
 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/4] vmstat: skip periodic vmstat update for nohz full CPUs
  2023-05-30 14:52 ` [PATCH 2/4] vmstat: skip periodic vmstat update for nohz full CPUs Marcelo Tosatti
@ 2023-06-02 10:39   ` Michal Hocko
  2023-06-02 16:57     ` Marcelo Tosatti
  0 siblings, 1 reply; 14+ messages in thread
From: Michal Hocko @ 2023-06-02 10:39 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Tue 30-05-23 11:52:36, Marcelo Tosatti wrote:
> @@ -2022,6 +2023,16 @@ static void vmstat_shepherd(struct work_
>  	for_each_online_cpu(cpu) {
>  		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
>  
> +		/*
> +		 * Skip periodic updates for nohz full CPUs.
> +		 * Any callers who need precise values should use
> +		 * a snapshot of the per-CPU counters, or use the global
> +		 * counters with measures to handle errors up to
> +		 * thresholds (see calculate_normal_threshold).
> +		 */
> +		if (tick_nohz_full_cpu(cpu))
> +			continue;

In other code path we have used cpu_is_isolated, is there any reason to
diverge from that here? Isn't this effectivelly the same kind of
problem?

> +
>  		if (!delayed_work_pending(dw) && need_update(cpu))
>  			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
>  
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper
  2023-05-30 14:52 ` [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper Marcelo Tosatti
  2023-05-30 20:09   ` Andrew Morton
@ 2023-06-02 10:48   ` Michal Hocko
  2023-06-02 17:04     ` Marcelo Tosatti
  1 sibling, 1 reply; 14+ messages in thread
From: Michal Hocko @ 2023-06-02 10:48 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka, Tejun Heo,
	Lai Jiangshan

You should be CCing WQ maintainers on changes like this one (now added).

On Tue 30-05-23 11:52:37, Marcelo Tosatti wrote:
> Add a schedule_on_each_cpumask function, equivalent to
> schedule_on_each_cpu but accepting a cpumask to operate.

IMHO it is preferable to add a new function along with its user so that
the usecase is more clear.
 
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> 
> ---
> 
> Index: linux-vmstat-remote/kernel/workqueue.c
> ===================================================================
> --- linux-vmstat-remote.orig/kernel/workqueue.c
> +++ linux-vmstat-remote/kernel/workqueue.c
> @@ -3455,6 +3455,56 @@ int schedule_on_each_cpu(work_func_t fun
>  	return 0;
>  }
>  
> +
> +/**
> + * schedule_on_each_cpumask - execute a function synchronously on each
> + * CPU in "cpumask", for those which are online.
> + *
> + * @func: the function to call
> + * @mask: the CPUs which to call function on
> + *
> + * schedule_on_each_cpu() executes @func on each specified CPU that is online,
> + * using the system workqueue and blocks until all such CPUs have completed.
> + * schedule_on_each_cpu() is very slow.
> + *
> + * Return:
> + * 0 on success, -errno on failure.
> + */
> +int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
> +{
> +	int cpu;
> +	struct work_struct __percpu *works;
> +	cpumask_var_t effmask;
> +
> +	works = alloc_percpu(struct work_struct);
> +	if (!works)
> +		return -ENOMEM;
> +
> +	if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
> +		free_percpu(works);
> +		return -ENOMEM;
> +	}
> +
> +	cpumask_and(effmask, cpumask, cpu_online_mask);
> +
> +	cpus_read_lock();
> +
> +	for_each_cpu(cpu, effmask) {

Is the cpu_online_mask dance really necessary? Why cannot you simply do
for_each_online_cpu here? flush_work on unqueued work item should just
return, no?

Also there is no synchronization with the cpu hotplug so cpu_online_mask
can change under your feet, so this construct seems unsafe to me.

> +		struct work_struct *work = per_cpu_ptr(works, cpu);
> +
> +		INIT_WORK(work, func);
> +		schedule_work_on(cpu, work);
> +	}
> +
> +	for_each_cpu(cpu, effmask)
> +		flush_work(per_cpu_ptr(works, cpu));
> +
> +	cpus_read_unlock();
> +	free_percpu(works);
> +	free_cpumask_var(effmask);
> +	return 0;
> +}
> +
>  /**
>   * execute_in_process_context - reliably execute the routine with user context
>   * @fn:		the function to execute
> Index: linux-vmstat-remote/include/linux/workqueue.h
> ===================================================================
> --- linux-vmstat-remote.orig/include/linux/workqueue.h
> +++ linux-vmstat-remote/include/linux/workqueue.h
> @@ -450,6 +450,7 @@ extern void __flush_workqueue(struct wor
>  extern void drain_workqueue(struct workqueue_struct *wq);
>  
>  extern int schedule_on_each_cpu(work_func_t func);
> +extern int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask);
>  
>  int execute_in_process_context(work_func_t fn, struct execute_work *);
>  
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 4/4] mm/vmstat: do not refresh stats for nohz_full CPUs
  2023-05-30 14:52 ` [PATCH 4/4] mm/vmstat: do not refresh stats for nohz_full CPUs Marcelo Tosatti
@ 2023-06-02 10:50   ` Michal Hocko
  0 siblings, 0 replies; 14+ messages in thread
From: Michal Hocko @ 2023-06-02 10:50 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Tue 30-05-23 11:52:38, Marcelo Tosatti wrote:
> The interruption caused by queueing work on nohz_full CPUs 
> is undesirable for certain aplications.
> 
> Fix by not refreshing per-CPU stats of nohz_full CPUs. 
> 
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> 
> ---
> 
> Index: linux-vmstat-remote/mm/vmstat.c
> ===================================================================
> --- linux-vmstat-remote.orig/mm/vmstat.c
> +++ linux-vmstat-remote/mm/vmstat.c
> @@ -1877,12 +1877,31 @@ static void refresh_vm_stats(struct work
>  	refresh_cpu_vm_stats(true);
>  }
>  
> +#ifdef CONFIG_NO_HZ_FULL
> +static inline const cpumask_t *tickless_cpumask(void)
> +{
> +	return tick_nohz_full_mask;
> +}
> +#else
> +static cpumask_t empty_cpumask;
> +static inline const cpumask_t *tickless_cpumask(void)
> +{
> +	return &empty_cpumask;
> +}
> +#endif
> +
>  int vmstat_refresh(struct ctl_table *table, int write,
>  		   void *buffer, size_t *lenp, loff_t *ppos)
>  {
>  	long val;
>  	int err;
>  	int i;
> +	cpumask_var_t dstmask;
> +
> +	if (!alloc_cpumask_var(&dstmask, GFP_KERNEL))
> +		return -ENOMEM;
> +
> +	cpumask_andnot(dstmask, cpu_possible_mask, tickless_cpumask());

Similar feedback to the patch 2. wouldn't it make more sense to opencode
schedule_on_each_cpu here and use cpu_is_isolated instead?

>  
>  	/*
>  	 * The regular update, every sysctl_stat_interval, may come later
> @@ -1896,7 +1915,9 @@ int vmstat_refresh(struct ctl_table *tab
>  	 * transiently negative values, report an error here if any of
>  	 * the stats is negative, so we know to go looking for imbalance.
>  	 */
> -	err = schedule_on_each_cpu(refresh_vm_stats);
> +	err = schedule_on_each_cpumask(refresh_vm_stats, dstmask);
> +	free_cpumask_var(dstmask);
> +
>  	if (err)
>  		return err;
>  	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/4] vmstat: allow_direct_reclaim should use zone_page_state_snapshot
  2023-05-30 14:52 ` [PATCH 1/4] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
@ 2023-06-02 10:53   ` Michal Hocko
  0 siblings, 0 replies; 14+ messages in thread
From: Michal Hocko @ 2023-06-02 10:53 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

Andrew, as I've stated previously, this sounds like a reasonable
standalone fix worth taking independently of the rest.

On Tue 30-05-23 11:52:35, Marcelo Tosatti wrote:
> A customer provided evidence indicating that a process
> was stalled in direct reclaim:
> 
>  - The process was trapped in throttle_direct_reclaim().
>    The function wait_event_killable() was called to wait condition     
>    allow_direct_reclaim(pgdat) for current node to be true.     
>    The allow_direct_reclaim(pgdat) examined the number of free pages     
>    on the node by zone_page_state() which just returns value in     
>    zone->vm_stat[NR_FREE_PAGES].     
>                                                 
>  - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.            
>    However, the freelist on this node was not empty.     
>                            
>  - This inconsistent of vmstat value was caused by percpu vmstat on     
>    nohz_full cpus. Every increment/decrement of vmstat is performed     
>    on percpu vmstat counter at first, then pooled diffs are cumulated     
>    to the zone's vmstat counter in timely manner. However, on nohz_full     
>    cpus (in case of this customer's system, 48 of 52 cpus) these pooled     
>    diffs were not cumulated once the cpu had no event on it so that     
>    the cpu started sleeping infinitely.                       
>    I checked percpu vmstat and found there were total 69 counts not         
>    cumulated to the zone's vmstat counter yet.     
>                                          
>  - In this situation, kswapd did not help the trapped process.     
>    In pgdat_balanced(), zone_wakermark_ok_safe() examined the number     
>    of free pages on the node by zone_page_state_snapshot() which     
>    checks pending counts on percpu vmstat.     
>    Therefore kswapd could know there were 69 free pages correctly.     
>    Since zone->_watermark = {8, 20, 32}, kswapd did not work because     
>    69 was greater than 32 as high watermark.     
> 
> Change allow_direct_reclaim to use zone_page_state_snapshot, which
> allows a more precise version of the vmstat counters to be used.
> 
> allow_direct_reclaim will only be called from try_to_free_pages,
> which is not a hot path.
> 
> Testing: Due to difficulties accessing the system, it has not been
> possible for the reproducer to test the patch (however its
> clear from available data and analysis that it should fix it).
> 
> Reviewed-by: Michal Hocko <mhocko@suse.com>
> Reviewed-by: Aaron Tomlin <atomlin@atomlin.com>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> 
> ---
> 
> Index: linux-vmstat-remote/mm/vmscan.c
> ===================================================================
> --- linux-vmstat-remote.orig/mm/vmscan.c
> +++ linux-vmstat-remote/mm/vmscan.c
> @@ -6887,7 +6887,7 @@ static bool allow_direct_reclaim(pg_data
>  			continue;
>  
>  		pfmemalloc_reserve += min_wmark_pages(zone);
> -		free_pages += zone_page_state(zone, NR_FREE_PAGES);
> +		free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
>  	}
>  
>  	/* If there are no reserves (unexpected config) then do not throttle */
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/4] vmstat: skip periodic vmstat update for nohz full CPUs
  2023-06-02 10:39   ` Michal Hocko
@ 2023-06-02 16:57     ` Marcelo Tosatti
  0 siblings, 0 replies; 14+ messages in thread
From: Marcelo Tosatti @ 2023-06-02 16:57 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Fri, Jun 02, 2023 at 12:39:04PM +0200, Michal Hocko wrote:
> On Tue 30-05-23 11:52:36, Marcelo Tosatti wrote:
> > @@ -2022,6 +2023,16 @@ static void vmstat_shepherd(struct work_
> >  	for_each_online_cpu(cpu) {
> >  		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
> >  
> > +		/*
> > +		 * Skip periodic updates for nohz full CPUs.
> > +		 * Any callers who need precise values should use
> > +		 * a snapshot of the per-CPU counters, or use the global
> > +		 * counters with measures to handle errors up to
> > +		 * thresholds (see calculate_normal_threshold).
> > +		 */
> > +		if (tick_nohz_full_cpu(cpu))
> > +			continue;
> 
> In other code path we have used cpu_is_isolated, is there any reason to
> diverge from that here? Isn't this effectivelly the same kind of
> problem?

Changed to cpu_is_isolated, thanks.
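
I.e. the check in vmstat_shepherd now reads (sketch of the v2 change):

		if (cpu_is_isolated(cpu))
			continue;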


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper
  2023-06-02 10:48   ` Michal Hocko
@ 2023-06-02 17:04     ` Marcelo Tosatti
  2023-06-05  7:12       ` Michal Hocko
  0 siblings, 1 reply; 14+ messages in thread
From: Marcelo Tosatti @ 2023-06-02 17:04 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka, Tejun Heo,
	Lai Jiangshan

On Fri, Jun 02, 2023 at 12:48:23PM +0200, Michal Hocko wrote:
> You should be CCing WQ maintainers on changes like this one (now added).
> 
> On Tue 30-05-23 11:52:37, Marcelo Tosatti wrote:
> > Add a schedule_on_each_cpumask function, equivalent to
> > schedule_on_each_cpu but accepting a cpumask to operate.
> 
> IMHO it is preferable to add a new function along with its user so that
> the usecase is more clear.
>  
> > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> > 
> > ---
> > 
> > Index: linux-vmstat-remote/kernel/workqueue.c
> > ===================================================================
> > --- linux-vmstat-remote.orig/kernel/workqueue.c
> > +++ linux-vmstat-remote/kernel/workqueue.c
> > @@ -3455,6 +3455,56 @@ int schedule_on_each_cpu(work_func_t fun
> >  	return 0;
> >  }
> >  
> > +
> > +/**
> > + * schedule_on_each_cpumask - execute a function synchronously on each
> > + * CPU in "cpumask", for those which are online.
> > + *
> > + * @func: the function to call
> > + * @mask: the CPUs which to call function on
> > + *
> > + * schedule_on_each_cpu() executes @func on each specified CPU that is online,
> > + * using the system workqueue and blocks until all such CPUs have completed.
> > + * schedule_on_each_cpu() is very slow.
> > + *
> > + * Return:
> > + * 0 on success, -errno on failure.
> > + */
> > +int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
> > +{
> > +	int cpu;
> > +	struct work_struct __percpu *works;
> > +	cpumask_var_t effmask;
> > +
> > +	works = alloc_percpu(struct work_struct);
> > +	if (!works)
> > +		return -ENOMEM;
> > +
> > +	if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
> > +		free_percpu(works);
> > +		return -ENOMEM;
> > +	}
> > +
> > +	cpumask_and(effmask, cpumask, cpu_online_mask);
> > +
> > +	cpus_read_lock();
> > +
> > +	for_each_cpu(cpu, effmask) {
> 
> Is the cpu_online_mask dance really necessary? 

> Why cannot you simply do for_each_online_cpu here? 

Are you suggesting to do: 

	for_each_online_cpu(cpu) {
		if cpu is not in cpumask
			continue;
		...
	}

This does not seem efficient.

> flush_work on unqueued work item should just
> return, no?

Apparently not:

commit 0e8d6a9336b487a1dd6f1991ff376e669d4c87c6
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Wed Apr 12 22:07:28 2017 +0200

    workqueue: Provide work_on_cpu_safe()
    
    work_on_cpu() is not protected against CPU hotplug. For code which requires
    to be either executed on an online CPU or to fail if the CPU is not
    available the callsite would have to protect against CPU hotplug.
    
    Provide a function which does get/put_online_cpus() around the call to
    work_on_cpu() and fails the call with -ENODEV if the target CPU is not
    online.

> Also there is no synchronization with the cpu hotplug so cpu_online_mask
> can change under your feet so this construct seem unsafe to me.

Yes, fixed by patch in response to Andrew's comment.


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper
  2023-06-02 17:04     ` Marcelo Tosatti
@ 2023-06-05  7:12       ` Michal Hocko
  0 siblings, 0 replies; 14+ messages in thread
From: Michal Hocko @ 2023-06-05  7:12 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka, Tejun Heo,
	Lai Jiangshan

On Fri 02-06-23 14:04:28, Marcelo Tosatti wrote:
> On Fri, Jun 02, 2023 at 12:48:23PM +0200, Michal Hocko wrote:
[...]
> > > +	if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
> > > +		free_percpu(works);
> > > +		return -ENOMEM;
> > > +	}
> > > +
> > > +	cpumask_and(effmask, cpumask, cpu_online_mask);
> > > +
> > > +	cpus_read_lock();
> > > +
> > > +	for_each_cpu(cpu, effmask) {
> > 
> > Is the cpu_online_mask dance really necessary? 
> 
> > Why cannot you simply do for_each_online_cpu here? 
> 
> Are you suggesting to do: 
> 
> 	for_each_online_cpu(cpu) {
> 		if cpu is not in cpumask
> 			continue;
> 		...
> 	}
> 
> This does not seem efficient.

Are you sure this is less efficient than a memory allocation?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2023-06-05  7:13 UTC | newest]

Thread overview: 14+ messages
2023-05-30 14:52 [PATCH 0/4] vmstat bug fixes for nohz_full CPUs Marcelo Tosatti
2023-05-30 14:52 ` [PATCH 1/4] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
2023-06-02 10:53   ` Michal Hocko
2023-05-30 14:52 ` [PATCH 2/4] vmstat: skip periodic vmstat update for nohz full CPUs Marcelo Tosatti
2023-06-02 10:39   ` Michal Hocko
2023-06-02 16:57     ` Marcelo Tosatti
2023-05-30 14:52 ` [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper Marcelo Tosatti
2023-05-30 20:09   ` Andrew Morton
2023-06-01 18:13     ` Marcelo Tosatti
2023-06-02 10:48   ` Michal Hocko
2023-06-02 17:04     ` Marcelo Tosatti
2023-06-05  7:12       ` Michal Hocko
2023-05-30 14:52 ` [PATCH 4/4] mm/vmstat: do not refresh stats for nohz_full CPUs Marcelo Tosatti
2023-06-02 10:50   ` Michal Hocko
