* [PATCH] memcg: sync flush only if periodic flush is delayed
From: Shakeel Butt @ 2022-03-04 18:40 UTC
To: Michal Koutný, Johannes Weiner, Michal Hocko, Roman Gushchin
Cc: Ivan Babrou, Frank Hofmann, Andrew Morton,
    cgroups, linux-mm, linux-kernel, Shakeel Butt, Daniel Dao, stable
Daniel Dao has reported [1] a regression on workloads that may trigger
a lot of refaults (anon and file). The underlying issue is that flushing
rstat is expensive. Although rstat flushes are batched with (nr_cpus *
MEMCG_BATCH) stat updates, it seems there are workloads which genuinely
do more stat updates than the batch value within a short amount of
time. Since an rstat flush can happen in performance-critical codepaths
like page faults, such workloads can suffer greatly.
This patch fixes the regression by making the rstat flushing
conditional in the performance-critical codepaths. More specifically,
the kernel relies on the async periodic rstat flusher to flush the
stats, and only if the periodic flusher is delayed by more than twice
its normal time window does the kernel allow rstat flushing from the
performance-critical codepaths.
Now the question: what are the side-effects of this change? The worst
that can happen is that the refault codepath will see lruvec stats up
to 4 seconds old (twice the 2-second periodic flush window) and may
cause false (or missed) activations of the refaulted page, which may
under- or overestimate the workingset size. That is not very
concerning, though, as the kernel can already miss or do false
activations.
There are two more codepaths whose flushing behavior is not changed by
this patch and which we may need to revisit in the future. One is the
writeback stats used by dirty throttling, and the second is the
deactivation heuristic in reclaim. For now we keep an eye on them; if
regressions due to these codepaths are reported, we will reevaluate
then.
Link: https://lore.kernel.org/all/CA+wXwBSyO87ZX5PVwdHm-=dBjZYECGmfnydUicUyrQqndgX2MQ@mail.gmail.com [1]
Fixes: 1f828223b799 ("memcg: flush lruvec stats in the refault")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reported-by: Daniel Dao <dqminh@cloudflare.com>
Cc: <stable@vger.kernel.org>
---
include/linux/memcontrol.h | 5 +++++
mm/memcontrol.c | 12 +++++++++++-
mm/workingset.c | 2 +-
3 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a68dce3873fc..89b14729d59f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1012,6 +1012,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
}
void mem_cgroup_flush_stats(void);
+void mem_cgroup_flush_stats_delayed(void);
void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
int val);
@@ -1455,6 +1456,10 @@ static inline void mem_cgroup_flush_stats(void)
{
}
+static inline void mem_cgroup_flush_stats_delayed(void)
+{
+}
+
static inline void __mod_memcg_lruvec_state(struct lruvec *lruvec,
enum node_stat_item idx, int val)
{
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f79bb3f25ce4..edfb337e6948 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -587,6 +587,9 @@ static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
static DEFINE_SPINLOCK(stats_flush_lock);
static DEFINE_PER_CPU(unsigned int, stats_updates);
static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
+static u64 flush_next_time;
+
+#define FLUSH_TIME (2UL*HZ)
/*
* Accessors to ensure that preemption is disabled on PREEMPT_RT because it can
@@ -637,6 +640,7 @@ static void __mem_cgroup_flush_stats(void)
if (!spin_trylock_irqsave(&stats_flush_lock, flag))
return;
+ flush_next_time = jiffies_64 + 2*FLUSH_TIME;
cgroup_rstat_flush_irqsafe(root_mem_cgroup->css.cgroup);
atomic_set(&stats_flush_threshold, 0);
spin_unlock_irqrestore(&stats_flush_lock, flag);
@@ -648,10 +652,16 @@ void mem_cgroup_flush_stats(void)
__mem_cgroup_flush_stats();
}
+void mem_cgroup_flush_stats_delayed(void)
+{
+ if (rstat_flush_time && time_after64(jiffies_64, flush_next_time))
+ mem_cgroup_flush_stats();
+}
+
static void flush_memcg_stats_dwork(struct work_struct *w)
{
__mem_cgroup_flush_stats();
- queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ);
+ queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
}
/**
diff --git a/mm/workingset.c b/mm/workingset.c
index 8a3828acc0bf..592569a8974c 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -355,7 +355,7 @@ void workingset_refault(struct folio *folio, void *shadow)
mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
- mem_cgroup_flush_stats();
+ mem_cgroup_flush_stats_delayed();
/*
* Compare the distance to the existing workingset size. We
* don't activate pages that couldn't stay resident even if
--
2.35.1.616.g0bdcbb4464-goog
* Re: [PATCH] memcg: sync flush only if periodic flush is delayed
From: Ivan Babrou @ 2022-03-04 18:53 UTC
To: Shakeel Butt
Cc: Michal Koutný, Johannes Weiner, Michal Hocko, Roman Gushchin,
Frank Hofmann, Andrew Morton, cgroups, Linux MM, linux-kernel,
Daniel Dao, stable
On Fri, Mar 4, 2022 at 10:40 AM Shakeel Butt <shakeelb@google.com> wrote:
>
> Daniel Dao has reported [1] a regression on workloads that may trigger
> a lot of refaults (anon and file). The underlying issue is that flushing
> rstat is expensive.
>
> ...
See my testing results here:
https://lore.kernel.org/linux-mm/CABWYdi2usrWOnOnmKYYvuFpE=yJmgtq4a7u6FiGJGJkskv+eVQ@mail.gmail.com/
Tested-by: Ivan Babrou <ivan@cloudflare.com>
* Re: [PATCH] memcg: sync flush only if periodic flush is delayed
From: Andrew Morton @ 2022-03-07 2:44 UTC
To: Shakeel Butt
Cc: Michal Koutný, Johannes Weiner, Michal Hocko,
Roman Gushchin, Ivan Babrou, Frank Hofmann, cgroups, linux-mm,
linux-kernel, Daniel Dao, stable
On Fri, 4 Mar 2022 18:40:40 +0000 Shakeel Butt <shakeelb@google.com> wrote:
> Daniel Dao has reported [1] a regression on workloads that may trigger
> a lot of refaults (anon and file). The underlying issue is that flushing
> rstat is expensive.
>
> ...
>
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
>
> ...
>
> @@ -648,10 +652,16 @@ void mem_cgroup_flush_stats(void)
> __mem_cgroup_flush_stats();
> }
>
> +void mem_cgroup_flush_stats_delayed(void)
> +{
> + if (rstat_flush_time && time_after64(jiffies_64, flush_next_time))
rstat_flush_time isn't defined for me and my googling indicates this is
the first time the symbol has been used in the history of the world.
I'm stumped.
> + mem_cgroup_flush_stats();
> +}
> +
>
> ...
>
* Re: [PATCH] memcg: sync flush only if periodic flush is delayed
From: Shakeel Butt @ 2022-03-07 3:06 UTC
To: Andrew Morton
Cc: Michal Koutný, Johannes Weiner, Michal Hocko, Roman Gushchin,
Ivan Babrou, Frank Hofmann, cgroups, linux-mm, linux-kernel,
Daniel Dao, stable
On Sun, Mar 6, 2022 at 6:44 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Fri, 4 Mar 2022 18:40:40 +0000 Shakeel Butt <shakeelb@google.com> wrote:
>
> > Daniel Dao has reported [1] a regression on workloads that may trigger
> > a lot of refaults (anon and file). The underlying issue is that flushing
> > rstat is expensive.
> >
> > ...
> >
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> >
> > ...
> >
> > @@ -648,10 +652,16 @@ void mem_cgroup_flush_stats(void)
> > __mem_cgroup_flush_stats();
> > }
> >
> > +void mem_cgroup_flush_stats_delayed(void)
> > +{
> > + if (rstat_flush_time && time_after64(jiffies_64, flush_next_time))
>
> rstat_flush_time isn't defined for me and my googling indicates this is
> the first time the symbol has been used in the history of the world.
> I'm stumped.
>
Oh sorry about that. I thought I renamed all instances of
"rstat_flush_time" to "flush_next_time" before sending out the email.
Please just remove "rstat_flush_time &&" from the if-condition.
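With that removed, the helper becomes just (the intended logic, for
clarity):

    void mem_cgroup_flush_stats_delayed(void)
    {
            /* Flush synchronously only if no flush has happened for
             * more than 2*FLUSH_TIME, i.e. the periodic flusher is
             * behind. */
            if (time_after64(jiffies_64, flush_next_time))
                    mem_cgroup_flush_stats();
    }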
thanks,
Shakeel
* Re: [PATCH] memcg: sync flush only if periodic flush is delayed
From: Michal Koutný @ 2022-03-11 16:00 UTC
To: Shakeel Butt
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Ivan Babrou,
    Frank Hofmann, Andrew Morton, cgroups, linux-mm, linux-kernel,
    Daniel Dao, stable
Hello.
TL;DR: rstat is slow but accurate on the reader side. To tackle the
performance regression, simply not flushing (relying on the periodic
flush alone) seems simpler than this patch.
So, I've made an overview for myself of what the relevant changes were
when rstat was introduced.
The amount of work is:

- before:
    R: O(1)
    W: O(tree_depth)

- after:
    R: O(nr_cpus * nr_cgroups(subtree) * nr_counters)
    W: O(tree_depth)
That doesn't look like a positive change, especially on the reader
side. (In theory, the reader's work would be amortized, but as the
original report here shows, not all workloads dilute the flushes
sufficiently. [1])
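For reference, the batching/dilution mechanism in question looks
roughly like this (a simplified sketch of the 5.17-era mm/memcontrol.c
logic, not an exact copy; details may differ across versions):

    /* Updater side: per-cpu update counts feed a global flush threshold. */
    static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
    {
            unsigned int x;

            cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());

            x = __this_cpu_add_return(stats_updates, abs(val));
            if (x > MEMCG_CHARGE_BATCH) {
                    atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
                    __this_cpu_write(stats_updates, 0);
            }
    }

    /* Reader side: only flush once roughly nr_cpus batches worth of
     * updates, i.e. nr_cpus * MEMCG_CHARGE_BATCH, have accumulated. */
    void mem_cgroup_flush_stats(void)
    {
            if (atomic_read(&stats_flush_threshold) > num_online_cpus())
                    __mem_cgroup_flush_stats();
    }

This is where the cr * Δt = nr_cpus * MEMCG_CHARGE_BATCH flush
condition in [1] comes from.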
The benefit this was traded for is greater accuracy. The possible
error is:

- before:
    O(nr_cpus * nr_cgroups(subtree) * MEMCG_CHARGE_BATCH)    (1)

- after:
    O(nr_cpus * MEMCG_CHARGE_BATCH)    (sync flush)
    or
    O(flush_period * max_cr)           (periodic flush only)    (2)
    (max_cr is the per-counter maximum change rate)
So we could argue that if the pre-rstat kernels did just fine with
error (1), they would be no worse off with a periodic-only flush,
provided (1) and (2) are comparable.
On Fri, Mar 04, 2022 at 06:40:40PM +0000, Shakeel Butt <shakeelb@google.com> wrote:
> This patch fixes the regression by making the rstat flushing
> conditional in the performance-critical codepaths. More specifically,
> the kernel relies on the async periodic rstat flusher to flush the
> stats, and only if the periodic flusher is delayed by more than twice
> its normal time window does the kernel allow rstat flushing from the
> performance-critical codepaths.
I'm not sure whether your patch attempts to solve the problem of
(a) the periodic flush getting stuck or (b) limiting the error on the
refault path. If it's (a), it should be tackled more systematically
(a dedicated wq?). If it's (b), why not just rely on the periodic flush
(self answer: the comparison of (1) and (2) is workload dependent)?
> Now the question: what are the side-effects of this change? The worst
> that can happen is that the refault codepath will see lruvec stats up
> to 4 seconds old (twice the 2-second periodic flush window) and may
> cause false (or missed) activations of the refaulted page, which may
> under- or overestimate the workingset size. That is not very
> concerning, though, as the kernel can already miss or do false
> activations.
We can't say what the effect of periodic-only flushing is, so this
newly introduced factor would inherit that uncertainty too. I find it
superfluous.
Michal
[1] This is worth looking at in more detail.
From the flush condition we have:

    cr * Δt = nr_cpus * MEMCG_CHARGE_BATCH

where Δt is the time between flushes and cr is the global change rate.
cr is composed of all updates together (it corresponds to stats_updates
in memcg_rstat_updated(); max_cr is the per-counter change rate):

    cr = Σ cr_i <= nr_counters * max_cr

Combining the two gives the shortest time between flushes:

    cr * Δt <= nr_counters * max_cr * Δt
    nr_cpus * MEMCG_CHARGE_BATCH <= nr_counters * max_cr * Δt
    Δt >= (nr_cpus * MEMCG_CHARGE_BATCH) / (nr_counters * max_cr)
We are interested in:

    R_amort = flush_work / Δt

which is:

    R_amort <= flush_work * nr_counters * max_cr / (nr_cpus * MEMCG_CHARGE_BATCH)
    R_amort = O( nr_cpus * nr_cgroups(subtree) * nr_counters * (nr_counters * max_cr) / (nr_cpus * MEMCG_CHARGE_BATCH) )
    R_amort = O( nr_cgroups(subtree) * nr_counters^2 * max_cr / MEMCG_CHARGE_BATCH )
The square looks interesting given there are already tens of counters
(e.g., doubling the number of counters from 20 to 40 would quadruple
the amortized reader-side cost).
(As the data from Ivan has shown, we can hardly restore the pre-rstat
performance on the read side even with a mere mod_delayed_work().)
This is what you partially solved with the introduction of
NR_MEMCG_EVENTS, but stats_updates was still the sum of all events, so
the flush might still have triggered too frequently.
Maybe that would be the better long-term approach: splitting the
counters into accurate and approximate ones and reflecting the split in
the error estimator stats_updates. Or some other optimization of
mem_cgroup_css_rstat_flush().
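A rough sketch of what such a split could look like (purely
illustrative; the is_accurate_stat() helper, the idx argument, and the
weights are made-up placeholders, not existing kernel code):

    /* Hypothetical: let updates to counters that need accuracy push the
     * error estimator harder than noisy, approximate-only counters do. */
    static inline void memcg_rstat_updated(struct mem_cgroup *memcg,
                                           int idx, int val)
    {
            /* Assumed helper: true for counters read on accuracy-sensitive
             * paths (e.g. workingset refault), false for the rest. */
            unsigned int weight = is_accurate_stat(idx) ? 8 : 1;
            unsigned int x;

            cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());

            x = __this_cpu_add_return(stats_updates, weight * abs(val));
            if (x > MEMCG_CHARGE_BATCH) {
                    atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
                    __this_cpu_write(stats_updates, 0);
            }
    }

That way the approximate counters could absorb many more updates before
forcing a flush, while the accurate ones would keep the error bound
tight.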