From: Michal Hocko <mhocko@suse.com>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org,
"Andrew Morton" <akpm@linux-foundation.org>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Michal Koutný" <mkoutny@suse.com>,
"Peter Zijlstra" <peterz@infradead.org>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Vladimir Davydov" <vdavydov.dev@gmail.com>,
"Waiman Long" <longman@redhat.com>,
"Roman Gushchin" <guro@fb.com>
Subject: Re: [PATCH v5 3/6] mm/memcg: Protect per-CPU counter by disabling preemption on PREEMPT_RT where needed.
Date: Mon, 28 Feb 2022 09:05:45 +0100 [thread overview]
Message-ID: <YhyCWQYL8vxRSLrd@dhcp22.suse.cz> (raw)
In-Reply-To: <20220226204144.1008339-4-bigeasy@linutronix.de>
On Sat 26-02-22 21:41:41, Sebastian Andrzej Siewior wrote:
> The per-CPU counters are modified with non-atomic operations and their
> consistency is ensured by disabling interrupts for the update.
> On non-PREEMPT_RT configurations this works because acquiring a
> spinlock_t typed lock with the _irq() suffix disables interrupts. On
> PREEMPT_RT configurations the RMW operation can be interrupted.
>
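For readers not following the RT tree, the pre-existing scheme has roughly
the shape below (a simplified sketch, not the exact memcontrol.c code;
some_lock is an illustrative name):

	/*
	 * non-RT: taking a spinlock_t with the _irq() suffix disables
	 * hardirqs, so the non-atomic per-CPU read-modify-write below
	 * cannot be interrupted on this CPU.
	 */
	spin_lock_irq(&some_lock);
	__this_cpu_add(memcg->vmstats_percpu->state[idx], val);
	spin_unlock_irq(&some_lock);

	/*
	 * PREEMPT_RT: spinlock_t is a sleeping lock and spin_lock_irq()
	 * leaves hardirqs enabled, so the same RMW can be interrupted or
	 * preempted unless preempt_disable() is used around it.
	 */
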
> Another problem is that mem_cgroup_swapout() expects to be invoked with
> interrupts disabled because the caller has to acquire a spinlock_t which
> is taken with interrupts disabled. Since spinlock_t never disables
> interrupts on PREEMPT_RT, interrupts are never disabled at this
> point.
>
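The swapout path has the same shape, for reference (hypothetical caller
sketch; swap_lock is an illustrative name):

	/* caller, simplified: */
	spin_lock_irq(&swap_lock);	/* !RT: hardirqs off, RT: still on */
	mem_cgroup_swapout(page, entry);
	spin_unlock_irq(&swap_lock);

	/* in mem_cgroup_swapout(), before this patch: */
	VM_BUG_ON(!irqs_disabled());	/* fires on RT with CONFIG_DEBUG_VM */
	mem_cgroup_charge_statistics(memcg, -nr_entries);
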
> The code is never called from in_irq() context on PREEMPT_RT, therefore
> disabling preemption during the update is sufficient on PREEMPT_RT.
> The sections which explicitly disable interrupts can remain unchanged on
> PREEMPT_RT because they are short and do not involve
> sleeping locks (memcg_check_events() is doing nothing on PREEMPT_RT).
>
> Disable preemption during updates of the per-CPU variables in the
> sections which do not explicitly disable interrupts.
>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Acked-by: Roman Gushchin <guro@fb.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
TBH I am not a fan of the counter special casing for the debugging-enabled
warnings, but I do not feel strongly enough to push you through an
additional version round.
Thanks!
> ---
> mm/memcontrol.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 0b5117ed2ae08..238ea77aade5d 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -630,6 +630,35 @@ static DEFINE_SPINLOCK(stats_flush_lock);
> static DEFINE_PER_CPU(unsigned int, stats_updates);
> static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
>
> +/*
> + * Accessors to ensure that preemption is disabled on PREEMPT_RT because the
> + * counter updates can not rely on an acquired spinlock_t lock to do so. These
> + * functions are never used in hardirq context on PREEMPT_RT and therefore
> + * disabling preemption is sufficient.
> + */
> +static void memcg_stats_lock(void)
> +{
> +#ifdef CONFIG_PREEMPT_RT
> + preempt_disable();
> +#else
> + VM_BUG_ON(!irqs_disabled());
> +#endif
> +}
> +
> +static void __memcg_stats_lock(void)
> +{
> +#ifdef CONFIG_PREEMPT_RT
> + preempt_disable();
> +#endif
> +}
> +
> +static void memcg_stats_unlock(void)
> +{
> +#ifdef CONFIG_PREEMPT_RT
> + preempt_enable();
> +#endif
> +}
> +
> static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
> {
> unsigned int x;
> @@ -706,6 +735,27 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
> pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
> memcg = pn->memcg;
>
> + /*
> + * Callers from rmap rely on disabled preemption because they never
> + * update their counters from in-interrupt context. For these
> + * counters we check that the update is never performed from an
> + * interrupt context while other callers need to have interrupts disabled.
> + */
> + __memcg_stats_lock();
> + if (IS_ENABLED(CONFIG_DEBUG_VM) && !IS_ENABLED(CONFIG_PREEMPT_RT)) {
> + switch (idx) {
> + case NR_ANON_MAPPED:
> + case NR_FILE_MAPPED:
> + case NR_ANON_THPS:
> + case NR_SHMEM_PMDMAPPED:
> + case NR_FILE_PMDMAPPED:
> + WARN_ON_ONCE(!in_task());
> + break;
> + default:
> + WARN_ON_ONCE(!irqs_disabled());
> + }
> + }
> +
> /* Update memcg */
> __this_cpu_add(memcg->vmstats_percpu->state[idx], val);
>
> @@ -713,6 +763,7 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
> __this_cpu_add(pn->lruvec_stats_percpu->state[idx], val);
>
> memcg_rstat_updated(memcg, val);
> + memcg_stats_unlock();
> }
>
> /**
> @@ -795,8 +846,10 @@ void __count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx,
> if (mem_cgroup_disabled())
> return;
>
> + memcg_stats_lock();
> __this_cpu_add(memcg->vmstats_percpu->events[idx], count);
> memcg_rstat_updated(memcg, count);
> + memcg_stats_unlock();
> }
>
> static unsigned long memcg_events(struct mem_cgroup *memcg, int event)
> @@ -7140,8 +7193,9 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
> * important here to have the interrupts disabled because it is the
> * only synchronisation we have for updating the per-CPU variables.
> */
> - VM_BUG_ON(!irqs_disabled());
> + memcg_stats_lock();
> mem_cgroup_charge_statistics(memcg, -nr_entries);
> + memcg_stats_unlock();
> memcg_check_events(memcg, page_to_nid(page));
>
> css_put(&memcg->css);
> --
> 2.35.1
--
Michal Hocko
SUSE Labs