From: Frederic Weisbecker <frederic@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Yair Podemsky <ypodemsk@redhat.com>,
linux@armlinux.org.uk, mpe@ellerman.id.au, npiggin@gmail.com,
christophe.leroy@csgroup.eu, hca@linux.ibm.com,
gor@linux.ibm.com, agordeev@linux.ibm.com,
borntraeger@linux.ibm.com, svens@linux.ibm.com,
davem@davemloft.net, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
hpa@zytor.com, will@kernel.org, aneesh.kumar@linux.ibm.com,
akpm@linux-foundation.org, arnd@arndb.de, keescook@chromium.org,
paulmck@kernel.org, jpoimboe@kernel.org, samitolvanen@google.com,
ardb@kernel.org, juerg.haefliger@canonical.com,
rmk+kernel@armlinux.org.uk, geert+renesas@glider.be,
tony@atomide.com, linus.walleij@linaro.org,
sebastian.reichel@collabora.com, nick.hawkins@hpe.com,
linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
sparclinux@vger.kernel.org, linux-arch@vger.kernel.org,
linux-mm@kvack.org, mtosatti@redhat.com, vschneid@redhat.com,
dhildenb@redhat.com, alougovs@redhat.com
Subject: Re: [PATCH 3/3] mm/mmu_gather: send tlb_remove_table_smp_sync IPI only to CPUs in kernel mode
Date: Wed, 5 Apr 2023 14:05:06 +0200
Message-ID: <ZC1j8ivE/kK7+Gd5@lothringen>
In-Reply-To: <20230405114148.GA351571@hirez.programming.kicks-ass.net>
On Wed, Apr 05, 2023 at 01:41:48PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 05, 2023 at 01:10:07PM +0200, Frederic Weisbecker wrote:
> > On Wed, Apr 05, 2023 at 12:44:04PM +0200, Frederic Weisbecker wrote:
> > > On Tue, Apr 04, 2023 at 04:42:24PM +0300, Yair Podemsky wrote:
> > > > + int state = atomic_read(&ct->state);
> > > > + /* will return true only for cpus in kernel space */
> > > > + return state & CT_STATE_MASK == CONTEXT_KERNEL;
> > > > +}
> > >
> > > Also note that this doesn't strictly prevent userspace from being interrupted.
> > > You may well observe the CPU in kernel but it may receive the IPI later after
> > > switching to userspace.
> > >
> > > We could arrange for avoiding that with marking ct->state with a pending work bit
> > > to flush upon user entry/exit but that's a bit more overhead so I first need to
> > > know about your expectations here, ie: can you tolerate such an occasional
> > > interruption or not?
> >
> > Bah, actually what can we do to prevent that racy IPI? Not much I fear...
>
> Yeah, so I don't think that's actually a problem. The premise is that
> *IFF* NOHZ_FULL stays in userspace, then it will never observe the IPI.
>
> If it violates this by doing syscalls or other kernel entries; it gets
> to keep the pieces.
Ok, so how about the following (only build tested)?

Two things:

1) It has the advantage of checking context tracking _after_ the llist_add(),
   so it really can't be misused ordering-wise.

2) The IPI callback is always enqueued; the ordering then guarantees that the
   target CPU will either receive the IPI or execute the callback upon its
   next kernel entry from userspace.
diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h
index 4a4d56f77180..dc4b56da1747 100644
--- a/include/linux/context_tracking_state.h
+++ b/include/linux/context_tracking_state.h
@@ -137,10 +137,23 @@ static __always_inline int ct_state(void)
return ret;
}
+static __always_inline int ct_state_cpu(int cpu)
+{
+ struct context_tracking *ct;
+
+ if (!context_tracking_enabled())
+ return CONTEXT_DISABLED;
+
+ ct = per_cpu_ptr(&context_tracking, cpu);
+
+ return atomic_read(&ct->state) & CT_STATE_MASK;
+}
+
#else
static __always_inline bool context_tracking_enabled(void) { return false; }
static __always_inline bool context_tracking_enabled_cpu(int cpu) { return false; }
static __always_inline bool context_tracking_enabled_this_cpu(void) { return false; }
+static inline int ct_state_cpu(int cpu) { return CONTEXT_DISABLED; }
#endif /* CONFIG_CONTEXT_TRACKING_USER */
#endif
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 846add8394c4..cdc7e8a59acc 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -10,6 +10,7 @@
#include <linux/audit.h>
#include <linux/tick.h>
+#include "../kernel/sched/smp.h"
#include "common.h"
#define CREATE_TRACE_POINTS
@@ -27,6 +28,10 @@ static __always_inline void __enter_from_user_mode(struct pt_regs *regs)
instrumentation_begin();
kmsan_unpoison_entry_regs(regs);
trace_hardirqs_off_finish();
+
+ /* Flush delayed IPI queue on nohz_full */
+ if (context_tracking_enabled_this_cpu())
+ flush_smp_call_function_queue();
instrumentation_end();
}
diff --git a/kernel/smp.c b/kernel/smp.c
index 06a413987a14..14b25d25ef3a 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -878,6 +878,8 @@ EXPORT_SYMBOL_GPL(smp_call_function_any);
*/
#define SCF_WAIT (1U << 0)
#define SCF_RUN_LOCAL (1U << 1)
+#define SCF_NO_USER (1U << 2)
+
static void smp_call_function_many_cond(const struct cpumask *mask,
smp_call_func_t func, void *info,
@@ -946,10 +948,13 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
#endif
cfd_seq_store(pcpu->seq_queue, this_cpu, cpu, CFD_SEQ_QUEUE);
if (llist_add(&csd->node.llist, &per_cpu(call_single_queue, cpu))) {
- __cpumask_set_cpu(cpu, cfd->cpumask_ipi);
- nr_cpus++;
- last_cpu = cpu;
-
+ if (!(scf_flags & SCF_NO_USER) ||
+ !IS_ENABLED(CONFIG_GENERIC_ENTRY) ||
+ ct_state_cpu(cpu) != CONTEXT_USER) {
+ __cpumask_set_cpu(cpu, cfd->cpumask_ipi);
+ nr_cpus++;
+ last_cpu = cpu;
+ }
cfd_seq_store(pcpu->seq_ipi, this_cpu, cpu, CFD_SEQ_IPI);
} else {
cfd_seq_store(pcpu->seq_noipi, this_cpu, cpu, CFD_SEQ_NOIPI);
@@ -1121,6 +1126,24 @@ void __init smp_init(void)
smp_cpus_done(setup_max_cpus);
}
+static void __on_each_cpu_cond_mask(smp_cond_func_t cond_func,
+ smp_call_func_t func,
+ void *info, bool wait, bool nouser,
+ const struct cpumask *mask)
+{
+ unsigned int scf_flags = SCF_RUN_LOCAL;
+
+ if (wait)
+ scf_flags |= SCF_WAIT;
+
+ if (nouser)
+ scf_flags |= SCF_NO_USER;
+
+ preempt_disable();
+ smp_call_function_many_cond(mask, func, info, scf_flags, cond_func);
+ preempt_enable();
+}
+
/*
* on_each_cpu_cond(): Call a function on each processor for which
* the supplied function cond_func returns true, optionally waiting
@@ -1146,17 +1169,18 @@ void __init smp_init(void)
void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
void *info, bool wait, const struct cpumask *mask)
{
- unsigned int scf_flags = SCF_RUN_LOCAL;
-
- if (wait)
- scf_flags |= SCF_WAIT;
-
- preempt_disable();
- smp_call_function_many_cond(mask, func, info, scf_flags, cond_func);
- preempt_enable();
+ __on_each_cpu_cond_mask(cond_func, func, info, wait, false, mask);
}
EXPORT_SYMBOL(on_each_cpu_cond_mask);
+void on_each_cpu_cond_nouser_mask(smp_cond_func_t cond_func,
+ smp_call_func_t func,
+ void *info, bool wait,
+ const struct cpumask *mask)
+{
+ __on_each_cpu_cond_mask(cond_func, func, info, wait, true, mask);
+}
+
static void do_nothing(void *unused)
{
}