From: tip-bot for Rik van Riel <tipbot@zytor.com>
To: linux-tip-commits@vger.kernel.org
Cc: songliubraving@fb.com, mingo@kernel.org, riel@surriel.com,
peterz@infradead.org, linux-kernel@vger.kernel.org,
tglx@linutronix.de, hpa@zytor.com
Subject: [tip:x86/mm] x86/mm/tlb: Make lazy TLB mode lazier
Date: Tue, 9 Oct 2018 08:01:29 -0700 [thread overview]
Message-ID: <tip-145f573b89a62bf53cfc0144fa9b1c56b0f70b45@git.kernel.org> (raw)
In-Reply-To: <20180926035844.1420-8-riel@surriel.com>
Commit-ID: 145f573b89a62bf53cfc0144fa9b1c56b0f70b45
Gitweb: https://git.kernel.org/tip/145f573b89a62bf53cfc0144fa9b1c56b0f70b45
Author: Rik van Riel <riel@surriel.com>
AuthorDate: Tue, 25 Sep 2018 23:58:44 -0400
Committer: Peter Zijlstra <peterz@infradead.org>
CommitDate: Tue, 9 Oct 2018 16:51:12 +0200
x86/mm/tlb: Make lazy TLB mode lazier
Lazy TLB mode can result in an idle CPU being woken up by a TLB flush,
when all it really needs to do is reload %CR3 at the next context switch,
assuming no page table pages got freed.
Memory ordering is used to prevent race conditions between switch_mm_irqs_off(),
which checks whether .tlb_gen changed, and the TLB invalidation code, which
increments .tlb_gen whenever page table entries get invalidated.

The atomic increment in inc_mm_tlb_gen() is its own barrier; the context
switch code adds an explicit barrier between its read of tlbstate.is_lazy and
its read of next->context.tlb_gen.
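
[ Editor's note, not part of the patch: below is a minimal user-space
  sketch of that barrier pairing, written with C11 atomics. struct mm_ctx,
  struct cpu_state, inc_tlb_gen() and needs_flush() are simplified
  stand-ins invented for illustration, not the kernel's API. ]

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdint.h>

	struct mm_ctx {
		_Atomic uint64_t tlb_gen;   /* bumped on every invalidation */
	};

	struct cpu_state {
		_Atomic bool is_lazy;       /* this CPU is in lazy TLB mode */
		uint64_t local_tlb_gen;     /* generation last flushed to */
	};

	/*
	 * Invalidation side: a seq_cst read-modify-write is a full
	 * barrier, mirroring how atomic64_inc_return() makes the
	 * generation bump in inc_mm_tlb_gen() its own barrier.
	 */
	static uint64_t inc_tlb_gen(struct mm_ctx *mm)
	{
		return atomic_fetch_add(&mm->tlb_gen, 1) + 1;
	}

	/*
	 * Context switch side: the explicit fence keeps the is_lazy
	 * access from being reordered past the tlb_gen read, mirroring
	 * the smp_mb() between those two accesses in switch_mm_irqs_off().
	 */
	static bool needs_flush(struct cpu_state *cpu, struct mm_ctx *mm)
	{
		atomic_store_explicit(&cpu->is_lazy, false,
				      memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);
		return cpu->local_tlb_gen !=
		       atomic_load_explicit(&mm->tlb_gen,
					    memory_order_relaxed);
	}

[ Either the invalidation side sees the CPU as non-lazy and sends it an
  IPI, or the switching CPU sees the new tlb_gen and flushes locally; the
  full barriers on both sides close the window where neither happens. ]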
CPUs in lazy TLB mode remain part of mm_cpumask(mm), both because that
allows TLB flush IPIs to be sent at page table freeing time, and because
clearing and re-setting a CPU's bit causes cache line bouncing on
mm_cpumask(mm), which was responsible for about half the CPU use in
switch_mm_irqs_off().
We can change native_flush_tlb_others() without touching other
(paravirt) implementations of flush_tlb_others() because we'll be
flushing less. The existing implementations flush more and are
therefore still correct.
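
[ Editor's note, not part of the patch: a sketch of the sender-side policy
  this implies. The helpers ipi_cpus(), ipi_cpus_if() and cpu_is_not_lazy()
  are hypothetical stand-ins for smp_call_function_many() and
  on_each_cpu_cond_mask(), which the hunk below actually uses. ]

	#include <stdbool.h>

	struct cpumask;                 /* opaque for this sketch */
	struct flush_tlb_info { bool freed_tables; };

	/* Hypothetical helpers, declared only for illustration: */
	void ipi_cpus(const struct cpumask *mask,
		      void (*fn)(void *), void *info);
	void ipi_cpus_if(const struct cpumask *mask, bool (*pred)(int cpu),
			 void (*fn)(void *), void *info);
	bool cpu_is_not_lazy(int cpu);
	void flush_tlb_func_remote(void *info);

	static void flush_others_sketch(const struct cpumask *cpumask,
					struct flush_tlb_info *info)
	{
		if (info->freed_tables) {
			/*
			 * Page table pages are going away: a lazy CPU
			 * could still speculatively walk the old tables,
			 * so every CPU in the mask is interrupted now.
			 */
			ipi_cpus(cpumask, flush_tlb_func_remote, info);
		} else {
			/*
			 * Only PTE contents changed: lazy CPUs may defer,
			 * because switch_mm_irqs_off() will see the stale
			 * tlb_gen and flush before touching user memory.
			 */
			ipi_cpus_if(cpumask, cpu_is_not_lazy,
				    flush_tlb_func_remote, info);
		}
	}

[ A paravirt flush_tlb_others() that ignores is_lazy simply behaves as if
  it always took the first branch: it flushes more than needed but never
  less, which is the correctness argument above. ]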
Cc: npiggin@gmail.com
Cc: mingo@kernel.org
Cc: will.deacon@arm.com
Cc: kernel-team@fb.com
Cc: luto@kernel.org
Cc: hpa@zytor.com
Tested-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180926035844.1420-8-riel@surriel.com
---
arch/x86/mm/tlb.c | 67 +++++++++++++++++++++++++++++++++++++++++++++++--------
1 file changed, 58 insertions(+), 9 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 92e46f4c058c..7d68489cfdb1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -185,6 +185,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
{
struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+ bool was_lazy = this_cpu_read(cpu_tlbstate.is_lazy);
unsigned cpu = smp_processor_id();
u64 next_tlb_gen;
bool need_flush;
@@ -242,17 +243,40 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
next->context.ctx_id);
/*
- * We don't currently support having a real mm loaded without
- * our cpu set in mm_cpumask(). We have all the bookkeeping
- * in place to figure out whether we would need to flush
- * if our cpu were cleared in mm_cpumask(), but we don't
- * currently use it.
+ * Even in lazy TLB mode, the CPU should stay set in the
+ * mm_cpumask. The TLB shootdown code can figure out from
+ * cpu_tlbstate.is_lazy whether or not to send an IPI.
*/
if (WARN_ON_ONCE(real_prev != &init_mm &&
!cpumask_test_cpu(cpu, mm_cpumask(next))))
cpumask_set_cpu(cpu, mm_cpumask(next));
- return;
+ /*
+ * If the CPU is not in lazy TLB mode, we are just switching
+ * from one thread in a process to another thread in the same
+ * process. No TLB flush required.
+ */
+ if (!was_lazy)
+ return;
+
+ /*
+ * Read the tlb_gen to check whether a flush is needed.
+ * If the TLB is up to date, just use it.
+ * The barrier synchronizes with the tlb_gen increment in
+ * the TLB shootdown code.
+ */
+ smp_mb();
+ next_tlb_gen = atomic64_read(&next->context.tlb_gen);
+ if (this_cpu_read(cpu_tlbstate.ctxs[prev_asid].tlb_gen) ==
+ next_tlb_gen)
+ return;
+
+ /*
+ * TLB contents went out of date while we were in lazy
+ * mode. Fall through to the TLB switching code below.
+ */
+ new_asid = prev_asid;
+ need_flush = true;
} else {
u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id);
@@ -346,8 +370,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
this_cpu_write(cpu_tlbstate.loaded_mm, next);
this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
- load_mm_cr4(next);
- switch_ldt(real_prev, next);
+ if (next != real_prev) {
+ load_mm_cr4(next);
+ switch_ldt(real_prev, next);
+ }
}
/*
@@ -455,6 +481,9 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
* paging-structure cache to avoid speculatively reading
* garbage into our TLB. Since switching to init_mm is barely
* slower than a minimal flush, just switch to init_mm.
+ *
+ * This should be rare, with native_flush_tlb_others skipping
+ * IPIs to lazy TLB mode CPUs.
*/
switch_mm_irqs_off(NULL, &init_mm, NULL);
return;
@@ -557,6 +586,11 @@ static void flush_tlb_func_remote(void *info)
flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
}
+static bool tlb_is_not_lazy(int cpu, void *data)
+{
+ return !per_cpu(cpu_tlbstate.is_lazy, cpu);
+}
+
void native_flush_tlb_others(const struct cpumask *cpumask,
const struct flush_tlb_info *info)
{
@@ -592,8 +626,23 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
(void *)info, 1);
return;
}
- smp_call_function_many(cpumask, flush_tlb_func_remote,
+
+ /*
+ * If no page tables were freed, we can skip sending IPIs to
+ * CPUs in lazy TLB mode. They will flush the TLB themselves
+ * at the next context switch.
+ *
+ * However, if page tables are getting freed, we need to send the
+ * IPI everywhere, to prevent CPUs in lazy TLB mode from tripping
+ * up on the new contents of what used to be page tables, while
+ * doing a speculative memory access.
+ */
+ if (info->freed_tables)
+ smp_call_function_many(cpumask, flush_tlb_func_remote,
(void *)info, 1);
+ else
+ on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
+ (void *)info, 1, GFP_ATOMIC, cpumask);
}
/*
Thread overview: 27+ messages
2018-09-26 3:58 [PATCH v2 0/7] x86/mm/tlb: make lazy TLB mode even lazier Rik van Riel
2018-09-26 3:58 ` [PATCH 1/7] x86/mm/tlb: Always use lazy TLB mode Rik van Riel
2018-10-01 15:58 ` Peter Zijlstra
2018-10-01 16:07 ` Rik van Riel
2018-09-26 3:58 ` [PATCH 2/7] x86/mm/tlb: Restructure switch_mm_irqs_off() Rik van Riel
2018-10-02 7:32 ` Peter Zijlstra
2018-09-26 3:58 ` [PATCH 3/7] smp: use __cpumask_set_cpu in on_each_cpu_cond Rik van Riel
2018-10-09 14:59 ` [tip:x86/mm] " tip-bot for Rik van Riel
2018-09-26 3:58 ` [PATCH 4/7] smp,cpumask: introduce on_each_cpu_cond_mask Rik van Riel
2018-10-09 14:59 ` [tip:x86/mm] " tip-bot for Rik van Riel
2018-09-26 3:58 ` [PATCH 5/7] Add freed_tables argument to flush_tlb_mm_range Rik van Riel
2018-10-09 15:00 ` [tip:x86/mm] x86/mm/tlb: " tip-bot for Rik van Riel
2018-09-26 3:58 ` [PATCH 6/7] Add freed_tables element to flush_tlb_info Rik van Riel
2018-10-09 15:00 ` [tip:x86/mm] x86/mm/tlb: " tip-bot for Rik van Riel
2018-09-26 3:58 ` [PATCH 7/7] x86/mm/tlb: Make lazy TLB mode lazier Rik van Riel
2018-10-01 16:07 ` Peter Zijlstra
2018-10-09 15:01 ` tip-bot for Rik van Riel [this message]
2018-10-01 16:09 ` [PATCH v2 0/7] x86/mm/tlb: make lazy TLB mode even lazier Peter Zijlstra
2018-10-02 7:44 ` Peter Zijlstra
2018-10-02 13:41 ` Rik van Riel
-- strict thread matches above, loose matches on Subject: below --
2018-07-16 19:03 [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
2018-07-17 9:35 ` [tip:x86/mm] x86/mm/tlb: Make " tip-bot for Rik van Riel
2018-07-17 11:33 ` Peter Zijlstra
2018-07-18 15:33 ` Rik van Riel
2018-07-18 16:00 ` Peter Zijlstra
[not found] ` <081E558D-DB34-4A18-A35C-896BC47F6EBA@surriel.com>
2018-07-18 18:23 ` Peter Zijlstra
2018-07-18 18:51 ` Rik van Riel
2018-07-19 9:13 ` Peter Zijlstra