From: Rik van Riel <riel@surriel.com>
To: linux-kernel@vger.kernel.org
Cc: dave.hansen@linux.intel.com, luto@kernel.org,
peterz@infradead.org, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, x86@kernel.org, kernel-team@meta.com,
hpa@zytor.com, Rik van Riel <riel@surriel.com>,
Dave Hansen <dave.hansen@intel.com>
Subject: [PATCH 2/3] x86,tlb: add tracepoint for TLB flush IPI to stale CPU
Date: Fri, 8 Nov 2024 19:27:49 -0500
Message-ID: <20241109003727.3958374-3-riel@surriel.com>
In-Reply-To: <20241109003727.3958374-1-riel@surriel.com>
Add a tracepoint when we send a TLB flush IPI to a CPU that used
to be in the mm_cpumask, but isn't any more.
This can be used to evaluate whether there are any workloads where
we end up in this path problematically often. Hopefully they
don't exist.
Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/mm/tlb.c | 1 +
include/linux/mm_types.h | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index f19f6378cabf..9d0d34576928 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -760,6 +760,7 @@ static void flush_tlb_func(void *info)
/* Can only happen on remote CPUs */
if (f->mm && f->mm != loaded_mm) {
cpumask_clear_cpu(raw_smp_processor_id(), mm_cpumask(f->mm));
+ trace_tlb_flush(TLB_REMOTE_WRONG_CPU, 0);
return;
}
}
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e3bdf8e38bc..6b6f05404304 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1335,6 +1335,7 @@ enum tlb_flush_reason {
TLB_LOCAL_SHOOTDOWN,
TLB_LOCAL_MM_SHOOTDOWN,
TLB_REMOTE_SEND_IPI,
+ TLB_REMOTE_WRONG_CPU,
NR_TLB_FLUSH_REASONS,
};
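The new reason is reported through the existing `tlb:tlb_flush` tracepoint, whose printk layout is `pages:%ld reason:%s (%d)`; the call added above passes 0 for the pages argument. As a rough sketch of the evaluation the commit message describes, one could capture the event via tracefs and tally occurrences per reason. The tracefs commands below are shown as comments since they need root on a live system, and the sample log line, including the reason string and its `(5)` index, is illustrative rather than taken from this patch:

```shell
# On a live system (root required), one would enable the event and capture it:
#   echo 1 > /sys/kernel/tracing/events/tlb/tlb_flush/enable
#   cat /sys/kernel/tracing/trace_pipe > tlb.log
# Here we substitute an illustrative sample line in the tracepoint's
# printk layout so the tally logic below can be exercised anywhere.
cat > tlb.log <<'EOF'
  <idle>-0     [003] d.h.   100.000001: tlb_flush: pages:0 reason:remote wrong cpu (5)
EOF

# Tally flushes per reason; strip the trailing "(N)" numeric enum value.
summary=$(awk -F'reason:' '/tlb_flush:/ {
        sub(/ \([0-9]+\)[[:space:]]*$/, "", $2)
        count[$2]++
    }
    END { for (r in count) print count[r], r }' tlb.log)
echo "$summary"
rm -f tlb.log
```

A workload that prints a nonzero count for the stale-CPU reason would be the "problematically often" case the commit message hopes does not exist.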
--
2.45.2
Thread overview: 18+ messages
2024-11-09 0:27 [PATCh 0/3] x86,tlb: context switch optimizations Rik van Riel
2024-11-09 0:27 ` [PATCH 1/3] x86,tlb: update mm_cpumask lazily Rik van Riel
2024-11-13 2:59 ` [tip: x86/mm] x86/mm/tlb: Update " tip-bot2 for Rik van Riel
2024-11-09 0:27 ` Rik van Riel [this message]
2024-11-13 2:59 ` [tip: x86/mm] x86/mm/tlb: Add tracepoint for TLB flush IPI to stale CPU tip-bot2 for Rik van Riel
2024-11-09 0:27 ` [PATCH 3/3] x86,tlb: put cpumask_test_cpu in prev == next under CONFIG_DEBUG_VM Rik van Riel
2024-11-13 2:59 ` [tip: x86/mm] x86/mm/tlb: Put cpumask_test_cpu() check in switch_mm_irqs_off() " tip-bot2 for Rik van Riel
2024-11-13 9:55 ` [PATCh 0/3] x86,tlb: context switch optimizations Borislav Petkov
2024-11-13 10:00 ` Ingo Molnar
2024-11-13 14:38 ` Rik van Riel
2024-11-14 11:33 ` Peter Zijlstra
2024-11-13 14:55 ` Rik van Riel
2024-11-14 9:52 ` Ingo Molnar
2024-11-14 11:36 ` Peter Zijlstra
2024-11-14 14:27 ` Rik van Riel
2024-11-14 14:40 ` Peter Zijlstra
2024-11-14 11:36 ` Peter Zijlstra
2024-11-14 11:43 ` Peter Zijlstra