* [PATCH] x86/mm: stop prefetching the mmap semaphore on page faults
@ 2025-04-01 14:35 Mateusz Guzik
  2025-04-01 18:44 ` [tip: x86/mm] x86/mm: Stop prefetching current->mm->mmap_lock " tip-bot2 for Mateusz Guzik
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Mateusz Guzik @ 2025-04-01 14:35 UTC (permalink / raw)
  To: dave.hansen; +Cc: linux-mm, x86, linux-kernel, Mateusz Guzik

The prefetchw() call dates back decades, and the fundamental notion of doing
something like this on a lock is shady.

Moreover, for a few years now faults in the fast path have been handled with
RCU + per-vma locking, hopefully not even looking at the lock to begin with.

As such, just remove it.

I did not see a point in benchmarking this: the lock is not expected to be
looked at by default, which by itself justifies not doing the prefetch.

Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
---
 arch/x86/mm/fault.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 296d294142c8..697432f63c59 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -13,7 +13,6 @@
 #include <linux/mmiotrace.h>		/* kmmio_handler, ...		*/
 #include <linux/perf_event.h>		/* perf_sw_event		*/
 #include <linux/hugetlb.h>		/* hstate_index_to_shift	*/
-#include <linux/prefetch.h>		/* prefetchw			*/
 #include <linux/context_tracking.h>	/* exception_enter(), ...	*/
 #include <linux/uaccess.h>		/* faulthandler_disabled()	*/
 #include <linux/efi.h>			/* efi_crash_gracefully_on_page_fault()*/
@@ -1496,8 +1495,6 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 
 	address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
 
-	prefetchw(&current->mm->mmap_lock);
-
 	/*
 	 * KVM uses #PF vector to deliver 'page not present' events to guests
 	 * (asynchronous page fault mechanism). The event happens when a
-- 
2.43.0
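
For context, the fast path referred to in the changelog is the per-VMA locking
path in do_user_addr_fault(). Below is a heavily abridged sketch of its shape
(function and flag names as found in recent kernels; error handling, accounting
and the !FAULT_FLAG_USER case are omitted, so this is an illustration rather
than the exact upstream code). The point is that when lock_vma_under_rcu()
succeeds, mm->mmap_lock is never dereferenced, so prefetching it up front buys
nothing:

	vma = lock_vma_under_rcu(mm, address);	/* RCU walk, no mmap_lock */
	if (vma) {
		/* Per-VMA read lock held; service the fault without mmap_lock. */
		fault = handle_mm_fault(vma, address,
					flags | FAULT_FLAG_VMA_LOCK, regs);
		if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
			vma_end_read(vma);
		if (!(fault & VM_FAULT_RETRY))
			return;			/* fast path done, lock untouched */
	}

	/* Slow path: only here is the mmap semaphore actually taken. */
	mmap_read_lock(mm);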



* [tip: x86/mm] x86/mm: Stop prefetching current->mm->mmap_lock on page faults
  2025-04-01 14:35 [PATCH] x86/mm: stop prefetching the mmap semaphore on page faults Mateusz Guzik
@ 2025-04-01 18:44 ` tip-bot2 for Mateusz Guzik
  2025-04-01 20:53 ` tip-bot2 for Mateusz Guzik
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: tip-bot2 for Mateusz Guzik @ 2025-04-01 18:44 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Mateusz Guzik, Ingo Molnar, Andy Lutomirski, Peter Zijlstra,
	Rik van Riel, H. Peter Anvin, Linus Torvalds, x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     0f0a9bf602449c0114117a72eab4329c9a22176d
Gitweb:        https://git.kernel.org/tip/0f0a9bf602449c0114117a72eab4329c9a22176d
Author:        Mateusz Guzik <mjguzik@gmail.com>
AuthorDate:    Tue, 01 Apr 2025 16:35:20 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 01 Apr 2025 20:26:35 +02:00

x86/mm: Stop prefetching current->mm->mmap_lock on page faults

The prefetchw() call dates back decades, and the fundamental notion of doing
something like this on a lock is shady.

Moreover, for a few years now faults in the fast path have been handled with
RCU + per-vma locking, hopefully not even looking at the lock to begin with.

As such, just remove it.

I did not see a point in benchmarking this: the lock is not expected to be
looked at by default, which by itself justifies not doing the prefetch.

Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250401143520.1113572-1-mjguzik@gmail.com
---
 arch/x86/mm/fault.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 296d294..697432f 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -13,7 +13,6 @@
 #include <linux/mmiotrace.h>		/* kmmio_handler, ...		*/
 #include <linux/perf_event.h>		/* perf_sw_event		*/
 #include <linux/hugetlb.h>		/* hstate_index_to_shift	*/
-#include <linux/prefetch.h>		/* prefetchw			*/
 #include <linux/context_tracking.h>	/* exception_enter(), ...	*/
 #include <linux/uaccess.h>		/* faulthandler_disabled()	*/
 #include <linux/efi.h>			/* efi_crash_gracefully_on_page_fault()*/
@@ -1496,8 +1495,6 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 
 	address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
 
-	prefetchw(&current->mm->mmap_lock);
-
 	/*
 	 * KVM uses #PF vector to deliver 'page not present' events to guests
 	 * (asynchronous page fault mechanism). The event happens when a
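
As for the prefetchw() itself: on x86 it emits a write-intent prefetch (the
PREFETCHW instruction where available), and the generic fallback in
<linux/prefetch.h> is roughly the write form of __builtin_prefetch(). A minimal
standalone illustration of what the removed line amounted to -- note that
prefetchw_sketch and fake_rwsem are made-up names for this example, not kernel
API:

	struct fake_rwsem {
		long count;		/* stand-in for the mmap_lock word */
	};

	/* Roughly what prefetchw() boils down to: hint the CPU to pull the
	 * cache line into a write-ready (exclusive) state. */
	static inline void prefetchw_sketch(const void *addr)
	{
		__builtin_prefetch(addr, 1 /* write */, 3 /* high locality */);
	}

	int main(void)
	{
		struct fake_rwsem sem = { 0 };

		/* The removed call did essentially this for
		 * &current->mm->mmap_lock -- of little value when the lock
		 * is not expected to be taken in the first place. */
		prefetchw_sketch(&sem.count);
		return 0;
	}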


* [tip: x86/mm] x86/mm: Stop prefetching current->mm->mmap_lock on page faults
  2025-04-01 14:35 [PATCH] x86/mm: stop prefetching the mmap semaphore on page faults Mateusz Guzik
  2025-04-01 18:44 ` [tip: x86/mm] x86/mm: Stop prefetching current->mm->mmap_lock " tip-bot2 for Mateusz Guzik
@ 2025-04-01 20:53 ` tip-bot2 for Mateusz Guzik
  2025-04-01 21:03 ` tip-bot2 for Mateusz Guzik
  2025-04-09  9:25 ` [PATCH] x86/mm: stop prefetching the mmap semaphore " David Hildenbrand
  3 siblings, 0 replies; 5+ messages in thread
From: tip-bot2 for Mateusz Guzik @ 2025-04-01 20:53 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Mateusz Guzik, Ingo Molnar, Andy Lutomirski, Peter Zijlstra,
	Rik van Riel, H. Peter Anvin, Linus Torvalds, x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     0b2695d58e800ad53e718d003310829db492a39c
Gitweb:        https://git.kernel.org/tip/0b2695d58e800ad53e718d003310829db492a39c
Author:        Mateusz Guzik <mjguzik@gmail.com>
AuthorDate:    Tue, 01 Apr 2025 16:35:20 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 01 Apr 2025 22:47:02 +02:00

x86/mm: Stop prefetching current->mm->mmap_lock on page faults

The prefetchw() call dates back decades, and the fundamental notion of doing
something like this on a lock is shady.

Moreover, for a few years now faults in the fast path have been handled with
RCU + per-vma locking, hopefully not even looking at the lock to begin with.

As such, just remove it.

I did not see a point in benchmarking this: the lock is not expected to be
looked at by default, which by itself justifies not doing the prefetch.

Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250401143520.1113572-1-mjguzik@gmail.com
---
 arch/x86/mm/fault.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 296d294..697432f 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -13,7 +13,6 @@
 #include <linux/mmiotrace.h>		/* kmmio_handler, ...		*/
 #include <linux/perf_event.h>		/* perf_sw_event		*/
 #include <linux/hugetlb.h>		/* hstate_index_to_shift	*/
-#include <linux/prefetch.h>		/* prefetchw			*/
 #include <linux/context_tracking.h>	/* exception_enter(), ...	*/
 #include <linux/uaccess.h>		/* faulthandler_disabled()	*/
 #include <linux/efi.h>			/* efi_crash_gracefully_on_page_fault()*/
@@ -1496,8 +1495,6 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 
 	address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
 
-	prefetchw(&current->mm->mmap_lock);
-
 	/*
 	 * KVM uses #PF vector to deliver 'page not present' events to guests
 	 * (asynchronous page fault mechanism). The event happens when a


* [tip: x86/mm] x86/mm: Stop prefetching current->mm->mmap_lock on page faults
  2025-04-01 14:35 [PATCH] x86/mm: stop prefetching the mmap semaphore on page faults Mateusz Guzik
  2025-04-01 18:44 ` [tip: x86/mm] x86/mm: Stop prefetching current->mm->mmap_lock " tip-bot2 for Mateusz Guzik
  2025-04-01 20:53 ` tip-bot2 for Mateusz Guzik
@ 2025-04-01 21:03 ` tip-bot2 for Mateusz Guzik
  2025-04-09  9:25 ` [PATCH] x86/mm: stop prefetching the mmap semaphore " David Hildenbrand
  3 siblings, 0 replies; 5+ messages in thread
From: tip-bot2 for Mateusz Guzik @ 2025-04-01 21:03 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Mateusz Guzik, Ingo Molnar, Andy Lutomirski, Peter Zijlstra,
	Rik van Riel, H. Peter Anvin, Linus Torvalds, x86, linux-kernel

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     1701771d3069fbee154ca48e882e227fdcfbb583
Gitweb:        https://git.kernel.org/tip/1701771d3069fbee154ca48e882e227fdcfbb583
Author:        Mateusz Guzik <mjguzik@gmail.com>
AuthorDate:    Tue, 01 Apr 2025 16:35:20 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 01 Apr 2025 22:48:56 +02:00

x86/mm: Stop prefetching current->mm->mmap_lock on page faults

The prefetchw() call dates back decades, and the fundamental notion of doing
something like this on a lock is shady.

Moreover, for a few years now faults in the fast path have been handled with
RCU + per-vma locking, hopefully not even looking at the lock to begin with.

As such, just remove it.

I did not see a point in benchmarking this: the lock is not expected to be
looked at by default, which by itself justifies not doing the prefetch.

Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250401143520.1113572-1-mjguzik@gmail.com
---
 arch/x86/mm/fault.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 296d294..697432f 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -13,7 +13,6 @@
 #include <linux/mmiotrace.h>		/* kmmio_handler, ...		*/
 #include <linux/perf_event.h>		/* perf_sw_event		*/
 #include <linux/hugetlb.h>		/* hstate_index_to_shift	*/
-#include <linux/prefetch.h>		/* prefetchw			*/
 #include <linux/context_tracking.h>	/* exception_enter(), ...	*/
 #include <linux/uaccess.h>		/* faulthandler_disabled()	*/
 #include <linux/efi.h>			/* efi_crash_gracefully_on_page_fault()*/
@@ -1496,8 +1495,6 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 
 	address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
 
-	prefetchw(&current->mm->mmap_lock);
-
 	/*
 	 * KVM uses #PF vector to deliver 'page not present' events to guests
 	 * (asynchronous page fault mechanism). The event happens when a


* Re: [PATCH] x86/mm: stop prefetching the mmap semaphore on page faults
  2025-04-01 14:35 [PATCH] x86/mm: stop prefetching the mmap semaphore on page faults Mateusz Guzik
                   ` (2 preceding siblings ...)
  2025-04-01 21:03 ` tip-bot2 for Mateusz Guzik
@ 2025-04-09  9:25 ` David Hildenbrand
  3 siblings, 0 replies; 5+ messages in thread
From: David Hildenbrand @ 2025-04-09  9:25 UTC (permalink / raw)
  To: Mateusz Guzik, dave.hansen; +Cc: linux-mm, x86, linux-kernel

On 01.04.25 16:35, Mateusz Guzik wrote:
> The prefetchw() call dates back decades, and the fundamental notion of doing
> something like this on a lock is shady.
> 
> Moreover, for a few years now faults in the fast path have been handled with
> RCU + per-vma locking, hopefully not even looking at the lock to begin with.
> 
> As such, just remove it.
> 
> I did not see a point in benchmarking this: the lock is not expected to be
> looked at by default, which by itself justifies not doing the prefetch.
> 
> Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
> ---
>   arch/x86/mm/fault.c | 3 ---
>   1 file changed, 3 deletions(-)
> 
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 296d294142c8..697432f63c59 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -13,7 +13,6 @@
>   #include <linux/mmiotrace.h>		/* kmmio_handler, ...		*/
>   #include <linux/perf_event.h>		/* perf_sw_event		*/
>   #include <linux/hugetlb.h>		/* hstate_index_to_shift	*/
> -#include <linux/prefetch.h>		/* prefetchw			*/
>   #include <linux/context_tracking.h>	/* exception_enter(), ...	*/
>   #include <linux/uaccess.h>		/* faulthandler_disabled()	*/
>   #include <linux/efi.h>			/* efi_crash_gracefully_on_page_fault()*/
> @@ -1496,8 +1495,6 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
>   
>   	address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
>   
> -	prefetchw(&current->mm->mmap_lock);
> -
>   	/*
>   	 * KVM uses #PF vector to deliver 'page not present' events to guests
>   	 * (asynchronous page fault mechanism). The event happens when a

I'm sure that if this had any value, we'd get notified about it :)

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



end of thread

Thread overview: 5+ messages
2025-04-01 14:35 [PATCH] x86/mm: stop prefetching the mmap semaphore on page faults Mateusz Guzik
2025-04-01 18:44 ` [tip: x86/mm] x86/mm: Stop prefetching current->mm->mmap_lock " tip-bot2 for Mateusz Guzik
2025-04-01 20:53 ` tip-bot2 for Mateusz Guzik
2025-04-01 21:03 ` tip-bot2 for Mateusz Guzik
2025-04-09  9:25 ` [PATCH] x86/mm: stop prefetching the mmap semaphore " David Hildenbrand
