From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org,
	liam.howlett@oracle.com,  lorenzo.stoakes@oracle.com,
	mhocko@suse.com, vbabka@suse.cz,  hannes@cmpxchg.org,
	mjguzik@gmail.com, oliver.sang@intel.com,
	 mgorman@techsingularity.net, david@redhat.com,
	peterx@redhat.com,  oleg@redhat.com, dave@stgolabs.net,
	paulmck@kernel.org, brauner@kernel.org,  dhowells@redhat.com,
	hdanton@sina.com, hughd@google.com,  lokeshgidra@google.com,
	minchan@google.com, jannh@google.com,  shakeel.butt@linux.dev,
	souravpanda@google.com, pasha.tatashin@soleen.com,
	 klarasmodin@gmail.com, corbet@lwn.net,
	linux-doc@vger.kernel.org,  linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com,
	 surenb@google.com
Subject: [PATCH v7 10/17] mm: uninline the main body of vma_start_write()
Date: Thu, 26 Dec 2024 09:07:02 -0800
Message-ID: <20241226170710.1159679-11-surenb@google.com>
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>

vma_start_write() is used in many places and will grow in size very soon.
It is not used in performance-critical paths, so uninlining it should
limit future code-size growth.
No functional changes.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mm.h | 12 +++---------
 mm/memory.c        | 14 ++++++++++++++
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ab27de9729d8..ea4c4228b125 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -787,6 +787,8 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_l
 	return (vma->vm_lock_seq == *mm_lock_seq);
 }
 
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq);
+
 /*
  * Begin writing to a VMA.
  * Exclude concurrent readers under the per-VMA lock until the currently
@@ -799,15 +801,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock.lock);
-	/*
-	 * We should use WRITE_ONCE() here because we can have concurrent reads
-	 * from the early lockless pessimistic check in vma_start_read().
-	 * We don't really care about the correctness of that early check, but
-	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
-	 */
-	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock.lock);
+	__vma_start_write(vma, mm_lock_seq);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
diff --git a/mm/memory.c b/mm/memory.c
index d0dee2282325..236fdecd44d6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6328,6 +6328,20 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
 #endif
 
 #ifdef CONFIG_PER_VMA_LOCK
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
+{
+	down_write(&vma->vm_lock.lock);
+	/*
+	 * We should use WRITE_ONCE() here because we can have concurrent reads
+	 * from the early lockless pessimistic check in vma_start_read().
+	 * We don't really care about the correctness of that early check, but
+	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
+	 */
+	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
+	up_write(&vma->vm_lock.lock);
+}
+EXPORT_SYMBOL_GPL(__vma_start_write);
+
 /*
  * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
  * stable and not isolated. If the VMA is not found or is being modified the
-- 
2.47.1.613.gc27f4b7a9f-goog


