From: Rik van Riel <riel@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Lee Schermerhorn <lee.schermerhorn@hp.com>
Subject: [patch 16/19] mlock vma pages under mmap_sem held for read
Date: Tue, 08 Jan 2008 15:59:55 -0500
Message-ID: <20080108210015.042432392@redhat.com>
In-Reply-To: 20080108205939.323955454@redhat.com
[-- Attachment #1: noreclaim-04.1a-lock-vma-pages-under-read-lock.patch --]
[-- Type: text/plain, Size: 6906 bytes --]
V2 -> V3:
+ rebase to 23-mm1 atop RvR's split lru series [no change]
+ fix function return types [void -> int] to fix build when
not configured.
New in V2.
We need to hold the mmap_sem for write to initiate mlock()/munlock()
because we may need to merge/split vmas. However, this can lead to
very long lock hold times when attempting to fault in a large memory
region to mlock it into memory. This can hold off other faults against
the mm [multithreaded tasks] and other scans of the mm, such as via /proc.
To alleviate this, downgrade the mmap_sem to read mode during the
population of the region for locking. This is especially the case
if we need to reclaim memory to lock down the region. We [probably?]
don't need to do this for unlocking as all of the pages should be
resident--they're already mlocked.
Now, the callers of the mlock functions [mlock_fixup() and
mlock_vma_pages_range()] expect the mmap_sem to be returned in write
mode. Changing all callers appears to be way too much effort at this
point. So, restore write mode before returning. Note that this opens
a window where the mmap list could change in a multithreaded process.
So, at least for mlock_fixup(), where we could be called in a loop over
multiple vmas, we check that a vma still exists at the start address
and that vma still covers the page range [start,end). If not, we return
an error, -EAGAIN, and let the caller deal with it.
Return -EAGAIN from mlock_vma_pages_range() and mlock_fixup()
if the vma at 'start' disappears or changes so that the page range
[start,end) is no longer contained in the vma. Again, let the caller
deal with it. Looks like only sys_remap_file_pages() [via mmap_region()]
should actually care.
With this patch, I no longer see processes like ps(1) blocked for seconds
or minutes at a time waiting for a large [multiple gigabyte] region to be
locked down.
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Index: linux-2.6.24-rc6-mm1/mm/mlock.c
===================================================================
--- linux-2.6.24-rc6-mm1.orig/mm/mlock.c 2008-01-02 14:59:18.000000000 -0500
+++ linux-2.6.24-rc6-mm1/mm/mlock.c 2008-01-02 15:06:32.000000000 -0500
@@ -200,6 +200,37 @@ int __mlock_vma_pages_range(struct vm_ar
return ret;
}
+/**
+ * mlock_vma_pages_range
+ * @vma - vm area to mlock into memory
+ * @start - start address in @vma of range to mlock,
+ * @end - end address in @vma of range
+ *
+ * Called with current->mm->mmap_sem held write locked. Downgrade to read
+ * for faulting in pages. This can take a looong time for large segments.
+ *
+ * We need to restore the mmap_sem to write locked because our callers'
+ * callers expect this. However, because the mmap could have changed
+ * [in a multi-threaded process], we need to recheck.
+ */
+int mlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ struct mm_struct *mm = vma->vm_mm;
+
+ downgrade_write(&mm->mmap_sem);
+ __mlock_vma_pages_range(vma, start, end, 1);
+
+ up_read(&mm->mmap_sem);
+ /* vma can change or disappear */
+ down_write(&mm->mmap_sem);
+ vma = find_vma(mm, start);
+ /* non-NULL vma must contain @start, but need to check @end */
+ if (!vma || end > vma->vm_end)
+ return -EAGAIN;
+ return 0;
+}
+
#else /* CONFIG_NORECLAIM_MLOCK */
/*
@@ -266,14 +297,38 @@ success:
mm->locked_vm += nr_pages;
/*
- * vm_flags is protected by the mmap_sem held in write mode.
+ * vm_flags is protected by the mmap_sem held for write.
* It's okay if try_to_unmap_one unmaps a page just after we
* set VM_LOCKED, __mlock_vma_pages_range will bring it back.
*/
vma->vm_flags = newflags;
+ /*
+ * mmap_sem is currently held for write. If we're locking pages,
+ * downgrade the write lock to a read lock so that other faults,
+ * mmap scans, etc. can proceed while we fault in all pages.
+ */
+ if (lock)
+ downgrade_write(&mm->mmap_sem);
+
__mlock_vma_pages_range(vma, start, end, lock);
+ if (lock) {
+ /*
+ * Need to reacquire mmap sem in write mode, as our callers
+ * expect this. We have no support for atomically upgrading
+ * a sem to write, so we need to check for changes while sem
+ * is unlocked.
+ */
+ up_read(&mm->mmap_sem);
+ /* vma can change or disappear */
+ down_write(&mm->mmap_sem);
+ *prev = find_vma(mm, start);
+ /* non-NULL *prev must contain @start, but need to check @end */
+ if (!(*prev) || end > (*prev)->vm_end)
+ ret = -EAGAIN;
+ }
+
out:
if (ret == -ENOMEM)
ret = -EAGAIN;
Index: linux-2.6.24-rc6-mm1/mm/internal.h
===================================================================
--- linux-2.6.24-rc6-mm1.orig/mm/internal.h 2008-01-02 14:58:22.000000000 -0500
+++ linux-2.6.24-rc6-mm1/mm/internal.h 2008-01-02 15:07:37.000000000 -0500
@@ -61,24 +61,21 @@ extern int __mlock_vma_pages_range(struc
/*
* mlock all pages in this vma range. For mmap()/mremap()/...
*/
-static inline void mlock_vma_pages_range(struct vm_area_struct *vma,
- unsigned long start, unsigned long end)
-{
- __mlock_vma_pages_range(vma, start, end, 1);
-}
+extern int mlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end);
/*
* munlock range of pages. For munmap() and exit().
* Always called to operate on a full vma that is being unmapped.
*/
-static inline void munlock_vma_pages_range(struct vm_area_struct *vma,
+static inline int munlock_vma_pages_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
VM_BUG_ON(start != vma->vm_start || end != vma->vm_end);
vma->vm_flags &= ~VM_LOCKED; /* try_to_unlock() needs this */
- __mlock_vma_pages_range(vma, start, end, 0);
+ return __mlock_vma_pages_range(vma, start, end, 0);
}
extern void clear_page_mlock(struct page *page);
@@ -90,10 +87,10 @@ static inline int is_mlocked_vma(struct
}
static inline void clear_page_mlock(struct page *page) { }
static inline void mlock_vma_page(struct page *page) { }
-static inline void mlock_vma_pages_range(struct vm_area_struct *vma,
- unsigned long start, unsigned long end) { }
-static inline void munlock_vma_pages_range(struct vm_area_struct *vma,
- unsigned long start, unsigned long end) { }
+static inline int mlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end) { return 0; }
+static inline int munlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end) { return 0; }
#endif /* CONFIG_NORECLAIM_MLOCK */
--
All Rights Reversed