* [PATCH 0/9] mmap read-around and readahead take 2
From: Fengguang Wu @ 2007-12-22  1:31 UTC
To: Andrew Morton
Cc: Linus Torvalds, Nick Piggin, linux-kernel

Andrew,

Here are the mmap read-around related patches initiated by Linus.
They are for linux-2.6.24-rc5-mm1.

They're mainly code cleanups. The only major new feature - auto detection
and early readahead for mmap sequential reads - shows about 2% speedup in
the single-stream case, and should perform much better with multiple
streams.

This take simplifies patch 2, from

 mm/filemap.c |  192 +++++++++++++++++++++++++++++++++++++++-------------------
 1 files changed, 130 insertions(+), 62 deletions(-)

to

 mm/filemap.c |  156 +++++++++++++++++++++++++++----------------------
 1 file changed, 89 insertions(+), 67 deletions(-)

[PATCH 1/9] readahead: simplify readahead call scheme
[PATCH 2/9] readahead: clean up and simplify the code for filemap page fault readahead
[PATCH 3/9] readahead: auto detection of sequential mmap reads
[PATCH 4/9] readahead: quick startup on sequential mmap readahead
[PATCH 5/9] readahead: make ra_submit() non-static
[PATCH 6/9] readahead: save mmap read-around states in file_ra_state
[PATCH 7/9] readahead: remove unused do_page_cache_readahead()
[PATCH 8/9] readahead: move max_sane_readahead() calls into force_page_cache_readahead()
[PATCH 9/9] readahead: call max_sane_readahead() in ondemand_readahead()

Thank you,
Fengguang
* [PATCH 1/9] readahead: simplify readahead call scheme
From: Fengguang Wu @ 2007-12-22  1:31 UTC
To: Andrew Morton
Cc: Linus Torvalds, Nick Piggin, linux-kernel

[-- Attachment #1: readahead-merge-sync-async.patch --]

It is unwieldy and error-prone to insist that call sites check for async
readahead after doing a sync one. I.e. whenever someone does a sync
readahead:

	if (!page)
		page_cache_sync_readahead(...);

they must try async readahead, too:

	page = find_get_page(...);
	if (PageReadahead(page))
		page_cache_async_readahead(...);

The tricky point is that PG_readahead could be set by a sync readahead for
the _current_ newly faulted-in page, and the readahead code simply expects
one more callback to handle it. If the caller fails to do so, it will miss
the PG_readahead bits and never be able to start an async readahead.

Avoid this by piggy-backing the async part _inside_ the readahead code.
Now if an async readahead should be started immediately after a sync one,
the readahead logic itself will do it. So the following code becomes
valid:

	if (!page)
		page_cache_sync_readahead(...);
	else if (PageReadahead(page))
		page_cache_async_readahead(...);

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 mm/readahead.c |    8 ++++++++
 1 file changed, 8 insertions(+)

--- linux-2.6.24-rc5-mm1.orig/mm/readahead.c
+++ linux-2.6.24-rc5-mm1/mm/readahead.c
@@ -402,6 +402,14 @@ ondemand_readahead(struct address_space
 	ra->async_size = ra->size > req_size ?
 			 ra->size - req_size : ra->size;

 readit:
+	/*
+	 * An async readahead should be triggered immediately.
+	 * Instead of demanding that all call sites check for async readahead
+	 * immediately after a sync one, start the async part here and now.
+	 */
+	if (!hit_readahead_marker && ra->size == ra->async_size)
+		ra->size *= 2;
+
 	return ra_submit(ra, mapping, filp);
 }
* [PATCH 2/9] readahead: clean up and simplify the code for filemap page fault readahead
From: Fengguang Wu @ 2007-12-22  1:31 UTC
To: Andrew Morton
Cc: Linus Torvalds, Nick Piggin, Andrew Morton, linux-kernel

[-- Attachment #1: readahead-standalone-mmap-readaround.patch --]

From: Linus Torvalds <torvalds@linux-foundation.org>

This shouldn't really change behavior all that much, but the single rather
complex function with read-ahead inside a loop etc is broken up into more
manageable pieces. The behaviour is also less subtle, with the read-ahead
being done up-front rather than inside some subtle loop, thus avoiding the
now unnecessary extra state variables (i.e. "did_readaround" is gone).

Cc: Nick Piggin <npiggin@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Fengguang Wu <wfg@mail.ustc.edu.cn>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---

Ok, so this is something I did in Mexico when I wasn't scuba-diving and
was "watching" the kids at the pool.

It was brought on by looking at git mmap file behaviour under cold-cache
conditions: git does ok, but my laptop disk is really slow, and I tried to
verify that the kernel did a reasonable job of read-ahead when taking page
faults. I think it did, but quite frankly, the filemap_fault() code was
totally unreadable.

So this separates out the read-ahead cases, adds more comments, and also
changes it so that we do asynchronous read-ahead *before* we actually wait
for the page we are waiting for to become unlocked. Not that it seems to
make any real difference on my laptop, but I really hated how it was doing
a

	page = get_lock_page(..)

and then doing read-ahead after that: which just guarantees that we have
to wait for any outstanding IO on "page" to complete before we can even
submit any new read-ahead! That just seems totally broken!

So it replaces the "get_lock_page()" at the top with a broken-out page
cache lookup, which allows us to look at the page state flags and make
appropriate decisions on what we should do without waiting for the locked
bit to clear.

It does add many more lines than it removes:

 mm/filemap.c |  192 +++++++++++++++++++++++++++++++++++++++-------------------
 1 files changed, 130 insertions(+), 62 deletions(-)

but that's largely due to (a) the new function headers etc due to the
split-up and (b) new or extended comments, especially about the helper
functions. The code, in many ways, is actually simpler, apart from the
fairly trivial expansion of the equivalent of "get_lock_page()" into the
function.

Comments? I tried to avoid changing the read-ahead logic itself, although
the old code did some strange things like doing *both* async readahead and
then looking up the page and doing sync readahead (which I think was just
due to the code being so damn messily organized, not on purpose).

		Linus

---
 mm/filemap.c |  156 +++++++++++++++++++++++++++----------------------
 1 file changed, 89 insertions(+), 67 deletions(-)

--- linux-2.6.24-rc5-mm1.orig/mm/filemap.c
+++ linux-2.6.24-rc5-mm1/mm/filemap.c
@@ -1302,6 +1302,68 @@ static int fastcall page_cache_read(stru

 #define MMAP_LOTSAMISS  (100)

+/*
+ * Synchronous readahead happens when we don't even find
+ * a page in the page cache at all.
+ */
+static void do_sync_mmap_readahead(struct vm_area_struct *vma,
+				   struct file_ra_state *ra,
+				   struct file *file,
+				   pgoff_t offset)
+{
+	unsigned long ra_pages;
+	struct address_space *mapping = file->f_mapping;
+
+	/* If we don't want any read-ahead, don't bother */
+	if (VM_RandomReadHint(vma))
+		return;
+
+	if (VM_SequentialReadHint(vma)) {
+		page_cache_sync_readahead(mapping, ra, file, offset, 1);
+		return;
+	}
+
+	if (ra->mmap_miss < INT_MAX)
+		ra->mmap_miss++;
+
+	/*
+	 * Do we miss much more than hit in this file? If so,
+	 * stop bothering with read-ahead. It will only hurt.
+	 */
+	if (ra->mmap_miss > MMAP_LOTSAMISS)
+		return;
+
+	ra_pages = max_sane_readahead(ra->ra_pages);
+	if (ra_pages) {
+		pgoff_t start = 0;
+
+		if (offset > ra_pages / 2)
+			start = offset - ra_pages / 2;
+		do_page_cache_readahead(mapping, file, start, ra_pages);
+	}
+}
+
+/*
+ * Asynchronous readahead happens when we find the page and PG_readahead,
+ * so we want to possibly extend the readahead further..
+ */
+static void do_async_mmap_readahead(struct vm_area_struct *vma,
+				    struct file_ra_state *ra,
+				    struct file *file,
+				    struct page *page,
+				    pgoff_t offset)
+{
+	struct address_space *mapping = file->f_mapping;
+
+	/* If we don't want any read-ahead, don't bother */
+	if (VM_RandomReadHint(vma))
+		return;
+	if (ra->mmap_miss > 0)
+		ra->mmap_miss--;
+	if (PageReadahead(page))
+		page_cache_async_readahead(mapping, ra, file, page, offset, 1);
+}
+
 /**
  * filemap_fault - read in file data for page fault handling
  * @vma:	vma in which the fault was taken
@@ -1321,78 +1383,44 @@ int filemap_fault(struct vm_area_struct
 	struct address_space *mapping = file->f_mapping;
 	struct file_ra_state *ra = &file->f_ra;
 	struct inode *inode = mapping->host;
+	pgoff_t offset = vmf->pgoff;
 	struct page *page;
 	unsigned long size;
-	int did_readaround = 0;
 	int ret = 0;

 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (vmf->pgoff >= size)
+	if (offset >= size)
 		return VM_FAULT_SIGBUS;

-	/* If we don't want any read-ahead, don't bother */
-	if (VM_RandomReadHint(vma))
-		goto no_cached_page;
-
 	/*
 	 * Do we have something in the page cache already?
 	 */
-retry_find:
-	page = find_lock_page(mapping, vmf->pgoff);
-	/*
-	 * For sequential accesses, we use the generic readahead logic.
-	 */
-	if (VM_SequentialReadHint(vma)) {
-		if (!page) {
-			page_cache_sync_readahead(mapping, ra, file,
-							vmf->pgoff, 1);
-			page = find_lock_page(mapping, vmf->pgoff);
-			if (!page)
-				goto no_cached_page;
-		}
-		if (PageReadahead(page)) {
-			page_cache_async_readahead(mapping, ra, file, page,
-							vmf->pgoff, 1);
-		}
-	}
-
-	if (!page) {
-		unsigned long ra_pages;
-
-		ra->mmap_miss++;
-
+	page = find_get_page(mapping, offset);
+	if (likely(page)) {
 		/*
-		 * Do we miss much more than hit in this file? If so,
-		 * stop bothering with read-ahead. It will only hurt.
+		 * We found the page, so try async readahead before
+		 * waiting for the lock.
 		 */
-		if (ra->mmap_miss > MMAP_LOTSAMISS)
-			goto no_cached_page;
+		do_async_mmap_readahead(vma, ra, file, page, offset);
+		lock_page(page);

-		/*
-		 * To keep the pgmajfault counter straight, we need to
-		 * check did_readaround, as this is an inner loop.
-		 */
-		if (!did_readaround) {
-			ret = VM_FAULT_MAJOR;
-			count_vm_event(PGMAJFAULT);
-		}
-		did_readaround = 1;
-		ra_pages = max_sane_readahead(file->f_ra.ra_pages);
-		if (ra_pages) {
-			pgoff_t start = 0;
-
-			if (vmf->pgoff > ra_pages / 2)
-				start = vmf->pgoff - ra_pages / 2;
-			do_page_cache_readahead(mapping, file, start, ra_pages);
+		/* Did it get truncated? */
+		if (unlikely(page->mapping != mapping)) {
+			unlock_page(page);
+			put_page(page);
+			goto no_cached_page;
 		}
-		page = find_lock_page(mapping, vmf->pgoff);
+	} else {
+		/* No page in the page cache at all */
+		do_sync_mmap_readahead(vma, ra, file, offset);
+		ret = VM_FAULT_MAJOR;
+		count_vm_event(PGMAJFAULT);
+retry_find:
+		page = find_lock_page(mapping, offset);
 		if (!page)
 			goto no_cached_page;
 	}

-	if (!did_readaround)
-		ra->mmap_miss--;
-
 	/*
 	 * We have a locked page in the page cache, now we need to check
 	 * that it's up-to-date. If not, it is going to be due to an error.
@@ -1400,19 +1428,19 @@ retry_find:
 	if (unlikely(!PageUptodate(page)))
 		goto page_not_uptodate;

-	/* Must recheck i_size under page lock */
+	/*
+	 * Found the page and have a reference on it.
+	 * We must recheck i_size under page lock.
+	 */
 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (unlikely(vmf->pgoff >= size)) {
+	if (unlikely(offset >= size)) {
 		unlock_page(page);
 		page_cache_release(page);
 		return VM_FAULT_SIGBUS;
 	}

-	/*
-	 * Found the page and have a reference on it.
-	 */
 	mark_page_accessed(page);
-	ra->prev_pos = (loff_t)page->index << PAGE_CACHE_SHIFT;
+	ra->prev_pos = (loff_t)offset << PAGE_CACHE_SHIFT;
 	vmf->page = page;
 	return ret | VM_FAULT_LOCKED;

@@ -1421,7 +1449,7 @@ no_cached_page:
 	 * We're only likely to ever get here if MADV_RANDOM is in
 	 * effect.
 	 */
-	error = page_cache_read(file, vmf->pgoff);
+	error = page_cache_read(file, offset);

 	/*
 	 * The page we want has now been added to the page cache.
@@ -1441,12 +1469,6 @@ no_cached_page:
 	return VM_FAULT_SIGBUS;

 page_not_uptodate:
-	/* IO error path */
-	if (!did_readaround) {
-		ret = VM_FAULT_MAJOR;
-		count_vm_event(PGMAJFAULT);
-	}
-
 	/*
 	 * Umm, take care of errors if the page isn't up-to-date.
 	 * Try to re-read it _once_. We do this synchronously,
* [PATCH 3/9] readahead: auto detection of sequential mmap reads
From: Fengguang Wu @ 2007-12-22  1:31 UTC
To: Andrew Morton
Cc: Linus Torvalds, Nick Piggin, linux-kernel

[-- Attachment #1: readahead-auto-detect-mmap-sequential-reads.patch --]

Auto-detect sequential mmap reads and do sync/async readahead for them.

Sequential mmap readahead will be triggered when:
- sync readahead: it's a major fault and (prev_offset == offset - 1);
- async readahead: minor fault on a PG_readahead page with valid readahead
  state.

It's a bit conservative to require valid readahead state for async
readahead, which means we don't do readahead for interleaved reads for
now; but let's make it safe for this initial try.

The benefits of doing readahead instead of read-around:
- less I/O wait, thanks to async readahead
- double the real I/O size, and no cache-hit overhead

Some numbers on 100,000 sequential mmap reads:

                                     user    system    cpu      total
(1-1) plain -mm, 128KB readaround:   3.224   2.554    48.40%   11.838
(1-2) plain -mm, 256KB readaround:   3.170   2.392    46.20%   11.976
(2)   patched -mm, 128KB readahead:  3.117   2.448    47.33%   11.607

The patched kernel (2) has the smallest total time: it has no cache-hit
overhead and less I/O block time (thanks to async readahead). The I/O size
makes little difference here, since there's only one stream. Note that
(1-1)'s real I/O size is 64KB and (1-2)'s is 128KB, since half of the
read-around pages will be cache hits.

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 mm/filemap.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- linux-2.6.24-rc5-mm1.orig/mm/filemap.c
+++ linux-2.6.24-rc5-mm1/mm/filemap.c
@@ -1318,7 +1318,8 @@ static void do_sync_mmap_readahead(struc
 	if (VM_RandomReadHint(vma))
 		return;

-	if (VM_SequentialReadHint(vma)) {
+	if (VM_SequentialReadHint(vma) ||
+	    offset - 1 == (ra->prev_pos >> PAGE_CACHE_SHIFT)) {
 		page_cache_sync_readahead(mapping, ra, file, offset, 1);
 		return;
 	}
@@ -1360,7 +1361,8 @@ static void do_async_mmap_readahead(stru
 		return;
 	if (ra->mmap_miss > 0)
 		ra->mmap_miss--;
-	if (PageReadahead(page))
+	if (PageReadahead(page) &&
+	    offset == ra->start + ra->size - ra->async_size)
 		page_cache_async_readahead(mapping, ra, file, page, offset, 1);
 }
* [PATCH 4/9] readahead: quick startup on sequential mmap readahead
From: Fengguang Wu @ 2007-12-22  1:31 UTC
To: Andrew Morton
Cc: Linus Torvalds, Nick Piggin, linux-kernel

[-- Attachment #1: readahead-sequential-quick-start.patch --]

When the user explicitly sets MADV_SEQUENTIAL, we should really avoid the
slow readahead size ramp-up phase and start full-size readahead
immediately.

This patch won't change behavior for auto-detected sequential mmap reads:
their previous read-around size is ra_pages/2, so it will be doubled to
the full readahead size anyway.

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 mm/filemap.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- linux-2.6.24-rc5-mm1.orig/mm/filemap.c
+++ linux-2.6.24-rc5-mm1/mm/filemap.c
@@ -1320,7 +1320,7 @@ static void do_sync_mmap_readahead(struc

 	if (VM_SequentialReadHint(vma) ||
 	    offset - 1 == (ra->prev_pos >> PAGE_CACHE_SHIFT)) {
-		page_cache_sync_readahead(mapping, ra, file, offset, 1);
+		page_cache_sync_readahead(mapping, ra, file, offset, ra->ra_pages);
 		return;
 	}
* [PATCH 5/9] readahead: make ra_submit() non-static
From: Fengguang Wu @ 2007-12-22  1:31 UTC
To: Andrew Morton
Cc: Linus Torvalds, Nick Piggin, linux-kernel

[-- Attachment #1: readahead-export-ra_submit.patch --]

Make ra_submit() non-static and callable from other files.

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 include/linux/mm.h |    3 +++
 mm/readahead.c     |    2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

--- linux-2.6.24-rc5-mm1.orig/include/linux/mm.h
+++ linux-2.6.24-rc5-mm1/include/linux/mm.h
@@ -1103,6 +1103,9 @@ void page_cache_async_readahead(struct a
 			   unsigned long size);

 unsigned long max_sane_readahead(unsigned long nr);
+unsigned long ra_submit(struct file_ra_state *ra,
+			struct address_space *mapping,
+			struct file *filp);

 /* Do stack extension */
 extern int expand_stack(struct vm_area_struct *vma, unsigned long address);

--- linux-2.6.24-rc5-mm1.orig/mm/readahead.c
+++ linux-2.6.24-rc5-mm1/mm/readahead.c
@@ -242,7 +242,7 @@ subsys_initcall(readahead_init);
 /*
  * Submit IO for the read-ahead request in file_ra_state.
  */
-static unsigned long ra_submit(struct file_ra_state *ra,
+unsigned long ra_submit(struct file_ra_state *ra,
 		       struct address_space *mapping, struct file *filp)
 {
 	int actual;
* [PATCH 6/9] readahead: save mmap read-around states in file_ra_state
From: Fengguang Wu @ 2007-12-22  1:31 UTC
To: Andrew Morton
Cc: Linus Torvalds, Nick Piggin, linux-kernel

[-- Attachment #1: readahead-convert-mmap-readaround.patch --]

Change mmap read-around to share the same code style and data structure
with the readahead code.

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 mm/filemap.c |   12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

--- linux-2.6.24-rc5-mm1.orig/mm/filemap.c
+++ linux-2.6.24-rc5-mm1/mm/filemap.c
@@ -1334,13 +1334,15 @@ static void do_sync_mmap_readahead(struc
 	if (ra->mmap_miss > MMAP_LOTSAMISS)
 		return;

+	/*
+	 * mmap read-around
+	 */
 	ra_pages = max_sane_readahead(ra->ra_pages);
 	if (ra_pages) {
-		pgoff_t start = 0;
-
-		if (offset > ra_pages / 2)
-			start = offset - ra_pages / 2;
-		do_page_cache_readahead(mapping, file, start, ra_pages);
+		ra->start = max_t(long, 0, offset - ra_pages / 2);
+		ra->size = ra_pages;
+		ra->async_size = 0;
+		ra_submit(ra, mapping, file);
 	}
 }
* [PATCH 7/9] readahead: remove unused do_page_cache_readahead()
From: Fengguang Wu @ 2007-12-22  1:31 UTC
To: Andrew Morton
Cc: Linus Torvalds, Nick Piggin, linux-kernel

[-- Attachment #1: readahead-remove-do_page_cache_readahead.patch --]

Remove do_page_cache_readahead(). Its last user, mmap read-around, has
been changed to call ra_submit().

Also, the no-readahead-if-congested logic is not appropriate here: raw
1-page reads can only make things painfully slow, and users are pretty
sensitive about the slow loading of executables.

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 include/linux/mm.h |    2 --
 mm/readahead.c     |   16 ----------------
 2 files changed, 18 deletions(-)

--- linux-2.6.24-rc5-mm1.orig/include/linux/mm.h
+++ linux-2.6.24-rc5-mm1/include/linux/mm.h
@@ -1084,8 +1084,6 @@ int write_one_page(struct page *page, in
 #define VM_MAX_READAHEAD	128	/* kbytes */
 #define VM_MIN_READAHEAD	16	/* kbytes (includes current page) */

-int do_page_cache_readahead(struct address_space *mapping, struct file *filp,
-			pgoff_t offset, unsigned long nr_to_read);
 int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
 			pgoff_t offset, unsigned long nr_to_read);

--- linux-2.6.24-rc5-mm1.orig/mm/readahead.c
+++ linux-2.6.24-rc5-mm1/mm/readahead.c
@@ -208,22 +208,6 @@ int force_page_cache_readahead(struct ad
 }

 /*
- * This version skips the IO if the queue is read-congested, and will tell the
- * block layer to abandon the readahead if request allocation would block.
- *
- * force_page_cache_readahead() will ignore queue congestion and will block on
- * request queues.
- */
-int do_page_cache_readahead(struct address_space *mapping, struct file *filp,
-			pgoff_t offset, unsigned long nr_to_read)
-{
-	if (bdi_read_congested(mapping->backing_dev_info))
-		return -1;
-
-	return __do_page_cache_readahead(mapping, filp, offset, nr_to_read, 0);
-}
-
-/*
 * Given a desired number of PAGE_CACHE_SIZE readahead pages, return a
 * sensible upper limit.
 */
* [PATCH 8/9] readahead: move max_sane_readahead() calls into force_page_cache_readahead()
From: Fengguang Wu @ 2007-12-22  1:31 UTC
To: Andrew Morton
Cc: Linus Torvalds, Nick Piggin, linux-kernel

[-- Attachment #1: readahead-check-max_sane_readahead.patch --]

Simplify code by moving the max_sane_readahead() calls into
force_page_cache_readahead().

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 mm/fadvise.c   |    2 +-
 mm/filemap.c   |    3 +--
 mm/madvise.c   |    3 +--
 mm/readahead.c |    1 +
 4 files changed, 4 insertions(+), 5 deletions(-)

--- linux-2.6.24-rc5-mm1.orig/mm/fadvise.c
+++ linux-2.6.24-rc5-mm1/mm/fadvise.c
@@ -89,7 +89,7 @@ asmlinkage long sys_fadvise64_64(int fd,

 		ret = force_page_cache_readahead(mapping, file,
 				start_index,
-				max_sane_readahead(nrpages));
+				nrpages);
 		if (ret > 0)
 			ret = 0;
 		break;

--- linux-2.6.24-rc5-mm1.orig/mm/filemap.c
+++ linux-2.6.24-rc5-mm1/mm/filemap.c
@@ -1242,8 +1242,7 @@ do_readahead(struct address_space *mappi
 	if (!mapping || !mapping->a_ops || !mapping->a_ops->readpage)
 		return -EINVAL;

-	force_page_cache_readahead(mapping, filp, index,
-				   max_sane_readahead(nr));
+	force_page_cache_readahead(mapping, filp, index, nr);
 	return 0;
 }

--- linux-2.6.24-rc5-mm1.orig/mm/madvise.c
+++ linux-2.6.24-rc5-mm1/mm/madvise.c
@@ -123,8 +123,7 @@ static long madvise_willneed(struct vm_a
 		end = vma->vm_end;
 	end = ((end - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;

-	force_page_cache_readahead(file->f_mapping,
-			file, start, max_sane_readahead(end - start));
+	force_page_cache_readahead(file->f_mapping, file, start, end - start);
 	return 0;
 }

--- linux-2.6.24-rc5-mm1.orig/mm/readahead.c
+++ linux-2.6.24-rc5-mm1/mm/readahead.c
@@ -187,6 +187,7 @@ int force_page_cache_readahead(struct ad
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages))
 		return -EINVAL;

+	nr_to_read = max_sane_readahead(nr_to_read);
 	while (nr_to_read) {
 		int err;
* [PATCH 9/9] readahead: call max_sane_readahead() in ondemand_readahead()
From: Fengguang Wu @ 2007-12-22  1:31 UTC
To: Andrew Morton
Cc: Linus Torvalds, Nick Piggin, linux-kernel

[-- Attachment #1: readahead-check-max_sane_readahead2.patch --]

Apply the max_sane_readahead() limit in ondemand_readahead(), just in case
someone aggressively sets a huge readahead size.

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 mm/readahead.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- linux-2.6.24-rc5-mm1.orig/mm/readahead.c
+++ linux-2.6.24-rc5-mm1/mm/readahead.c
@@ -324,9 +324,9 @@ ondemand_readahead(struct address_space
 		   bool hit_readahead_marker, pgoff_t offset,
 		   unsigned long req_size)
 {
-	int max = ra->ra_pages;	/* max readahead pages */
 	pgoff_t prev_offset;
-	int sequential;
+	int sequential;
+	int max = max_sane_readahead(ra->ra_pages); /* max readahead pages */

 	/*
 	 * It's the expected callback offset, assume sequential access.
* [PATCH 8/9] readahead: move max_sane_readahead() calls into force_page_cache_readahead()
From: Fengguang Wu @ 2007-12-16 11:59 UTC
To: Andrew Morton
Cc: Linus Torvalds, linux-kernel

[-- Attachment #1: readahead-check-max_sane_readahead.patch --]

Simplify code by moving the max_sane_readahead() calls into
force_page_cache_readahead().

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 mm/fadvise.c   |    2 +-
 mm/filemap.c   |    3 +--
 mm/madvise.c   |    3 +--
 mm/readahead.c |    1 +
 4 files changed, 4 insertions(+), 5 deletions(-)

--- linux-2.6.24-rc4-mm1.orig/mm/fadvise.c
+++ linux-2.6.24-rc4-mm1/mm/fadvise.c
@@ -89,7 +89,7 @@ asmlinkage long sys_fadvise64_64(int fd,

 		ret = force_page_cache_readahead(mapping, file,
 				start_index,
-				max_sane_readahead(nrpages));
+				nrpages);
 		if (ret > 0)
 			ret = 0;
 		break;

--- linux-2.6.24-rc4-mm1.orig/mm/filemap.c
+++ linux-2.6.24-rc4-mm1/mm/filemap.c
@@ -1242,8 +1242,7 @@ do_readahead(struct address_space *mappi
 	if (!mapping || !mapping->a_ops || !mapping->a_ops->readpage)
 		return -EINVAL;

-	force_page_cache_readahead(mapping, filp, index,
-				   max_sane_readahead(nr));
+	force_page_cache_readahead(mapping, filp, index, nr);
 	return 0;
 }

--- linux-2.6.24-rc4-mm1.orig/mm/madvise.c
+++ linux-2.6.24-rc4-mm1/mm/madvise.c
@@ -123,8 +123,7 @@ static long madvise_willneed(struct vm_a
 		end = vma->vm_end;
 	end = ((end - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;

-	force_page_cache_readahead(file->f_mapping,
-			file, start, max_sane_readahead(end - start));
+	force_page_cache_readahead(file->f_mapping, file, start, end - start);
 	return 0;
 }

--- linux-2.6.24-rc4-mm1.orig/mm/readahead.c
+++ linux-2.6.24-rc4-mm1/mm/readahead.c
@@ -187,6 +187,7 @@ int force_page_cache_readahead(struct ad
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages))
 		return -EINVAL;

+	nr_to_read = max_sane_readahead(nr_to_read);
 	while (nr_to_read) {
 		int err;