* [PATCH 00/10] readahead cleanups and interleaved readahead take 4 [not found] <20070724020009.677809022@mail.ustc.edu.cn> @ 2007-07-24 2:00 ` Fengguang Wu [not found] ` <20070724020041.774421091@mail.ustc.edu.cn> ` (9 subsequent siblings) 10 siblings, 0 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 2:00 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel

Andrew,

Here are some more readahead related cleanups and updates.

smaller file_ra_state:
[PATCH 01/10] readahead: compacting file_ra_state
[PATCH 02/10] readahead: mmap read-around simplification
[PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos

Interleaved readahead:
[PATCH 04/10] radixtree: introduce radix_tree_scan_hole()
[PATCH 05/10] readahead: basic support of interleaved reads

Readahead cleanups:
[PATCH 06/10] readahead: remove several readahead macros
[PATCH 07/10] readahead: remove the limit max_sectors_kb imposed on max_readahead_kb
[PATCH 08/10] readahead: remove the local copy of ra in do_generic_mapping_read()

Filemap cleanups:
[PATCH 09/10] filemap: trivial code cleanups
[PATCH 10/10] filemap: convert some unsigned long to pgoff_t

The diffstat is

 block/ll_rw_blk.c          |    9 ----
 fs/ext3/dir.c              |    2 -
 fs/ext4/dir.c              |    2 -
 fs/splice.c                |    2 -
 include/linux/fs.h         |   14 +++----
 include/linux/mm.h         |    2 -
 include/linux/pagemap.h    |   23 ++++++------
 include/linux/radix-tree.h |    2 +
 lib/radix-tree.c           |   36 +++++++++++++++++++
 mm/filemap.c               |   65 ++++++++++++++++-------------------
 mm/readahead.c             |   58 +++++++++++++++++--------------
 11 files changed, 122 insertions(+), 93 deletions(-)

Regards,
Fengguang Wu
-- ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 01/10] readahead: compacting file_ra_state [not found] ` <20070724020041.774421091@mail.ustc.edu.cn> @ 2007-07-24 2:00 ` Fengguang Wu 0 siblings, 0 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 2:00 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, Andi Kleen [-- Attachment #1: short-rasize.patch --] [-- Type: text/plain, Size: 2010 bytes --] Use 'unsigned int' instead of 'unsigned long' for readahead sizes. This helps reduce memory consumption on 64-bit CPUs when a lot of files are opened. CC: Andi Kleen <andi@firstfloor.org> Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- include/linux/fs.h | 8 ++++---- mm/filemap.c | 2 +- mm/readahead.c | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) --- linux-2.6.22-rc6-mm1.orig/include/linux/fs.h +++ linux-2.6.22-rc6-mm1/include/linux/fs.h @@ -771,12 +771,12 @@ struct fown_struct { * Track a single file's readahead state */ struct file_ra_state { - pgoff_t start; /* where readahead started */ - unsigned long size; /* # of readahead pages */ - unsigned long async_size; /* do asynchronous readahead when + pgoff_t start; /* where readahead started */ + unsigned int size; /* # of readahead pages */ + unsigned int async_size; /* do asynchronous readahead when there are only # of pages ahead */ - unsigned long ra_pages; /* Maximum readahead window */ + unsigned int ra_pages; /* Maximum readahead window */ unsigned long mmap_hit; /* Cache hit stat for mmap accesses */ unsigned long mmap_miss; /* Cache miss stat for mmap accesses */ unsigned long prev_index; /* Cache last read() position */ --- linux-2.6.22-rc6-mm1.orig/mm/filemap.c +++ linux-2.6.22-rc6-mm1/mm/filemap.c @@ -840,7 +840,7 @@ static void shrink_readahead_size_eio(st if (count > 5) return; count++; - printk(KERN_WARNING "Reducing readahead size to %luK\n", + printk(KERN_WARNING "Reducing readahead size to %dK\n", ra->ra_pages << (PAGE_CACHE_SHIFT - 10)); } --- linux-2.6.22-rc6-mm1.orig/mm/readahead.c +++
linux-2.6.22-rc6-mm1/mm/readahead.c @@ -342,7 +342,7 @@ ondemand_readahead(struct address_space bool hit_readahead_marker, pgoff_t offset, unsigned long req_size) { - unsigned long max; /* max readahead pages */ + int max; /* max readahead pages */ int sequential; max = ra->ra_pages; -- ^ permalink raw reply [flat|nested] 18+ messages in thread
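Where the saving comes from can be illustrated with a userspace sketch. The field names follow the kernel struct, but the mock-up types and sizes below are only illustrative of an LP64 layout, not an exact model of `struct file_ra_state`:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long pgoff_t;	/* as in the kernel */

/* Readahead-window fields before the patch: three unsigned longs. */
struct ra_before {
	pgoff_t start;			/* where readahead started */
	unsigned long size;		/* # of readahead pages */
	unsigned long async_size;	/* async readahead trigger distance */
	unsigned long ra_pages;		/* maximum readahead window */
};

/* After the patch: the three counters shrink to unsigned int, which is
 * plenty for page counts (a window of 2^32 pages would be 16TB). */
struct ra_after {
	pgoff_t start;
	unsigned int size;
	unsigned int async_size;
	unsigned int ra_pages;
};

/* On LP64 the three 8-byte longs become 4-byte ints; with alignment
 * padding these four fields drop from 32 bytes to 24. */
static size_t ra_saving(void)
{
	return sizeof(struct ra_before) - sizeof(struct ra_after);
}
```

On a 32-bit build `unsigned long` and `unsigned int` are the same size, so the saving only materializes on 64-bit, which is exactly the case the changelog targets.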
* [PATCH 02/10] readahead: mmap read-around simplification [not found] ` <20070724020042.028909529@mail.ustc.edu.cn> @ 2007-07-24 2:00 ` Fengguang Wu 0 siblings, 0 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 2:00 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel [-- Attachment #1: remove-mmap-hit.patch --] [-- Type: text/plain, Size: 1360 bytes --] Fold file_ra_state.mmap_hit into file_ra_state.mmap_miss and make it an int. Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- include/linux/fs.h | 3 +-- mm/filemap.c | 4 ++-- 2 files changed, 3 insertions(+), 4 deletions(-) --- linux-2.6.22-rc6-mm1.orig/include/linux/fs.h +++ linux-2.6.22-rc6-mm1/include/linux/fs.h @@ -777,8 +777,7 @@ struct file_ra_state { there are only # of pages ahead */ unsigned int ra_pages; /* Maximum readahead window */ - unsigned long mmap_hit; /* Cache hit stat for mmap accesses */ - unsigned long mmap_miss; /* Cache miss stat for mmap accesses */ + int mmap_miss; /* Cache miss stat for mmap accesses */ unsigned long prev_index; /* Cache last read() position */ unsigned int prev_offset; /* Offset where last read() ended in a page */ }; --- linux-2.6.22-rc6-mm1.orig/mm/filemap.c +++ linux-2.6.22-rc6-mm1/mm/filemap.c @@ -1389,7 +1389,7 @@ retry_find: * Do we miss much more than hit in this file? If so, * stop bothering with read-ahead. It will only hurt. */ - if (ra->mmap_miss > ra->mmap_hit + MMAP_LOTSAMISS) + if (ra->mmap_miss > MMAP_LOTSAMISS) goto no_cached_page; /* @@ -1415,7 +1415,7 @@ retry_find: } if (!did_readaround) - ra->mmap_hit++; + ra->mmap_miss--; /* * We have a locked page in the page cache, now we need to check -- ^ permalink raw reply [flat|nested] 18+ messages in thread
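The folded counter can be sketched in userspace as follows. `MMAP_LOTSAMISS` matches the threshold mm/filemap.c used at the time; the helper names are hypothetical, introduced only for this illustration:

```c
#include <assert.h>

#define MMAP_LOTSAMISS 100	/* threshold used by mm/filemap.c */

/* One signed counter replaces the hit/miss pair: a page-cache miss on a
 * fault increments it, a hit decrements it, and read-around is abandoned
 * once the *net* miss count exceeds the threshold.  This preserves the
 * old "miss much more than hit" test with half the state. */
struct ra_stat { int mmap_miss; };

static void fault_miss(struct ra_stat *ra) { ra->mmap_miss++; }
static void fault_hit(struct ra_stat *ra)  { ra->mmap_miss--; }

static int give_up_readaround(const struct ra_stat *ra)
{
	return ra->mmap_miss > MMAP_LOTSAMISS;
}
```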
* [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos [not found] ` <20070724020042.135275161@mail.ustc.edu.cn> @ 2007-07-24 2:00 ` Fengguang Wu 2007-07-24 3:52 ` Andrew Morton 2007-07-24 3:55 ` Andrew Morton 0 siblings, 2 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 2:00 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, Peter Zijlstra [-- Attachment #1: merge-start-prev_index.patch --] [-- Type: text/plain, Size: 4936 bytes --] Combine the file_ra_state members unsigned long prev_index unsigned int prev_offset into loff_t prev_pos It is more consistent and better supports huge files. Thanks to Peter for the nice proposal! Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- fs/ext3/dir.c | 2 +- fs/ext4/dir.c | 2 +- fs/splice.c | 2 +- include/linux/fs.h | 3 +-- mm/filemap.c | 11 ++++++----- mm/readahead.c | 15 ++++++++------- 6 files changed, 18 insertions(+), 17 deletions(-) --- linux-2.6.22-rc6-mm1.orig/include/linux/fs.h +++ linux-2.6.22-rc6-mm1/include/linux/fs.h @@ -778,8 +778,7 @@ struct file_ra_state { unsigned int ra_pages; /* Maximum readahead window */ int mmap_miss; /* Cache miss stat for mmap accesses */ - unsigned long prev_index; /* Cache last read() position */ - unsigned int prev_offset; /* Offset where last read() ended in a page */ + loff_t prev_pos; /* Cache last read() position */ }; /* --- linux-2.6.22-rc6-mm1.orig/mm/filemap.c +++ linux-2.6.22-rc6-mm1/mm/filemap.c @@ -881,8 +881,8 @@ void do_generic_mapping_read(struct addr index = *ppos >> PAGE_CACHE_SHIFT; next_index = index; - prev_index = ra.prev_index; - prev_offset = ra.prev_offset; + prev_index = ra.prev_pos >> PAGE_CACHE_SHIFT; + prev_offset = ra.prev_pos & (PAGE_CACHE_SIZE-1); last_index = (*ppos + desc->count + PAGE_CACHE_SIZE-1) >> PAGE_CACHE_SHIFT; offset = *ppos & ~PAGE_CACHE_MASK; @@ -968,7 +968,6 @@ page_ok: index += offset >> PAGE_CACHE_SHIFT; offset &= ~PAGE_CACHE_MASK; 
prev_offset = offset; - ra.prev_offset = offset; page_cache_release(page); if (ret == nr && desc->count) @@ -1055,7 +1054,9 @@ no_cached_page: out: *_ra = ra; - _ra->prev_index = prev_index; + _ra->prev_pos = prev_index; + _ra->prev_pos <<= PAGE_CACHE_SHIFT; + _ra->prev_pos |= prev_offset; *ppos = ((loff_t) index << PAGE_CACHE_SHIFT) + offset; if (filp) @@ -1435,7 +1436,7 @@ retry_find: * Found the page and have a reference on it. */ mark_page_accessed(page); - ra->prev_index = page->index; + ra->prev_pos = page->index << PAGE_CACHE_SHIFT; return page; outside_data_content: --- linux-2.6.22-rc6-mm1.orig/mm/readahead.c +++ linux-2.6.22-rc6-mm1/mm/readahead.c @@ -45,7 +45,7 @@ void file_ra_state_init(struct file_ra_state *ra, struct address_space *mapping) { ra->ra_pages = mapping->backing_dev_info->ra_pages; - ra->prev_index = -1; + ra->prev_pos = -1; } EXPORT_SYMBOL_GPL(file_ra_state_init); @@ -318,7 +318,7 @@ static unsigned long get_next_ra_size(st * indicator. The flag won't be set on already cached pages, to avoid the * readahead-for-nothing fuss, saving pointless page cache lookups. * - * prev_index tracks the last visited page in the _previous_ read request. + * prev_pos tracks the last visited byte in the _previous_ read request. * It should be maintained by the caller, and will be used for detecting * small random reads. Note that the readahead algorithm checks loosely * for sequential patterns. Hence interleaved reads might be served as @@ -342,11 +342,9 @@ ondemand_readahead(struct address_space bool hit_readahead_marker, pgoff_t offset, unsigned long req_size) { - int max; /* max readahead pages */ - int sequential; - - max = ra->ra_pages; - sequential = (offset - ra->prev_index <= 1UL) || (req_size > max); + int max = ra->ra_pages; /* max readahead pages */ + pgoff_t prev_offset; + int sequential; /* * It's the expected callback offset, assume sequential access. 
@@ -360,6 +358,9 @@ ondemand_readahead(struct address_space goto readit; } + prev_offset = ra->prev_pos >> PAGE_CACHE_SHIFT; + sequential = offset - prev_offset <= 1UL || req_size > max; + /* * Standalone, small read. * Read as is, and do not pollute the readahead state. --- linux-2.6.22-rc6-mm1.orig/fs/ext3/dir.c +++ linux-2.6.22-rc6-mm1/fs/ext3/dir.c @@ -143,7 +143,7 @@ static int ext3_readdir(struct file * fi sb->s_bdev->bd_inode->i_mapping, &filp->f_ra, filp, index, 1); - filp->f_ra.prev_index = index; + filp->f_ra.prev_pos = index << PAGE_CACHE_SHIFT; bh = ext3_bread(NULL, inode, blk, 0, &err); } --- linux-2.6.22-rc6-mm1.orig/fs/ext4/dir.c +++ linux-2.6.22-rc6-mm1/fs/ext4/dir.c @@ -142,7 +142,7 @@ static int ext4_readdir(struct file * fi sb->s_bdev->bd_inode->i_mapping, &filp->f_ra, filp, index, 1); - filp->f_ra.prev_index = index; + filp->f_ra.prev_pos = index << PAGE_CACHE_SHIFT; bh = ext4_bread(NULL, inode, blk, 0, &err); } --- linux-2.6.22-rc6-mm1.orig/fs/splice.c +++ linux-2.6.22-rc6-mm1/fs/splice.c @@ -455,7 +455,7 @@ fill_it: */ while (page_nr < nr_pages) page_cache_release(pages[page_nr++]); - in->f_ra.prev_index = index; + in->f_ra.prev_pos = index << PAGE_CACHE_SHIFT; if (spd.nr_pages) return splice_to_pipe(pipe, &spd); -- ^ permalink raw reply [flat|nested] 18+ messages in thread
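The encoding this patch introduces is a straightforward pack/unpack. A minimal userspace model, assuming the common 4K page size (PAGE_CACHE_SHIFT == 12); the names are stand-ins for the kernel's:

```c
#include <assert.h>

#define PAGE_CACHE_SHIFT 12			/* assuming 4K pages */
#define PAGE_CACHE_SIZE  (1UL << PAGE_CACHE_SHIFT)

typedef long long file_pos_t;	/* stand-in for the kernel's loff_t */

/* Combine a page index and an in-page offset into one 64-bit position.
 * The index is widened *before* the shift, which is the point Andrew
 * raises in the follow-up review. */
static file_pos_t pack_prev_pos(unsigned long index, unsigned int offset)
{
	file_pos_t pos = index;		/* widen first */
	pos <<= PAGE_CACHE_SHIFT;
	pos |= offset;
	return pos;
}

static unsigned long prev_index(file_pos_t pos)
{
	return pos >> PAGE_CACHE_SHIFT;
}

static unsigned int prev_offset(file_pos_t pos)
{
	return pos & (PAGE_CACHE_SIZE - 1);
}
```

The round trip is lossless, which is why the single `loff_t` can replace the `prev_index`/`prev_offset` pair without losing either datum.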
* Re: [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos 2007-07-24 2:00 ` [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos Fengguang Wu @ 2007-07-24 3:52 ` Andrew Morton [not found] ` <20070724034801.GA7310@mail.ustc.edu.cn> 2007-07-24 3:55 ` Andrew Morton 1 sibling, 1 reply; 18+ messages in thread From: Andrew Morton @ 2007-07-24 3:52 UTC (permalink / raw) To: Fengguang Wu; +Cc: linux-kernel, Peter Zijlstra On Tue, 24 Jul 2007 10:00:12 +0800 Fengguang Wu <wfg@mail.ustc.edu.cn> wrote: > - ra->prev_index = page->index; > + ra->prev_pos = page->index << PAGE_CACHE_SHIFT; bug! The rhs will get truncated before it gets assigned to the lhs. Need to cast page->index to loff_t. I'll fix this one up. Please review the other patches for this? I decided to merge this ahead of that great pile of Nick's patches (pagecache write deadlocks) and got a number of easy-to-fix rejects as a result. Hopefully it all landed OK. ^ permalink raw reply [flat|nested] 18+ messages in thread
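The truncation Andrew points out is easy to reproduce in userspace by modelling a 32-bit kernel's `unsigned long` with `uint32_t` (on 64-bit the shift happens to fit; 4K pages assumed):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_CACHE_SHIFT 12	/* assuming 4K pages */

/* The shift is evaluated in the (32-bit) type of page->index *before*
 * the assignment widens the result to loff_t, so the high bits are
 * already gone for offsets at or beyond 4GB. */
static int64_t buggy(uint32_t index)
{
	return index << PAGE_CACHE_SHIFT;	/* wraps in 32-bit arithmetic */
}

/* Casting first makes the shift happen in 64-bit arithmetic. */
static int64_t fixed(uint32_t index)
{
	return (int64_t)index << PAGE_CACHE_SHIFT;
}
```

With `index = 0x100000` (the page at the 4GB boundary) the unfixed expression yields 0 while the cast version yields `1 << 32`, which is exactly the class of large-file breakage the cast avoids.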
* Re: [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos [not found] ` <20070724034801.GA7310@mail.ustc.edu.cn> @ 2007-07-24 3:48 ` Fengguang Wu 0 siblings, 0 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 3:48 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, Peter Zijlstra On Mon, Jul 23, 2007 at 08:52:41PM -0700, Andrew Morton wrote: > On Tue, 24 Jul 2007 10:00:12 +0800 Fengguang Wu <wfg@mail.ustc.edu.cn> wrote: > > > - ra->prev_index = page->index; > > + ra->prev_pos = page->index << PAGE_CACHE_SHIFT; > > bug! The rhs will get truncated befire it gets assigned to > the lhs. Need to cast page->index to loff_t. > > I'll fix this one up. Please review the other patches for this? Thank you. Be sure to update *all* the lines: ra->prev_pos = page->index << PAGE_CACHE_SHIFT Other places should have been taken care of. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos 2007-07-24 2:00 ` [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos Fengguang Wu 2007-07-24 3:52 ` Andrew Morton @ 2007-07-24 3:55 ` Andrew Morton [not found] ` <20070724043215.GA6317@mail.ustc.edu.cn> 1 sibling, 1 reply; 18+ messages in thread From: Andrew Morton @ 2007-07-24 3:55 UTC (permalink / raw) To: Fengguang Wu; +Cc: linux-kernel, Peter Zijlstra On Tue, 24 Jul 2007 10:00:12 +0800 Fengguang Wu <wfg@mail.ustc.edu.cn> wrote: > @@ -342,11 +342,9 @@ ondemand_readahead(struct address_space > bool hit_readahead_marker, pgoff_t offset, > unsigned long req_size) > { > - int max; /* max readahead pages */ > - int sequential; > - > - max = ra->ra_pages; > - sequential = (offset - ra->prev_index <= 1UL) || (req_size > max); > + int max = ra->ra_pages; /* max readahead pages */ > + pgoff_t prev_offset; > + int sequential; > > /* > * It's the expected callback offset, assume sequential access. > @@ -360,6 +358,9 @@ ondemand_readahead(struct address_space > goto readit; > } > > + prev_offset = ra->prev_pos >> PAGE_CACHE_SHIFT; > + sequential = offset - prev_offset <= 1UL || req_size > max; It's a bit pointless using an opaque type for prev_offset here, and then encoding the knowledge that it is implemented as "unsigned long". It's a minor thing, but perhaps just "<= 1" would make more sense here. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos [not found] ` <20070724043215.GA6317@mail.ustc.edu.cn> @ 2007-07-24 4:32 ` Fengguang Wu 2007-07-24 4:53 ` Andrew Morton [not found] ` <20070724043708.GA6627@mail.ustc.edu.cn> 1 sibling, 1 reply; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 4:32 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, Peter Zijlstra On Mon, Jul 23, 2007 at 08:55:35PM -0700, Andrew Morton wrote: > On Tue, 24 Jul 2007 10:00:12 +0800 Fengguang Wu <wfg@mail.ustc.edu.cn> wrote: > > > @@ -342,11 +342,9 @@ ondemand_readahead(struct address_space > > bool hit_readahead_marker, pgoff_t offset, > > unsigned long req_size) > > { > > - int max; /* max readahead pages */ > > - int sequential; > > - > > - max = ra->ra_pages; > > - sequential = (offset - ra->prev_index <= 1UL) || (req_size > max); > > + int max = ra->ra_pages; /* max readahead pages */ > > + pgoff_t prev_offset; > > + int sequential; > > > > /* > > * It's the expected callback offset, assume sequential access. > > @@ -360,6 +358,9 @@ ondemand_readahead(struct address_space > > goto readit; > > } > > > > + prev_offset = ra->prev_pos >> PAGE_CACHE_SHIFT; > > + sequential = offset - prev_offset <= 1UL || req_size > max; > > It's a bit pointless using an opaque type for prev_offset here, and then > encoding the knowledge that it is implemented as "unsigned long". > > It's a minor thing, but perhaps just "<= 1" would make more sense here. Yeah, "<= 1" is OK. But the expression still requires pgoff_t to be 'unsigned' to work correctly. So what about "<= 1U"? ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos 2007-07-24 4:32 ` Fengguang Wu @ 2007-07-24 4:53 ` Andrew Morton [not found] ` <20070724062744.GA6686@mail.ustc.edu.cn> 0 siblings, 1 reply; 18+ messages in thread From: Andrew Morton @ 2007-07-24 4:53 UTC (permalink / raw) To: Fengguang Wu; +Cc: linux-kernel, Peter Zijlstra On Tue, 24 Jul 2007 12:32:15 +0800 Fengguang Wu <wfg@mail.ustc.edu.cn> wrote: > On Mon, Jul 23, 2007 at 08:55:35PM -0700, Andrew Morton wrote: > > On Tue, 24 Jul 2007 10:00:12 +0800 Fengguang Wu <wfg@mail.ustc.edu.cn> wrote: > > > > > @@ -342,11 +342,9 @@ ondemand_readahead(struct address_space > > > bool hit_readahead_marker, pgoff_t offset, > > > unsigned long req_size) > > > { > > > - int max; /* max readahead pages */ > > > - int sequential; > > > - > > > - max = ra->ra_pages; > > > - sequential = (offset - ra->prev_index <= 1UL) || (req_size > max); > > > + int max = ra->ra_pages; /* max readahead pages */ > > > + pgoff_t prev_offset; > > > + int sequential; > > > > > > /* > > > * It's the expected callback offset, assume sequential access. > > > @@ -360,6 +358,9 @@ ondemand_readahead(struct address_space > > > goto readit; > > > } > > > > > > + prev_offset = ra->prev_pos >> PAGE_CACHE_SHIFT; > > > + sequential = offset - prev_offset <= 1UL || req_size > max; > > > > It's a bit pointless using an opaque type for prev_offset here, and then > > encoding the knowledge that it is implemented as "unsigned long". > > > > It's a minor thing, but perhaps just "<= 1" would make more sense here. > > Yeah, "<= 1" is OK. But the expression still requires pgoff_t to be > 'unsigned' to work correctly. > > So what about "<= 1U"? umm, if one really cared one could do <expr> == 1 || <expr> == 0 or something. But whatever - let's leave it as-is. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos [not found] ` <20070724062744.GA6686@mail.ustc.edu.cn> @ 2007-07-24 6:27 ` Fengguang Wu 0 siblings, 0 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 6:27 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, Peter Zijlstra On Mon, Jul 23, 2007 at 09:53:10PM -0700, Andrew Morton wrote: > On Tue, 24 Jul 2007 12:32:15 +0800 Fengguang Wu <wfg@mail.ustc.edu.cn> wrote: > > > On Mon, Jul 23, 2007 at 08:55:35PM -0700, Andrew Morton wrote: > > > On Tue, 24 Jul 2007 10:00:12 +0800 Fengguang Wu <wfg@mail.ustc.edu.cn> wrote: > > > > > > > @@ -342,11 +342,9 @@ ondemand_readahead(struct address_space > > > > bool hit_readahead_marker, pgoff_t offset, > > > > unsigned long req_size) > > > > { > > > > - int max; /* max readahead pages */ > > > > - int sequential; > > > > - > > > > - max = ra->ra_pages; > > > > - sequential = (offset - ra->prev_index <= 1UL) || (req_size > max); > > > > + int max = ra->ra_pages; /* max readahead pages */ > > > > + pgoff_t prev_offset; > > > > + int sequential; > > > > > > > > /* > > > > * It's the expected callback offset, assume sequential access. > > > > @@ -360,6 +358,9 @@ ondemand_readahead(struct address_space > > > > goto readit; > > > > } > > > > > > > > + prev_offset = ra->prev_pos >> PAGE_CACHE_SHIFT; > > > > + sequential = offset - prev_offset <= 1UL || req_size > max; > > > > > > It's a bit pointless using an opaque type for prev_offset here, and then > > > encoding the knowledge that it is implemented as "unsigned long". > > > > > > It's a minor thing, but perhaps just "<= 1" would make more sense here. > > > > Yeah, "<= 1" is OK. But the expression still requires pgoff_t to be > > 'unsigned' to work correctly. > > > > So what about "<= 1U"? > > umm, if one really cared one could do > > <expr> == 1 || <expr> == 0 Yeah, I'd prefer this if we are to change it. > or something. But whatever - let's leave it as-is. 
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos [not found] ` <20070724043708.GA6627@mail.ustc.edu.cn> @ 2007-07-24 4:37 ` Fengguang Wu 0 siblings, 0 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 4:37 UTC (permalink / raw) To: Andrew Morton, linux-kernel, Peter Zijlstra On Tue, Jul 24, 2007 at 12:32:15PM +0800, Fengguang Wu wrote: > On Mon, Jul 23, 2007 at 08:55:35PM -0700, Andrew Morton wrote: > > On Tue, 24 Jul 2007 10:00:12 +0800 Fengguang Wu <wfg@mail.ustc.edu.cn> wrote: > > > > > @@ -342,11 +342,9 @@ ondemand_readahead(struct address_space > > > bool hit_readahead_marker, pgoff_t offset, > > > unsigned long req_size) > > > { > > > - int max; /* max readahead pages */ > > > - int sequential; > > > - > > > - max = ra->ra_pages; > > > - sequential = (offset - ra->prev_index <= 1UL) || (req_size > max); > > > + int max = ra->ra_pages; /* max readahead pages */ > > > + pgoff_t prev_offset; > > > + int sequential; > > > > > > /* > > > * It's the expected callback offset, assume sequential access. > > > @@ -360,6 +358,9 @@ ondemand_readahead(struct address_space > > > goto readit; > > > } > > > > > > + prev_offset = ra->prev_pos >> PAGE_CACHE_SHIFT; > > > + sequential = offset - prev_offset <= 1UL || req_size > max; > > > > It's a bit pointless using an opaque type for prev_offset here, and then > > encoding the knowledge that it is implemented as "unsigned long". > > > > It's a minor thing, but perhaps just "<= 1" would make more sense here. > > Yeah, "<= 1" is OK. But the expression still requires pgoff_t to be > 'unsigned' to work correctly. > > So what about "<= 1U"? I wrote a test case and find that if pgoff_t is 'signed long', "<= 1U" still yields the wrong result. Only "<= 1UL" does the trick :( ^ permalink raw reply [flat|nested] 18+ messages in thread
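The point being argued can be demonstrated in userspace. `pgoff_t` really is unsigned in the kernel; the signed variant below is hypothetical, and the "wrongly sequential" outcome assumes an LP64 machine (64-bit `long`, 32-bit `unsigned int`):

```c
#include <assert.h>

/* With an unsigned pgoff_t, a backward seek wraps offset - prev around
 * to a huge value, so it is correctly *not* within <= 1UL. */
static int seq_unsigned(unsigned long offset, unsigned long prev)
{
	return offset - prev <= 1UL;
}

/* With a (hypothetical) signed pgoff_t compared against 1U on LP64,
 * the unsigned int 1U widens to long, -1 stays -1, and the test
 * wrongly reports a backward seek as sequential. */
static int seq_signed(long offset, long prev)
{
	return offset - prev <= 1U;
}
```

This is why the thread concludes that the expression depends on `pgoff_t` being unsigned, whatever the literal's suffix.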
* [PATCH 04/10] radixtree: introduce radix_tree_scan_hole() [not found] ` <20070724020042.319225909@mail.ustc.edu.cn> @ 2007-07-24 2:00 ` Fengguang Wu 0 siblings, 0 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 2:00 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, Nick Piggin [-- Attachment #1: radixtree-introduce-scan-hole-data-functions.patch --] [-- Type: text/plain, Size: 2552 bytes --] Introduce radix_tree_next_hole(root, index, max_scan) to scan the radix tree for the first hole. It will be used in interleaved readahead. The implementation is dumb and obviously correct. It can help debug (and document) a possible smarter implementation in the future. Cc: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- include/linux/radix-tree.h | 2 + lib/radix-tree.c | 36 +++++++++++++++++++++++++++++++++++ 2 files changed, 38 insertions(+) --- linux-2.6.22-rc6-mm1.orig/include/linux/radix-tree.h +++ linux-2.6.22-rc6-mm1/include/linux/radix-tree.h @@ -155,6 +155,8 @@ void *radix_tree_delete(struct radix_tre unsigned int radix_tree_gang_lookup(struct radix_tree_root *root, void **results, unsigned long first_index, unsigned int max_items); +unsigned long radix_tree_next_hole(struct radix_tree_root *root, + unsigned long index, unsigned long max_scan); int radix_tree_preload(gfp_t gfp_mask); void radix_tree_init(void); void *radix_tree_tag_set(struct radix_tree_root *root, --- linux-2.6.22-rc6-mm1.orig/lib/radix-tree.c +++ linux-2.6.22-rc6-mm1/lib/radix-tree.c @@ -601,6 +601,42 @@ int radix_tree_tag_get(struct radix_tree EXPORT_SYMBOL(radix_tree_tag_get); #endif +/** + * radix_tree_next_hole - find the next hole (not-present entry) + * @root: tree root + * @index: index key + * @max_scan: maximum range to search + * + * Search the set [index, min(index+max_scan-1, MAX_INDEX)] for the lowest + * indexed hole.
+ * + * Returns: the index of the hole if found, otherwise returns an index + * outside of the set specified (in which case 'return - index >= max_scan' + * will be true). + * + * radix_tree_next_hole may be called under rcu_read_lock. However, like + * radix_tree_gang_lookup, this will not atomically search a snapshot of the + * tree at a single point in time. For example, if a hole is created at index + * 5, then subsequently a hole is created at index 10, radix_tree_next_hole + * covering both indexes may return 10 if called under rcu_read_lock. + */ +unsigned long radix_tree_next_hole(struct radix_tree_root *root, + unsigned long index, unsigned long max_scan) +{ + unsigned long i; + + for (i = 0; i < max_scan; i++) { + if (!radix_tree_lookup(root, index)) + break; + index++; + if (index == 0) + break; + } + + return index; +} +EXPORT_SYMBOL(radix_tree_next_hole); + static unsigned int __lookup(struct radix_tree_node *slot, void **results, unsigned long index, unsigned int max_items, unsigned long *next_index) -- ^ permalink raw reply [flat|nested] 18+ messages in thread
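The semantics of the dumb loop can be modelled in userspace with a flat array of present/absent flags standing in for the radix tree (the helper name and array representation are illustrative only):

```c
#include <assert.h>
#include <stddef.h>

/* Model of radix_tree_next_hole(): scan [index, index + max_scan) and
 * return the first absent slot.  If every scanned slot is present, the
 * returned index lies past the scanned range, so callers can detect the
 * "no hole" case with `return - index >= max_scan`. */
static unsigned long next_hole(const int *present, size_t len,
			       unsigned long index, unsigned long max_scan)
{
	unsigned long i;

	for (i = 0; i < max_scan; i++) {
		if (index >= len || !present[index])
			break;		/* found a hole */
		index++;
		if (index == 0)
			break;		/* wrapped past the last index */
	}
	return index;
}
```

Like the kernel version, this trades one lookup per page for simplicity; the kernel-doc's RCU caveat (the scan is not atomic against concurrent inserts/removals) applies to the real tree, not this toy.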
* [PATCH 05/10] readahead: basic support of interleaved reads [not found] ` <20070724020042.426486651@mail.ustc.edu.cn> @ 2007-07-24 2:00 ` Fengguang Wu 0 siblings, 0 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 2:00 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, Rusty Russell [-- Attachment #1: readahead-interleaved-reads.patch --] [-- Type: text/plain, Size: 3282 bytes --] This is a simplified version of the pagecache context based readahead. It handles the case of multiple threads reading on the same fd and invalidating each other's readahead state. It does the trick by scanning the pagecache and recovering the current read stream's readahead status. The algorithm works in an opportunistic way, in that it does not try to detect interleaved reads _actively_, which would require a probe into the page cache (which means a little more overhead for random reads). It only tries to handle a previously started sequential readahead whose state was overwritten by another concurrent stream, and it can do this job pretty well. Negative and positive examples (or what you can expect from it): 1) it cannot detect and serve perfect request-by-request interleaved reads right:

time   stream 1   stream 2
 0     1
 1                1001
 2     2
 3                1002
 4     3
 5                1003
 6     4
 7                1004
 8     5
 9                1005

Here no single readahead will be carried out. 2) However, if it's two concurrent reads by two threads, the chance of the initial sequential readahead being started is high. Once the first sequential readahead is started for a stream, this patch will ensure that the readahead window continues to ramp up and won't be disturbed by other streams.

time   stream 1   stream 2
 0     1
 1     2
 2                1001
 3     3
 4                1002
 5                1003
 6     4
 7     5
 8                1004
 9     6
10                1005
11     7
12                1006
13                1007

Here stream 1 will start a readahead at page 2, and stream 2 will start its first readahead at page 1003. From then on the two streams will be served right.
Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- mm/readahead.c | 33 +++++++++++++++++++++++---------- 1 file changed, 23 insertions(+), 10 deletions(-) --- linux-2.6.22-rc6-mm1.orig/mm/readahead.c +++ linux-2.6.22-rc6-mm1/mm/readahead.c @@ -371,6 +371,29 @@ ondemand_readahead(struct address_space } /* + * Hit a marked page without valid readahead state. + * E.g. interleaved reads. + * Query the pagecache for async_size, which normally equals to + * readahead size. Ramp it up and use it as the new readahead size. + */ + if (hit_readahead_marker) { + pgoff_t start; + + read_lock_irq(&mapping->tree_lock); + start = radix_tree_next_hole(&mapping->page_tree, offset, max+1); + read_unlock_irq(&mapping->tree_lock); + + if (!start || start - offset > max) + return 0; + + ra->start = start; + ra->size = start - offset; /* old async_size */ + ra->size = get_next_ra_size(ra, max); + ra->async_size = ra->size; + goto readit; + } + + /* * It may be one of * - first read on start of file * - sequential cache miss @@ -381,16 +404,6 @@ ondemand_readahead(struct address_space ra->size = get_init_ra_size(req_size, max); ra->async_size = ra->size > req_size ? ra->size - req_size : ra->size; - /* - * Hit on a marked page without valid readahead state. - * E.g. interleaved reads. - * Not knowing its readahead pos/size, bet on the minimal possible one. - */ - if (hit_readahead_marker) { - ra->start++; - ra->size = get_next_ra_size(ra, max); - } - readit: return ra_submit(ra, mapping, filp); } -- ^ permalink raw reply [flat|nested] 18+ messages in thread
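The recovery arithmetic of the new hit_readahead_marker branch can be sketched in userspace. The ramp-up below mirrors the kernel's get_next_ra_size() heuristic of the era (quadruple small windows, double larger ones, clamp at max); treat the exact factors as an assumption, and the helper names as hypothetical:

```c
#include <assert.h>

struct ra_state { unsigned long start, size, async_size; };

/* Window ramp-up, after the kernel's get_next_ra_size() of the time. */
static unsigned long next_ra_size(unsigned long cur, unsigned long max)
{
	unsigned long newsize = (cur < max / 16) ? 4 * cur : 2 * cur;

	return newsize < max ? newsize : max;
}

/* `hole` is where radix_tree_next_hole() found the first missing page
 * at or after `offset`.  The distance offset..hole is the old stream's
 * remaining async window; ramp it up and adopt it as the new window. */
static int recover_stream(struct ra_state *ra, unsigned long offset,
			  unsigned long hole, unsigned long max)
{
	if (!hole || hole - offset > max)
		return 0;			/* nothing recoverable */

	ra->start = hole;
	ra->size = hole - offset;		/* old async_size */
	ra->size = next_ra_size(ra->size, max);
	ra->async_size = ra->size;
	return 1;
}
```

So a stream that hits a marker at page 1000 with pages 1000..1007 already cached resumes with an 8-page window doubled to 16, rather than restarting from the minimal initial window.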
* [PATCH 06/10] readahead: remove the local copy of ra in do_generic_mapping_read() [not found] ` <20070724020042.588573597@mail.ustc.edu.cn> @ 2007-07-24 2:00 ` Fengguang Wu 0 siblings, 0 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 2:00 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel, Nick Piggin [-- Attachment #1: remove-duplicate-ra.patch --] [-- Type: text/plain, Size: 2542 bytes --] The local copy of ra in do_generic_mapping_read() can now go away. It predates readahead(req_size), from a time when the readahead code was called on *every* single page, so a local copy had to be made to reduce the chance of the readahead state being overwritten by a concurrent reader. More details in: Linux: Random File I/O Regressions In 2.6 <http://kerneltrap.org/node/3039> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- mm/filemap.c | 20 +++++++++----------- 1 file changed, 9 insertions(+), 11 deletions(-) --- linux-2.6.22-rc6-mm1.orig/mm/filemap.c +++ linux-2.6.22-rc6-mm1/mm/filemap.c @@ -863,7 +863,7 @@ static void shrink_readahead_size_eio(st * It may be NULL.
*/ void do_generic_mapping_read(struct address_space *mapping, - struct file_ra_state *_ra, + struct file_ra_state *ra, struct file *filp, loff_t *ppos, read_descriptor_t *desc, @@ -877,12 +877,11 @@ void do_generic_mapping_read(struct addr unsigned long prev_index; unsigned int prev_offset; int error; - struct file_ra_state ra = *_ra; index = *ppos >> PAGE_CACHE_SHIFT; next_index = index; - prev_index = ra.prev_pos >> PAGE_CACHE_SHIFT; - prev_offset = ra.prev_pos & (PAGE_CACHE_SIZE-1); + prev_index = ra->prev_pos >> PAGE_CACHE_SHIFT; + prev_offset = ra->prev_pos & (PAGE_CACHE_SIZE-1); last_index = (*ppos + desc->count + PAGE_CACHE_SIZE-1) >> PAGE_CACHE_SHIFT; offset = *ppos & ~PAGE_CACHE_MASK; @@ -897,7 +896,7 @@ find_page: page = find_get_page(mapping, index); if (!page) { page_cache_sync_readahead(mapping, - &ra, filp, + ra, filp, index, last_index - index); page = find_get_page(mapping, index); if (unlikely(page == NULL)) @@ -905,7 +904,7 @@ find_page: } if (PageReadahead(page)) { page_cache_async_readahead(mapping, - &ra, filp, page, + ra, filp, page, index, last_index - index); } if (!PageUptodate(page)) @@ -1016,7 +1015,7 @@ readpage: } unlock_page(page); error = -EIO; - shrink_readahead_size_eio(filp, &ra); + shrink_readahead_size_eio(filp, ra); goto readpage_error; } unlock_page(page); @@ -1053,10 +1052,9 @@ no_cached_page: } out: - *_ra = ra; - _ra->prev_pos = prev_index; - _ra->prev_pos <<= PAGE_CACHE_SHIFT; - _ra->prev_pos |= prev_offset; + ra->prev_pos = prev_index; + ra->prev_pos <<= PAGE_CACHE_SHIFT; + ra->prev_pos |= prev_offset; *ppos = ((loff_t) index << PAGE_CACHE_SHIFT) + offset; if (filp) -- ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 07/10] readahead: remove several readahead macros [not found] ` <20070724020042.758542876@mail.ustc.edu.cn> @ 2007-07-24 2:00 ` Fengguang Wu 0 siblings, 0 replies; 18+ messages in thread From: Fengguang Wu @ 2007-07-24 2:00 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel [-- Attachment #1: readahead-macros-cleanup.patch --] [-- Type: text/plain, Size: 1523 bytes --] Remove VM_MAX_CACHE_HIT, MAX_RA_PAGES and MIN_RA_PAGES. Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- include/linux/mm.h | 2 -- mm/readahead.c | 10 +--------- 2 files changed, 1 insertion(+), 11 deletions(-) --- linux-2.6.22-rc6-mm1.orig/include/linux/mm.h +++ linux-2.6.22-rc6-mm1/include/linux/mm.h @@ -1148,8 +1148,6 @@ int write_one_page(struct page *page, in /* readahead.c */ #define VM_MAX_READAHEAD 128 /* kbytes */ #define VM_MIN_READAHEAD 16 /* kbytes (includes current page) */ -#define VM_MAX_CACHE_HIT 256 /* max pages in a row in cache before - * turning readahead off */ int do_page_cache_readahead(struct address_space *mapping, struct file *filp, pgoff_t offset, unsigned long nr_to_read); --- linux-2.6.22-rc6-mm1.orig/mm/readahead.c +++ linux-2.6.22-rc6-mm1/mm/readahead.c @@ -21,16 +21,8 @@ void default_unplug_io_fn(struct backing } EXPORT_SYMBOL(default_unplug_io_fn); -/* - * Convienent macros for min/max read-ahead pages. - * Note that MAX_RA_PAGES is rounded down, while MIN_RA_PAGES is rounded up. - * The latter is necessary for systems with large page size(i.e. 64k). - */ -#define MAX_RA_PAGES (VM_MAX_READAHEAD*1024 / PAGE_CACHE_SIZE) -#define MIN_RA_PAGES DIV_ROUND_UP(VM_MIN_READAHEAD*1024, PAGE_CACHE_SIZE) - struct backing_dev_info default_backing_dev_info = { - .ra_pages = MAX_RA_PAGES, + .ra_pages = VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE, .state = 0, .capabilities = BDI_CAP_MAP_COPY, .unplug_io_fn = default_unplug_io_fn, -- ^ permalink raw reply [flat|nested] 18+ messages in thread
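The open-coded initializer, and the rounding concern stated in the comment this patch removes, boil down to simple arithmetic. A userspace check (`DIV_ROUND_UP` copied from the kernel's idiom):

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

#define VM_MAX_READAHEAD 128	/* kbytes */
#define VM_MIN_READAHEAD 16	/* kbytes (includes current page) */

/* kbytes of readahead -> pages, rounded down as MAX_RA_PAGES was. */
static unsigned long ra_pages(unsigned long kb, unsigned long page_size)
{
	return kb * 1024 / page_size;
}
```

With 4K pages the default window is 32 pages. The removed comment's point about MIN_RA_PAGES still stands, though: on a 64K-page system, rounding 16KB down gives 0 pages, which is why the minimum used DIV_ROUND_UP; only the unused macros go away here, not that subtlety.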
[parent not found: <20070724020042.882116065@mail.ustc.edu.cn>]
* [PATCH 08/10] readahead: remove the limit max_sectors_kb imposed on max_readahead_kb
       [not found] ` <20070724020042.882116065@mail.ustc.edu.cn>
@ 2007-07-24  2:00   ` Fengguang Wu
  0 siblings, 0 replies; 18+ messages in thread
From: Fengguang Wu @ 2007-07-24  2:00 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel, Jens Axboe

[-- Attachment #1: remove-readahead-size-limit.patch --]
[-- Type: text/plain, Size: 1300 bytes --]

Remove the size limit max_sectors_kb imposed on max_readahead_kb.

The size restriction is unreasonable, especially when max_sectors_kb cannot
grow larger than max_hw_sectors_kb, which can be rather small for some disk
drives.

Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
Acked-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/ll_rw_blk.c |    9 ---------
 1 file changed, 9 deletions(-)

--- linux-2.6.22-rc6-mm1.orig/block/ll_rw_blk.c
+++ linux-2.6.22-rc6-mm1/block/ll_rw_blk.c
@@ -3945,7 +3945,6 @@ queue_max_sectors_store(struct request_q
 			max_hw_sectors_kb = q->max_hw_sectors >> 1,
 			page_kb = 1 << (PAGE_CACHE_SHIFT - 10);
 	ssize_t ret = queue_var_store(&max_sectors_kb, page, count);
-	int ra_kb;
 
 	if (max_sectors_kb > max_hw_sectors_kb || max_sectors_kb < page_kb)
 		return -EINVAL;
@@ -3954,14 +3953,6 @@ queue_max_sectors_store(struct request_q
 	 * values synchronously:
 	 */
 	spin_lock_irq(q->queue_lock);
-	/*
-	 * Trim readahead window as well, if necessary:
-	 */
-	ra_kb = q->backing_dev_info.ra_pages << (PAGE_CACHE_SHIFT - 10);
-	if (ra_kb > max_sectors_kb)
-		q->backing_dev_info.ra_pages =
-			max_sectors_kb >> (PAGE_CACHE_SHIFT - 10);
-
 	q->max_sectors = max_sectors_kb << 1;
 	spin_unlock_irq(q->queue_lock);

-- 

^ permalink raw reply	[flat|nested] 18+ messages in thread
[parent not found: <20070724020043.064974174@mail.ustc.edu.cn>]
* [PATCH 09/10] filemap: trivial code cleanups
       [not found] ` <20070724020043.064974174@mail.ustc.edu.cn>
@ 2007-07-24  2:00   ` Fengguang Wu
  0 siblings, 0 replies; 18+ messages in thread
From: Fengguang Wu @ 2007-07-24  2:00 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

[-- Attachment #1: cleanup-filemap.patch --]
[-- Type: text/plain, Size: 1453 bytes --]

- remove unused local next_index in do_generic_mapping_read()
- wrap a long line
- remove a redundant page_cache_read() declaration

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 mm/filemap.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

--- linux-2.6.22-rc6-mm1.orig/mm/filemap.c
+++ linux-2.6.22-rc6-mm1/mm/filemap.c
@@ -873,13 +873,11 @@ void do_generic_mapping_read(struct addr
 	unsigned long index;
 	unsigned long offset;
 	unsigned long last_index;
-	unsigned long next_index;
 	unsigned long prev_index;
 	unsigned int prev_offset;
 	int error;
 
 	index = *ppos >> PAGE_CACHE_SHIFT;
-	next_index = index;
 	prev_index = ra->prev_pos >> PAGE_CACHE_SHIFT;
 	prev_offset = ra->prev_pos & (PAGE_CACHE_SIZE-1);
 	last_index = (*ppos + desc->count + PAGE_CACHE_SIZE-1) >> PAGE_CACHE_SHIFT;
@@ -1214,7 +1212,8 @@ out:
 }
 EXPORT_SYMBOL(generic_file_aio_read);
 
-int file_send_actor(read_descriptor_t * desc, struct page *page, unsigned long offset, unsigned long size)
+int file_send_actor(read_descriptor_t * desc, struct page *page,
+			unsigned long offset, unsigned long size)
 {
 	ssize_t written;
 	unsigned long count = desc->count;
@@ -1287,7 +1286,6 @@ asmlinkage ssize_t sys_readahead(int fd,
 }
 
 #ifdef CONFIG_MMU
-static int FASTCALL(page_cache_read(struct file * file, unsigned long offset));
 /**
  * page_cache_read - adds requested page to the page cache if not already there
  * @file: file to read

-- 

^ permalink raw reply	[flat|nested] 18+ messages in thread
[parent not found: <20070724020043.189132028@mail.ustc.edu.cn>]
* [PATCH 10/10] filemap: convert some unsigned long to pgoff_t
       [not found] ` <20070724020043.189132028@mail.ustc.edu.cn>
@ 2007-07-24  2:00   ` Fengguang Wu
  0 siblings, 0 replies; 18+ messages in thread
From: Fengguang Wu @ 2007-07-24  2:00 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

[-- Attachment #1: pgoff_t.patch --]
[-- Type: text/plain, Size: 7072 bytes --]

Convert some 'unsigned long' to pgoff_t.

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 include/linux/pagemap.h |   23 ++++++++++++-----------
 mm/filemap.c            |   32 ++++++++++++++++----------------
 2 files changed, 28 insertions(+), 27 deletions(-)

--- linux-2.6.22-rc6-mm1.orig/include/linux/pagemap.h
+++ linux-2.6.22-rc6-mm1/include/linux/pagemap.h
@@ -83,11 +83,11 @@ static inline struct page *page_cache_al
 typedef int filler_t(void *, struct page *);
 
 extern struct page * find_get_page(struct address_space *mapping,
-				unsigned long index);
+				pgoff_t index);
 extern struct page * find_lock_page(struct address_space *mapping,
-				unsigned long index);
+				pgoff_t index);
 extern struct page * find_or_create_page(struct address_space *mapping,
-				unsigned long index, gfp_t gfp_mask);
+				pgoff_t index, gfp_t gfp_mask);
 unsigned find_get_pages(struct address_space *mapping, pgoff_t start,
 			unsigned int nr_pages, struct page **pages);
 unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t start,
@@ -100,41 +100,42 @@ struct page *__grab_cache_page(struct ad
 /*
  * Returns locked page at given index in given cache, creating it if needed.
  */
-static inline struct page *grab_cache_page(struct address_space *mapping, unsigned long index)
+static inline struct page *grab_cache_page(struct address_space *mapping,
+							pgoff_t index)
 {
 	return find_or_create_page(mapping, index, mapping_gfp_mask(mapping));
 }
 
 extern struct page * grab_cache_page_nowait(struct address_space *mapping,
-				unsigned long index);
+				pgoff_t index);
 extern struct page * read_cache_page_async(struct address_space *mapping,
-				unsigned long index, filler_t *filler,
+				pgoff_t index, filler_t *filler,
 				void *data);
 extern struct page * read_cache_page(struct address_space *mapping,
-				unsigned long index, filler_t *filler,
+				pgoff_t index, filler_t *filler,
 				void *data);
 extern int read_cache_pages(struct address_space *mapping,
 		struct list_head *pages, filler_t *filler, void *data);
 
 static inline struct page *read_mapping_page_async(
 				struct address_space *mapping,
-				unsigned long index, void *data)
+				pgoff_t index, void *data)
 {
 	filler_t *filler = (filler_t *)mapping->a_ops->readpage;
 	return read_cache_page_async(mapping, index, filler, data);
 }
 
 static inline struct page *read_mapping_page(struct address_space *mapping,
-					     unsigned long index, void *data)
+					     pgoff_t index, void *data)
 {
 	filler_t *filler = (filler_t *)mapping->a_ops->readpage;
 	return read_cache_page(mapping, index, filler, data);
 }
 
 int add_to_page_cache(struct page *page, struct address_space *mapping,
-				unsigned long index, gfp_t gfp_mask);
+				pgoff_t index, gfp_t gfp_mask);
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-				unsigned long index, gfp_t gfp_mask);
+				pgoff_t index, gfp_t gfp_mask);
 extern void remove_from_page_cache(struct page *page);
 extern void __remove_from_page_cache(struct page *page);
--- linux-2.6.22-rc6-mm1.orig/mm/filemap.c
+++ linux-2.6.22-rc6-mm1/mm/filemap.c
@@ -594,7 +594,7 @@ void fastcall __lock_page_nosync(struct
  * Is there a pagecache struct page at the given (mapping, offset) tuple?
  * If yes, increment its refcount and return it; if no, return NULL.
  */
-struct page * find_get_page(struct address_space *mapping, unsigned long offset)
+struct page * find_get_page(struct address_space *mapping, pgoff_t offset)
 {
 	struct page *page;
@@ -618,7 +618,7 @@ EXPORT_SYMBOL(find_get_page);
 * Returns zero if the page was not present. find_lock_page() may sleep.
 */
 struct page *find_lock_page(struct address_space *mapping,
-				unsigned long offset)
+				pgoff_t offset)
 {
 	struct page *page;
@@ -664,7 +664,7 @@ EXPORT_SYMBOL(find_lock_page);
 * memory exhaustion.
 */
 struct page *find_or_create_page(struct address_space *mapping,
-		unsigned long index, gfp_t gfp_mask)
+		pgoff_t index, gfp_t gfp_mask)
 {
 	struct page *page;
 	int err;
@@ -794,7 +794,7 @@ EXPORT_SYMBOL(find_get_pages_tag);
 * and deadlock against the caller's locked page.
 */
 struct page *
-grab_cache_page_nowait(struct address_space *mapping, unsigned long index)
+grab_cache_page_nowait(struct address_space *mapping, pgoff_t index)
 {
 	struct page *page = find_get_page(mapping, index);
@@ -870,10 +870,10 @@ void do_generic_mapping_read(struct addr
 			     read_actor_t actor)
 {
 	struct inode *inode = mapping->host;
-	unsigned long index;
-	unsigned long offset;
-	unsigned long last_index;
-	unsigned long prev_index;
+	pgoff_t index;
+	pgoff_t last_index;
+	pgoff_t prev_index;
+	unsigned long offset;      /* offset into pagecache page */
 	unsigned int prev_offset;
 	int error;
@@ -885,7 +885,7 @@ void do_generic_mapping_read(struct addr
 	for (;;) {
 		struct page *page;
-		unsigned long end_index;
+		pgoff_t end_index;
 		loff_t isize;
 		unsigned long nr, ret;
@@ -1255,7 +1255,7 @@ EXPORT_SYMBOL(generic_file_sendfile);
 
 static ssize_t
 do_readahead(struct address_space *mapping, struct file *filp,
-	     unsigned long index, unsigned long nr)
+	     pgoff_t index, unsigned long nr)
 {
 	if (!mapping || !mapping->a_ops || !mapping->a_ops->readpage)
 		return -EINVAL;
@@ -1275,8 +1275,8 @@ asmlinkage ssize_t sys_readahead(int fd,
 	if (file) {
 		if (file->f_mode & FMODE_READ) {
 			struct address_space *mapping = file->f_mapping;
-			unsigned long start = offset >> PAGE_CACHE_SHIFT;
-			unsigned long end = (offset + count - 1) >> PAGE_CACHE_SHIFT;
+			pgoff_t start = offset >> PAGE_CACHE_SHIFT;
+			pgoff_t end = (offset + count - 1) >> PAGE_CACHE_SHIFT;
 			unsigned long len = end - start + 1;
 			ret = do_readahead(mapping, file, start, len);
 		}
@@ -1294,7 +1294,7 @@ asmlinkage ssize_t sys_readahead(int fd,
 * This adds the requested page to the page cache if it isn't already there,
 * and schedules an I/O to read in its contents from disk.
 */
-static int fastcall page_cache_read(struct file * file, unsigned long offset)
+static int fastcall page_cache_read(struct file * file, pgoff_t offset)
 {
 	struct address_space *mapping = file->f_mapping;
 	struct page *page;
@@ -1540,7 +1540,7 @@ EXPORT_SYMBOL(generic_file_mmap);
 EXPORT_SYMBOL(generic_file_readonly_mmap);
 
 static struct page *__read_cache_page(struct address_space *mapping,
-				unsigned long index,
+				pgoff_t index,
 				int (*filler)(void *,struct page*),
 				void *data)
 {
@@ -1574,7 +1574,7 @@ repeat:
 * after submitting it to the filler.
 */
 struct page *read_cache_page_async(struct address_space *mapping,
-				unsigned long index,
+				pgoff_t index,
 				int (*filler)(void *,struct page*),
 				void *data)
 {
@@ -1623,7 +1623,7 @@ EXPORT_SYMBOL(read_cache_page_async);
 * If the page does not get brought uptodate, return -EIO.
 */
 struct page *read_cache_page(struct address_space *mapping,
-				unsigned long index,
+				pgoff_t index,
 				int (*filler)(void *,struct page*),
 				void *data)
 {

-- 

^ permalink raw reply	[flat|nested] 18+ messages in thread
end of thread, other threads:[~2007-07-24 6:39 UTC | newest]
Thread overview: 18+ messages
[not found] <20070724020009.677809022@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 00/10] readahead cleanups and interleaved readahead take 4 Fengguang Wu
[not found] ` <20070724020041.774421091@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 01/10] readahead: compacting file_ra_state Fengguang Wu
[not found] ` <20070724020042.028909529@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 02/10] readahead: mmap read-around simplification Fengguang Wu
[not found] ` <20070724020042.135275161@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 03/10] readahead: combine file_ra_state.prev_index/prev_offset into prev_pos Fengguang Wu
2007-07-24 3:52 ` Andrew Morton
[not found] ` <20070724034801.GA7310@mail.ustc.edu.cn>
2007-07-24 3:48 ` Fengguang Wu
2007-07-24 3:55 ` Andrew Morton
[not found] ` <20070724043215.GA6317@mail.ustc.edu.cn>
2007-07-24 4:32 ` Fengguang Wu
2007-07-24 4:53 ` Andrew Morton
[not found] ` <20070724062744.GA6686@mail.ustc.edu.cn>
2007-07-24 6:27 ` Fengguang Wu
[not found] ` <20070724043708.GA6627@mail.ustc.edu.cn>
2007-07-24 4:37 ` Fengguang Wu
[not found] ` <20070724020042.319225909@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 04/10] radixtree: introduce radix_tree_scan_hole() Fengguang Wu
[not found] ` <20070724020042.426486651@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 05/10] readahead: basic support of interleaved reads Fengguang Wu
[not found] ` <20070724020042.588573597@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 06/10] readahead: remove the local copy of ra in do_generic_mapping_read() Fengguang Wu
[not found] ` <20070724020042.758542876@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 07/10] readahead: remove several readahead macros Fengguang Wu
[not found] ` <20070724020042.882116065@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 08/10] readahead: remove the limit max_sectors_kb imposed on max_readahead_kb Fengguang Wu
[not found] ` <20070724020043.064974174@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 09/10] filemap: trivial code cleanups Fengguang Wu
[not found] ` <20070724020043.189132028@mail.ustc.edu.cn>
2007-07-24 2:00 ` [PATCH 10/10] filemap: convert some unsigned long to pgoff_t Fengguang Wu