* [PATCH 0/6] readahead cleanups and interleaved readahead [not found] <20070720100740.106917381@mail.ustc.edu.cn> @ 2007-07-20 10:07 ` Fengguang Wu [not found] ` <20070720101123.936159499@mail.ustc.edu.cn> ` (5 subsequent siblings) 6 siblings, 0 replies; 13+ messages in thread From: Fengguang Wu @ 2007-07-20 10:07 UTC (permalink / raw) To: Andrew Morton; +Cc: Linus Torvalds, linux-kernel Andrew, The following readahead updates have been tested and should be OK for 2.6.23 :-) smaller file_ra_state: [PATCH 1/6] compacting file_ra_state [PATCH 2/6] mmap read-around simplification code cleanups: [PATCH 3/6] remove several readahead macros [PATCH 4/6] remove the limit max_sectors_kb imposed on max_readahead_kb support of interleaved reads: [PATCH 5/6] introduce radix_tree_scan_hole() [PATCH 6/6] basic support of interleaved reads The diffstat is block/ll_rw_blk.c | 9 ------ include/linux/fs.h | 13 ++++----- include/linux/mm.h | 2 - include/linux/radix-tree.h | 2 + lib/radix-tree.c | 34 ++++++++++++++++++++++++ mm/filemap.c | 6 ++-- mm/readahead.c | 48 +++++++++++++++++++---------------- 7 files changed, 72 insertions(+), 42 deletions(-) Regards, Fengguang Wu ^ permalink raw reply [flat|nested] 13+ messages in thread
[parent not found: <20070720101123.936159499@mail.ustc.edu.cn>]
* [PATCH 1/6] compacting file_ra_state [not found] ` <20070720101123.936159499@mail.ustc.edu.cn> @ 2007-07-20 10:07 ` Fengguang Wu [not found] ` <p73y7hbux4d.fsf@bingen.suse.de> [not found] ` <1184927207.20032.168.camel@twins> 1 sibling, 1 reply; 13+ messages in thread From: Fengguang Wu @ 2007-07-20 10:07 UTC (permalink / raw) To: Andrew Morton; +Cc: Linus Torvalds, linux-kernel [-- Attachment #1: short-rasize.patch --] [-- Type: text/plain, Size: 2487 bytes --] Use 'unsigned int' instead of 'unsigned long' for the readahead indexes/sizes. This helps reduce memory consumption on 64bit CPU when a lot of files are opened. Note that the (smaller) 32bit index can support up to 16PB file. Which should be sufficient large at least for now. Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- include/linux/fs.h | 10 +++++----- mm/filemap.c | 2 +- mm/readahead.c | 5 +++-- 3 files changed, 9 insertions(+), 8 deletions(-) --- linux-2.6.22-rc6-mm1.orig/include/linux/fs.h +++ linux-2.6.22-rc6-mm1/include/linux/fs.h @@ -771,16 +771,16 @@ struct fown_struct { * Track a single file's readahead state */ struct file_ra_state { - pgoff_t start; /* where readahead started */ - unsigned long size; /* # of readahead pages */ - unsigned long async_size; /* do asynchronous readahead when + unsigned int start; /* where readahead started */ + unsigned int size; /* # of readahead pages */ + unsigned int async_size; /* do asynchronous readahead when there are only # of pages ahead */ - unsigned long ra_pages; /* Maximum readahead window */ + unsigned int ra_pages; /* Maximum readahead window */ unsigned long mmap_hit; /* Cache hit stat for mmap accesses */ unsigned long mmap_miss; /* Cache miss stat for mmap accesses */ - unsigned long prev_index; /* Cache last read() position */ unsigned int prev_offset; /* Offset where last read() ended in a page */ + unsigned int prev_index; /* Cache last read() position */ }; /* --- linux-2.6.22-rc6-mm1.orig/mm/filemap.c +++ 
linux-2.6.22-rc6-mm1/mm/filemap.c @@ -840,7 +840,7 @@ static void shrink_readahead_size_eio(st if (count > 5) return; count++; - printk(KERN_WARNING "Reducing readahead size to %luK\n", + printk(KERN_WARNING "Reducing readahead size to %dK\n", ra->ra_pages << (PAGE_CACHE_SHIFT - 10)); } --- linux-2.6.22-rc6-mm1.orig/mm/readahead.c +++ linux-2.6.22-rc6-mm1/mm/readahead.c @@ -342,11 +342,12 @@ ondemand_readahead(struct address_space bool hit_readahead_marker, pgoff_t offset, unsigned long req_size) { - unsigned long max; /* max readahead pages */ + int max; /* max readahead pages */ int sequential; max = ra->ra_pages; - sequential = (offset - ra->prev_index <= 1UL) || (req_size > max); + sequential = ((unsigned int)offset - ra->prev_index <= 1UL) || + (req_size > max); /* * It's the expected callback offset, assume sequential access. -- ^ permalink raw reply [flat|nested] 13+ messages in thread
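The cast in the last hunk matters because the sequentiality test relies on unsigned wraparound: once prev_index shrinks to 32 bits, the subtraction must wrap at 2^32 for the "same page or next page" check to keep working. A small model of that test (pure illustration, masking Python's unbounded integers to 32 bits):

```python
U32 = (1 << 32) - 1  # model 'unsigned int' arithmetic

def is_sequential(offset, prev_index, req_size, max_pages):
    # offset lands on the same page as the last read, or the next one,
    # or the request is too large to classify -- treat as sequential
    return ((offset - prev_index) & U32) <= 1 or req_size > max_pages

# same page and next page both count as sequential
assert is_sequential(100, 100, 1, 32)
assert is_sequential(101, 100, 1, 32)
# a jump is not sequential (for a small request)
assert not is_sequential(200, 100, 1, 32)
```

The wraparound case is the interesting one: with prev_index at the 32-bit maximum and offset at 0, the masked difference is 1, so the access still counts as sequential.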
[parent not found: <p73y7hbux4d.fsf@bingen.suse.de>]
[parent not found: <20070720121143.GA8584@mail.ustc.edu.cn>]
* Re: [PATCH 1/6] compacting file_ra_state [not found] ` <20070720121143.GA8584@mail.ustc.edu.cn> @ 2007-07-20 12:11 ` Fengguang Wu 2007-07-20 16:02 ` Linus Torvalds 0 siblings, 1 reply; 13+ messages in thread From: Fengguang Wu @ 2007-07-20 12:11 UTC (permalink / raw) To: Andi Kleen; +Cc: Andrew Morton, Linus Torvalds, linux-kernel On Fri, Jul 20, 2007 at 01:34:42PM +0200, Andi Kleen wrote: > Fengguang Wu <wfg@mail.ustc.edu.cn> writes: > > > Use 'unsigned int' instead of 'unsigned long' for the readahead indexes/sizes. > > > > This helps reduce memory consumption on 64bit CPU when > > a lot of files are opened. > > > > Note that the (smaller) 32bit index can support up to 16PB file. ~~~~ sorry, it's 16TB ;) > > Which should be sufficient large at least for now. > > This would add a new limit to 64bit architectures. Surely keeping > start at pgoff_t will not be a big issue? The other fields can be > 32bit. Yeah, it counts for about 4MB memory for 1M opened files. But, the filp size is now 296 on x86_64, so slab-objects-per-page = 13. Adding another 4 bytes, it stays at 13; taking 4 bytes away, it increases to 14. So pgoff_t consumes no more memory, actually. ^ permalink raw reply [flat|nested] 13+ messages in thread
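The objects-per-page arithmetic above can be spelled out with a quick check (assuming 4096-byte pages and no per-object SLAB metadata overhead, which the real allocator may add):

```python
PAGE_SIZE = 4096  # assumed x86_64 page size

def objs_per_page(obj_size):
    # whole slab objects fitting in one page
    return PAGE_SIZE // obj_size

assert objs_per_page(296) == 13      # current filp size: 13 objects per page
assert objs_per_page(296 + 4) == 13  # 4 bytes more: still 13 per page
assert objs_per_page(296 - 4) == 14  # 4 bytes less: one extra object per page
```

So within this size band, keeping start as a full pgoff_t genuinely costs nothing in slab packing terms.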
* Re: [PATCH 1/6] compacting file_ra_state 2007-07-20 12:11 ` Fengguang Wu @ 2007-07-20 16:02 ` Linus Torvalds 0 siblings, 0 replies; 13+ messages in thread From: Linus Torvalds @ 2007-07-20 16:02 UTC (permalink / raw) To: Fengguang Wu; +Cc: Andi Kleen, Andrew Morton, linux-kernel On Fri, 20 Jul 2007, Fengguang Wu wrote: > On Fri, Jul 20, 2007 at 01:34:42PM +0200, Andi Kleen wrote: > > > > This would add a new limit to 64bit architectures. Surely keeping > > start at pgoff_t will not be a big issue? The other fields can be > > 32bit. > > Yeah, it counts for about 4MB memory for 1M opened files. I'd suggest keeping "start" as pgoff_t, and doing what Peter Zijlstra suggested about prev_offset / prev_index and turning that combination into a "loff_t". The latter will cause some more 64-bit operations, but by now I think we can start thinking of 64 bits as the norm, and it will be more logical, methinks. Then, when/if we add a config option to turn pgoff_t into a 32-bit entry for us normal people who think that 16TB is plenty (and would rather have smaller and faster harddisks than go beyond it), that will also solve the "start" issue. Does that sound like a plan? So I'm going to ignore the current series in the hopes that we'll have a new one soon enough. Linus ^ permalink raw reply [flat|nested] 13+ messages in thread
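What is being endorsed here is replacing the (prev_index, prev_offset) pair with one 64-bit byte position. A hypothetical sketch of the equivalence — the names pack_prev/unpack_prev are illustrative, not the kernel's:

```python
PAGE_SHIFT = 12  # assuming 4K pages

def pack_prev(index, offset):
    # one loff_t-style byte position from (page index, in-page offset)
    return (index << PAGE_SHIFT) | offset

def unpack_prev(prev):
    # recover the pair from the packed position
    return prev >> PAGE_SHIFT, prev & ((1 << PAGE_SHIFT) - 1)

assert unpack_prev(pack_prev(5, 123)) == (5, 123)
```

On 64-bit machines the packing is free; on 32-bit it turns comparisons against the last read position into 64-bit operations, which is the extra cost alluded to above.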
[parent not found: <1184927207.20032.168.camel@twins>]
* Re: [PATCH 1/6] compacting file_ra_state [not found] ` <1184927207.20032.168.camel@twins> @ 2007-07-20 10:48 ` Peter Zijlstra [not found] ` <20070720112409.GB6300@mail.ustc.edu.cn> 0 siblings, 1 reply; 13+ messages in thread From: Peter Zijlstra @ 2007-07-20 10:48 UTC (permalink / raw) To: Fengguang Wu; +Cc: Andrew Morton, Linus Torvalds, linux-kernel On Fri, 2007-07-20 at 12:26 +0200, Peter Zijlstra wrote: > On Fri, 2007-07-20 at 18:07 +0800, Fengguang Wu wrote: > > plain text document attachment (short-rasize.patch) > > Use 'unsigned int' instead of 'unsigned long' for the readahead indexes/sizes. > > > > This helps reduce memory consumption on 64bit CPU when > > a lot of files are opened. > > > > Note that the (smaller) 32bit index can support up to 16PB file. > > Which should be sufficient large at least for now. > > Perhaps merge prev_offset and prev_index into a pgoff_t prev? That > should give the same savings on 64bit and be more correct on 32bit. s/pgoff_t/loff_t/ (and how come lkml was not on the CC list?) ^ permalink raw reply [flat|nested] 13+ messages in thread
[parent not found: <20070720112409.GB6300@mail.ustc.edu.cn>]
* Re: [PATCH 1/6] compacting file_ra_state [not found] ` <20070720112409.GB6300@mail.ustc.edu.cn> @ 2007-07-20 11:24 ` Fengguang Wu 0 siblings, 0 replies; 13+ messages in thread From: Fengguang Wu @ 2007-07-20 11:24 UTC (permalink / raw) To: Peter Zijlstra; +Cc: Andrew Morton, Linus Torvalds, linux-kernel On Fri, Jul 20, 2007 at 12:48:59PM +0200, Peter Zijlstra wrote: > On Fri, 2007-07-20 at 12:26 +0200, Peter Zijlstra wrote: > > On Fri, 2007-07-20 at 18:07 +0800, Fengguang Wu wrote: > > > plain text document attachment (short-rasize.patch) > > > Use 'unsigned int' instead of 'unsigned long' for the readahead indexes/sizes. > > > > > > This helps reduce memory consumption on 64bit CPU when > > > a lot of files are opened. > > > > > > Note that the (smaller) 32bit index can support up to 16PB file. > > > Which should be sufficient large at least for now. > > > > Perhaps merge prev_offset and prev_index into a pgoff_t prev? That > > should give the same savings on 64bit and be more correct on 32bit. > > s/pgoff_t/loff_t/ Good idea! This could solve Andi's concern as well :) I'm coding it up, and sure it'll need some more tests... > (and how come lkml was not on the CC list?) :) ^ permalink raw reply [flat|nested] 13+ messages in thread
[parent not found: <20070720101124.083676724@mail.ustc.edu.cn>]
* [PATCH 2/6] mmap read-around simplification [not found] ` <20070720101124.083676724@mail.ustc.edu.cn> @ 2007-07-20 10:07 ` Fengguang Wu 0 siblings, 0 replies; 13+ messages in thread From: Fengguang Wu @ 2007-07-20 10:07 UTC (permalink / raw) To: Andrew Morton; +Cc: Linus Torvalds, linux-kernel [-- Attachment #1: remove-mmap-hit.patch --] [-- Type: text/plain, Size: 1359 bytes --] Fold file_ra_state.mmap_hit into file_ra_state.mmap_miss and make it an int. Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- include/linux/fs.h | 3 +-- mm/filemap.c | 4 ++-- 2 files changed, 3 insertions(+), 4 deletions(-) --- linux-2.6.22-rc6-mm1.orig/include/linux/fs.h +++ linux-2.6.22-rc6-mm1/include/linux/fs.h @@ -777,8 +777,7 @@ struct file_ra_state { there are only # of pages ahead */ unsigned int ra_pages; /* Maximum readahead window */ - unsigned long mmap_hit; /* Cache hit stat for mmap accesses */ - unsigned long mmap_miss; /* Cache miss stat for mmap accesses */ + int mmap_miss; /* Cache miss stat for mmap accesses */ unsigned int prev_offset; /* Offset where last read() ended in a page */ unsigned int prev_index; /* Cache last read() position */ }; --- linux-2.6.22-rc6-mm1.orig/mm/filemap.c +++ linux-2.6.22-rc6-mm1/mm/filemap.c @@ -1389,7 +1389,7 @@ retry_find: * Do we miss much more than hit in this file? If so, * stop bothering with read-ahead. It will only hurt. */ - if (ra->mmap_miss > ra->mmap_hit + MMAP_LOTSAMISS) + if (ra->mmap_miss > MMAP_LOTSAMISS) goto no_cached_page; /* @@ -1415,7 +1415,7 @@ retry_find: } if (!did_readaround) - ra->mmap_hit++; + ra->mmap_miss--; /* * We have a locked page in the page cache, now we need to check -- ^ permalink raw reply [flat|nested] 13+ messages in thread
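After the fold, a single signed counter carries both statistics: page-cache hits decrement it, misses increment it, and read-around stops once misses pull far enough ahead. A rough behavioral model (the MMAP_LOTSAMISS value of 100 is taken from mm/filemap.c of this era and should be treated as an assumption):

```python
MMAP_LOTSAMISS = 100  # assumed threshold from mm/filemap.c

class RaState:
    def __init__(self):
        self.mmap_miss = 0  # replaces the old mmap_hit/mmap_miss pair

def should_readaround(ra):
    # give up on read-around once misses far outnumber hits
    return ra.mmap_miss <= MMAP_LOTSAMISS

def record(ra, hit):
    if hit:
        ra.mmap_miss -= 1  # the new 'ra->mmap_miss--' on a cache hit
    else:
        ra.mmap_miss += 1
```

A long run of misses pushes the counter over the threshold and disables read-around; a later run of hits pays the misses back and re-enables it, which is the self-correcting behavior the two separate counters could not give.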
[parent not found: <20070720101124.221472134@mail.ustc.edu.cn>]
* [PATCH 3/6] remove several readahead macros [not found] ` <20070720101124.221472134@mail.ustc.edu.cn> @ 2007-07-20 10:07 ` Fengguang Wu 0 siblings, 0 replies; 13+ messages in thread From: Fengguang Wu @ 2007-07-20 10:07 UTC (permalink / raw) To: Andrew Morton; +Cc: Linus Torvalds, linux-kernel [-- Attachment #1: readahead-macros-cleanup.patch --] [-- Type: text/plain, Size: 1523 bytes --] Remove VM_MAX_CACHE_HIT, MAX_RA_PAGES and MIN_RA_PAGES. Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- include/linux/mm.h | 2 -- mm/readahead.c | 10 +--------- 2 files changed, 1 insertion(+), 11 deletions(-) --- linux-2.6.22-rc6-mm1.orig/include/linux/mm.h +++ linux-2.6.22-rc6-mm1/include/linux/mm.h @@ -1148,8 +1148,6 @@ int write_one_page(struct page *page, in /* readahead.c */ #define VM_MAX_READAHEAD 128 /* kbytes */ #define VM_MIN_READAHEAD 16 /* kbytes (includes current page) */ -#define VM_MAX_CACHE_HIT 256 /* max pages in a row in cache before - * turning readahead off */ int do_page_cache_readahead(struct address_space *mapping, struct file *filp, pgoff_t offset, unsigned long nr_to_read); --- linux-2.6.22-rc6-mm1.orig/mm/readahead.c +++ linux-2.6.22-rc6-mm1/mm/readahead.c @@ -21,16 +21,8 @@ void default_unplug_io_fn(struct backing } EXPORT_SYMBOL(default_unplug_io_fn); -/* - * Convienent macros for min/max read-ahead pages. - * Note that MAX_RA_PAGES is rounded down, while MIN_RA_PAGES is rounded up. - * The latter is necessary for systems with large page size(i.e. 64k). - */ -#define MAX_RA_PAGES (VM_MAX_READAHEAD*1024 / PAGE_CACHE_SIZE) -#define MIN_RA_PAGES DIV_ROUND_UP(VM_MIN_READAHEAD*1024, PAGE_CACHE_SIZE) - struct backing_dev_info default_backing_dev_info = { - .ra_pages = MAX_RA_PAGES, + .ra_pages = VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE, .state = 0, .capabilities = BDI_CAP_MAP_COPY, .unplug_io_fn = default_unplug_io_fn, -- ^ permalink raw reply [flat|nested] 13+ messages in thread
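The rounding subtlety mentioned in the removed comment can be made concrete: with 4K pages both macros divide evenly, but VM_MIN_READAHEAD is only a quarter of a 64K page, so only the rounded-up form yields a nonzero minimum window there. A quick check (the 64K page size is just an example of a large-page system):

```python
def div_round_up(n, d):
    # ceiling division, as DIV_ROUND_UP does in the kernel
    return -(-n // d)

VM_MAX_READAHEAD = 128  # kbytes
VM_MIN_READAHEAD = 16   # kbytes (includes current page)

# 4K pages: a 32-page default window, rounding direction is moot
assert VM_MAX_READAHEAD * 1024 // 4096 == 32

# 64K pages: plain division would give a zero-page minimum window
assert VM_MIN_READAHEAD * 1024 // 65536 == 0
assert div_round_up(VM_MIN_READAHEAD * 1024, 65536) == 1
```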
[parent not found: <20070720101124.470940735@mail.ustc.edu.cn>]
* [PATCH 5/6] introduce radix_tree_scan_hole() [not found] ` <20070720101124.470940735@mail.ustc.edu.cn> @ 2007-07-20 10:07 ` Fengguang Wu 0 siblings, 0 replies; 13+ messages in thread From: Fengguang Wu @ 2007-07-20 10:07 UTC (permalink / raw) To: Andrew Morton; +Cc: Linus Torvalds, linux-kernel [-- Attachment #1: radixtree-introduce-scan-hole-data-functions.patch --] [-- Type: text/plain, Size: 2215 bytes --] Introduce radix_tree_scan_hole(root, index, max_scan) to scan radix tree for the first hole. It will be used in interleaved readahead. The implementation is dumb and obviously correct. It can help debug the possible smart one in future. Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- include/linux/radix-tree.h | 2 ++ lib/radix-tree.c | 34 ++++++++++++++++++++++++++++++++++ 2 files changed, 36 insertions(+) --- linux-2.6.22-rc6-mm1.orig/include/linux/radix-tree.h +++ linux-2.6.22-rc6-mm1/include/linux/radix-tree.h @@ -155,6 +155,8 @@ void *radix_tree_delete(struct radix_tre unsigned int radix_tree_gang_lookup(struct radix_tree_root *root, void **results, unsigned long first_index, unsigned int max_items); +unsigned long radix_tree_scan_hole(struct radix_tree_root *root, + unsigned long index, unsigned long max_scan); int radix_tree_preload(gfp_t gfp_mask); void radix_tree_init(void); void *radix_tree_tag_set(struct radix_tree_root *root, --- linux-2.6.22-rc6-mm1.orig/lib/radix-tree.c +++ linux-2.6.22-rc6-mm1/lib/radix-tree.c @@ -601,6 +601,40 @@ int radix_tree_tag_get(struct radix_tree EXPORT_SYMBOL(radix_tree_tag_get); #endif +static unsigned long +radix_tree_scan_hole_dumb(struct radix_tree_root *root, + unsigned long index, unsigned long max_scan) +{ + unsigned long i; + + for (i = 0; i < max_scan; i++) { + if (!radix_tree_lookup(root, index)) + break; + if (++index == 0) + break; + } + + return index; +} + +/** + * radix_tree_scan_hole - scan for hole + * @root: radix tree root + * @index: index key + * @max_scan: advice on max items to scan (it 
may scan a little more) + * + * Scan forward from @index for a hole/empty item, stop when + * - hit hole + * - wrap-around to index 0 + * - @max_scan or more items scanned + */ +unsigned long radix_tree_scan_hole(struct radix_tree_root *root, + unsigned long index, unsigned long max_scan) +{ + return radix_tree_scan_hole_dumb(root, index, max_scan); +} +EXPORT_SYMBOL(radix_tree_scan_hole); + static unsigned int __lookup(struct radix_tree_node *slot, void **results, unsigned long index, unsigned int max_items, unsigned long *next_index) -- ^ permalink raw reply [flat|nested] 13+ messages in thread
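A pure-Python model of the dumb scan, with the tree's occupied slots as a set (illustration only — the kernel walks real radix-tree nodes, and the index wraps at the pgoff_t width, modeled here as 64 bits):

```python
WRAP = 1 << 64  # assumed pgoff_t width

def scan_hole_dumb(occupied, index, max_scan):
    """Scan forward from index for the first empty slot.

    Mirrors radix_tree_scan_hole_dumb(): stop on a hole,
    on wrap-around to 0, or after max_scan occupied items."""
    for _ in range(max_scan):
        if index not in occupied:
            break               # found the hole
        index = (index + 1) % WRAP
        if index == 0:
            break               # wrapped around
    return index

# pages 3..5 present: the first hole at or after index 3 is 6
assert scan_hole_dumb({3, 4, 5}, 3, 10) == 6
```

Note the stopping rule: when max_scan items are exhausted the function simply returns the index it reached, whether or not it is a hole — the caller is expected to bound-check the result, as patch 6/6 does.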
[parent not found: <20070720101124.600141268@mail.ustc.edu.cn>]
* [PATCH 6/6] basic support of interleaved reads [not found] ` <20070720101124.600141268@mail.ustc.edu.cn> @ 2007-07-20 10:07 ` Fengguang Wu 0 siblings, 0 replies; 13+ messages in thread From: Fengguang Wu @ 2007-07-20 10:07 UTC (permalink / raw) To: Andrew Morton; +Cc: Linus Torvalds, linux-kernel [-- Attachment #1: readahead-interleaved-reads.patch --] [-- Type: text/plain, Size: 3240 bytes --] This is a simplified version of the pagecache-context-based readahead. It handles the case of multiple threads reading on the same fd, invalidating each other's readahead state. It does the trick by scanning the pagecache and recovering the current read stream's readahead status. The algorithm works in an opportunistic way, in that it does not try to detect interleaved reads _actively_, which would require a probe into the page cache (and a little more overhead for random reads). It only tries to handle a previously started sequential readahead whose state was overwritten by another concurrent stream, and it can do this job pretty well. Negative and positive examples (or what you can expect from it): 1) It cannot detect and serve perfect request-by-request interleaved reads right:

	time	stream 1	stream 2
	0	1
	1			1001
	2	2
	3			1002
	4	3
	5			1003
	6	4
	7			1004
	8	5
	9			1005

Here no single readahead will be carried out. 2) However, if it's two concurrent reads by two threads, the chance of the initial sequential readahead being started is huge. Once the first sequential readahead is started for a stream, this patch will ensure that the readahead window continues to ramp up and won't be disturbed by other streams.

	time	stream 1	stream 2
	0	1
	1	2
	2			1001
	3	3
	4			1002
	5			1003
	6	4
	7	5
	8			1004
	9	6
	10			1005
	11	7
	12			1006
	13			1007

Here stream 1 will start a readahead at page 2, and stream 2 will start its first readahead at page 1003. From then on the two streams will be served right.
Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- mm/readahead.c | 33 +++++++++++++++++++++++---------- 1 file changed, 23 insertions(+), 10 deletions(-) --- linux-2.6.22-rc6-mm1.orig/mm/readahead.c +++ linux-2.6.22-rc6-mm1/mm/readahead.c @@ -363,6 +363,29 @@ ondemand_readahead(struct address_space } /* + * Hit a marked page without valid readahead state. + * E.g. interleaved reads. + * Query the pagecache for async_size, which normally equals to + * readahead size. Ramp it up and use it as the new readahead size. + */ + if (hit_readahead_marker) { + pgoff_t start; + + read_lock_irq(&mapping->tree_lock); + start = radix_tree_scan_hole(&mapping->page_tree, offset, max+1); + read_unlock_irq(&mapping->tree_lock); + + if (!start || start - offset > max) + return 0; + + ra->start = start; + ra->size = start - offset; /* old async_size */ + ra->size = get_next_ra_size(ra, max); + ra->async_size = ra->size; + goto readit; + } + + /* * It may be one of * - first read on start of file * - sequential cache miss @@ -373,16 +396,6 @@ ondemand_readahead(struct address_space ra->size = get_init_ra_size(req_size, max); ra->async_size = ra->size > req_size ? ra->size - req_size : ra->size; - /* - * Hit on a marked page without valid readahead state. - * E.g. interleaved reads. - * Not knowing its readahead pos/size, bet on the minimal possible one. - */ - if (hit_readahead_marker) { - ra->start++; - ra->size = get_next_ra_size(ra, max); - } - readit: return ra_submit(ra, mapping, filp); } -- ^ permalink raw reply [flat|nested] 13+ messages in thread
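The new hit_readahead_marker branch can be modeled end to end: probe the page cache for the first hole after the current offset, take the distance as the clobbered stream's old async_size, ramp it up, and read from the hole onward. The sketch below substitutes a simple doubling for get_next_ra_size (whose exact ramp-up policy is not part of this patch) and a set for the page cache:

```python
def scan_hole(cache, index, max_scan):
    # first uncached page at or after index, giving up after max_scan slots
    for _ in range(max_scan):
        if index not in cache:
            break
        index += 1
    return index

def recover_readahead(cache, offset, max_pages):
    """Rebuild (start, size, async_size) after another stream
    overwrote this stream's file_ra_state."""
    start = scan_hole(cache, offset, max_pages + 1)
    if start - offset > max_pages:
        return None                       # no cached run worth resuming
    old_async = start - offset            # pages still ahead == old async_size
    size = min(2 * old_async, max_pages)  # stand-in for get_next_ra_size()
    return {'start': start, 'size': size, 'async_size': size}

# stream 2 hits its marker at page 1003 while pages 1000..1007 are cached
ra = recover_readahead(set(range(1000, 1008)), 1003, 32)
assert ra == {'start': 1008, 'size': 10, 'async_size': 10}
```

The opportunistic nature shows up in the failure path: if no hole turns up within max_pages, the probe returns nothing and the stream falls back to ordinary (oversize/random) handling rather than guessing.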
[parent not found: <20070720101124.350754890@mail.ustc.edu.cn>]
* [PATCH 4/6] remove the limit max_sectors_kb imposed on max_readahead_kb [not found] ` <20070720101124.350754890@mail.ustc.edu.cn> @ 2007-07-20 10:07 ` Fengguang Wu 2007-07-20 11:07 ` Jens Axboe 1 sibling, 0 replies; 13+ messages in thread From: Fengguang Wu @ 2007-07-20 10:07 UTC (permalink / raw) To: Andrew Morton; +Cc: Linus Torvalds, linux-kernel [-- Attachment #1: remove-readahead-size-limit.patch --] [-- Type: text/plain, Size: 1162 bytes --] Remove the size limit max_sectors_kb imposed on max_readahead_kb. max_sectors_kb cannot grow larger than max_hw_sectors_kb, which can be rather small for some disk drives. Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn> --- block/ll_rw_blk.c | 9 --------- 1 file changed, 9 deletions(-) --- linux-2.6.22-rc6-mm1.orig/block/ll_rw_blk.c +++ linux-2.6.22-rc6-mm1/block/ll_rw_blk.c @@ -3945,7 +3945,6 @@ queue_max_sectors_store(struct request_q max_hw_sectors_kb = q->max_hw_sectors >> 1, page_kb = 1 << (PAGE_CACHE_SHIFT - 10); ssize_t ret = queue_var_store(&max_sectors_kb, page, count); - int ra_kb; if (max_sectors_kb > max_hw_sectors_kb || max_sectors_kb < page_kb) return -EINVAL; @@ -3954,14 +3953,6 @@ queue_max_sectors_store(struct request_q * values synchronously: */ spin_lock_irq(q->queue_lock); - /* - * Trim readahead window as well, if necessary: - */ - ra_kb = q->backing_dev_info.ra_pages << (PAGE_CACHE_SHIFT - 10); - if (ra_kb > max_sectors_kb) - q->backing_dev_info.ra_pages = - max_sectors_kb >> (PAGE_CACHE_SHIFT - 10); - q->max_sectors = max_sectors_kb << 1; spin_unlock_irq(q->queue_lock); -- ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 4/6] remove the limit max_sectors_kb imposed on max_readahead_kb [not found] ` <20070720101124.350754890@mail.ustc.edu.cn> 2007-07-20 10:07 ` [PATCH 4/6] remove the limit max_sectors_kb imposed on max_readahead_kb Fengguang Wu @ 2007-07-20 11:07 ` Jens Axboe [not found] ` <20070720112528.GC6300@mail.ustc.edu.cn> 1 sibling, 1 reply; 13+ messages in thread From: Jens Axboe @ 2007-07-20 11:07 UTC (permalink / raw) To: Fengguang Wu; +Cc: Andrew Morton, Linus Torvalds, linux-kernel On Fri, Jul 20 2007, Fengguang Wu wrote: > Remove the size limit max_sectors_kb imposed on max_readahead_kb. > > max_sectors_kb cannot grow larger than max_hw_sectors_kb, which can be > rather small for some disk drives. Please CC me on core block layer changes, thanks. The patch looks fine though! Your reasoning is a bit odd, though. I think the dependency between the two is just wrong and ugly, so I agree it should go. Acked-by: Jens Axboe <jens.axboe@oracle.com> -- Jens Axboe ^ permalink raw reply [flat|nested] 13+ messages in thread
[parent not found: <20070720112528.GC6300@mail.ustc.edu.cn>]
* Re: [PATCH 4/6] remove the limit max_sectors_kb imposed on max_readahead_kb [not found] ` <20070720112528.GC6300@mail.ustc.edu.cn> @ 2007-07-20 11:25 ` Fengguang Wu 0 siblings, 0 replies; 13+ messages in thread From: Fengguang Wu @ 2007-07-20 11:25 UTC (permalink / raw) To: Jens Axboe; +Cc: Andrew Morton, Linus Torvalds, linux-kernel On Fri, Jul 20, 2007 at 01:07:43PM +0200, Jens Axboe wrote: > On Fri, Jul 20 2007, Fengguang Wu wrote: > > Remove the size limit max_sectors_kb imposed on max_readahead_kb. > > > > max_sectors_kb cannot grow larger than max_hw_sectors_kb, which can be > > rather small for some disk drives. > > Please CC me on core block layer changes, thanks. OK. > The patch looks fine though! Your reasoning is a bit odd, though. I > think the dependency between the two is just wrong and ugly, so I agree it > should go. > > Acked-by: Jens Axboe <jens.axboe@oracle.com> Thank you :-) ^ permalink raw reply [flat|nested] 13+ messages in thread
Thread overview: 13+ messages
[not found] <20070720100740.106917381@mail.ustc.edu.cn>
2007-07-20 10:07 ` [PATCH 0/6] readahead cleanups and interleaved readahead Fengguang Wu
[not found] ` <20070720101123.936159499@mail.ustc.edu.cn>
2007-07-20 10:07 ` [PATCH 1/6] compacting file_ra_state Fengguang Wu
[not found] ` <p73y7hbux4d.fsf@bingen.suse.de>
[not found] ` <20070720121143.GA8584@mail.ustc.edu.cn>
2007-07-20 12:11 ` Fengguang Wu
2007-07-20 16:02 ` Linus Torvalds
[not found] ` <1184927207.20032.168.camel@twins>
2007-07-20 10:48 ` Peter Zijlstra
[not found] ` <20070720112409.GB6300@mail.ustc.edu.cn>
2007-07-20 11:24 ` Fengguang Wu
[not found] ` <20070720101124.083676724@mail.ustc.edu.cn>
2007-07-20 10:07 ` [PATCH 2/6] mmap read-around simplification Fengguang Wu
[not found] ` <20070720101124.221472134@mail.ustc.edu.cn>
2007-07-20 10:07 ` [PATCH 3/6] remove several readahead macros Fengguang Wu
[not found] ` <20070720101124.470940735@mail.ustc.edu.cn>
2007-07-20 10:07 ` [PATCH 5/6] introduce radix_tree_scan_hole() Fengguang Wu
[not found] ` <20070720101124.600141268@mail.ustc.edu.cn>
2007-07-20 10:07 ` [PATCH 6/6] basic support of interleaved reads Fengguang Wu
[not found] ` <20070720101124.350754890@mail.ustc.edu.cn>
2007-07-20 10:07 ` [PATCH 4/6] remove the limit max_sectors_kb imposed on max_readahead_kb Fengguang Wu
2007-07-20 11:07 ` Jens Axboe
[not found] ` <20070720112528.GC6300@mail.ustc.edu.cn>
2007-07-20 11:25 ` Fengguang Wu