From: Wu Fengguang <wfg@mail.ustc.edu.cn>
To: Andrew Morton <akpm@osdl.org>
Cc: linux-kernel@vger.kernel.org, Wu Fengguang <wfg@mail.ustc.edu.cn>
Subject: [PATCH 17/33] readahead: context based method
Date: Wed, 24 May 2006 19:13:03 +0800 [thread overview]
Message-ID: <348469544.17438@ustc.edu.cn> (raw)
Message-ID: <20060524111905.586110688@localhost.localdomain> (raw)
In-Reply-To: 20060524111246.420010595@localhost.localdomain
[-- Attachment #1: readahead-method-context.patch --]
[-- Type: text/plain, Size: 17152 bytes --]
This is the slow code path of adaptive read-ahead.
No valid state info is available, so the page cache is queried to obtain
the required position/timing info. This kind of estimation is more
conservative than the stateful method, and also fluctuates more with
load variance.
HOW IT WORKS
============
It works by peeking into the file cache and checking whether any history
pages are present or have been accessed. In this way it can detect almost
all forms of sequential / semi-sequential read patterns, e.g.
- parallel / interleaved sequential scans on one file
- sequential reads across file open/close
- mixed sequential / random accesses
- sparse / skimming sequential read
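The presence check can be sketched with a toy model. The helper names and
the array-based cache below are hypothetical (the kernel queries the page
cache's radix tree, not an array); the point is that each stream only needs
its own immediately preceding history page to be found, so interleaved
streams on one file are each classified as sequential:

```c
#include <stdbool.h>

#define CACHE_PAGES 1024

/* Toy page-cache model: one flag per page index. Purely illustrative --
 * the real code looks pages up in the mapping's radix tree. */
static bool cached[CACHE_PAGES];

/* Context-based hint: a read at @idx looks (semi-)sequential if the
 * page just before it is already present in the cache. */
static bool looks_sequential(unsigned idx)
{
	return idx > 0 && cached[idx - 1];
}

/* Simulate a read: classify first, then populate the cache. */
static bool read_page(unsigned idx)
{
	bool seq = looks_sequential(idx);

	cached[idx] = true;
	return seq;
}
```

With two interleaved readers (one at offset 0, one at offset 500), the first
read of each stream is random, but every later read of either stream finds
its history page and is detected as sequential.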
HOW DATABASES CAN BENEFIT FROM IT
=================================
The adaptive readahead might help db performance in the following cases:
- concurrent sequential scans
- sequential scan on a fragmented table
- index scan with clustered matches
- index scan on majority rows (in case the planner goes wrong)
ALGORITHM STEPS
===============
- look back/forward to find the ra_index;
- look back to estimate a thrashing safe ra_size;
- assemble the next read-ahead request in file_ra_state;
- submit it.
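The steps above can be sketched as follows, with hypothetical helpers over
an array-based cache model (the kernel versions are radix-tree scans done
under the mapping's tree_lock):

```c
#include <stdbool.h>

#define CACHE_PAGES 1024
static bool cached[CACHE_PAGES];

/* Hypothetical request descriptor standing in for file_ra_state. */
struct ra_request {
	unsigned start;	/* ra_index */
	unsigned size;	/* ra_size  */
};

static void seed_run(unsigned first, unsigned n)
{
	while (n--)
		cached[first++] = true;
}

/* Step 1: scan forward from @idx for the first hole; that is where the
 * next read-ahead chunk should begin (ra_index). */
static unsigned find_segtail(unsigned idx, unsigned max_scan)
{
	unsigned i;

	for (i = idx; i < idx + max_scan && i < CACHE_PAGES; i++)
		if (!cached[i])
			return i;
	return 0;	/* no nearby hole: give up */
}

/* Step 2: count contiguous history pages backward from @idx; the run
 * length is a conservative, thrashing-safe bound for ra_size. */
static unsigned count_history(unsigned idx)
{
	unsigned n = 0;

	while (n < idx && cached[idx - 1 - n])
		n++;
	return n;
}

/* Steps 3-4: assemble the request (submission is not modeled here). */
static struct ra_request assemble(unsigned idx, unsigned ra_max)
{
	struct ra_request req = { 0, 0 };
	unsigned start = find_segtail(idx, ra_max);
	unsigned safe = count_history(idx);

	if (start && safe) {
		req.start = start;
		req.size = safe < ra_max ? safe : ra_max;
	}
	return req;
}
```

For a read at index 4 into a file whose pages 0-7 are cached, the sketch
picks ra_index = 8 (first hole) and ra_size = 4 (length of the history run).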
ALGORITHM DYNAMICS
==================
* startup
When a sequential read is detected, the chunk size is set to readahead-min
and grows with each readahead. The growth rate is controlled by
readahead-ratio. When readahead-ratio == 100, the new logic grows chunk
sizes exponentially -- like the current logic, but lags behind it at
early steps.
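One plausible growth rule matching that description (illustrative only,
not the exact kernel formula): enlarge the chunk by readahead_ratio
percent per readahead, so ratio == 100 doubles it per step while lower
ratios ramp up more slowly:

```c
/* Grow the chunk by @ratio percent per readahead, clamped to @ra_max.
 * ratio == 100 doubles the size per step (exponential, like the stock
 * logic); ratio == 50 multiplies it by 1.5. Hypothetical formula. */
static unsigned long grow_chunk(unsigned long size, unsigned ratio,
				unsigned long ra_max)
{
	size += size * ratio / 100;
	return size < ra_max ? size : ra_max;
}
```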
* stabilize
When the chunk size reaches readahead-max, or comes close to
	(readahead-ratio * thrashing-threshold)
it stops growing and stays there.
The main difference from the stock readahead logic occurs at and after
the time the chunk size stops growing:
- The current logic grows the chunk size exponentially in the normal case
  and halves it each time thrashing is seen. That can lead to thrashing
  on almost every readahead for very slow streams.
- The new logic can stop at a size below the thrashing-threshold,
  and stay there stably.
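The stabilization point can be expressed as a clamp (a sketch of the rule
above; stable_chunk is a hypothetical name): the chunk never exceeds
readahead-max nor readahead-ratio percent of the thrashing-threshold, so
a slow stream settles below the threshold instead of oscillating around it:

```c
/* Clamp the would-be chunk size to min(ra_max, thresh * ratio / 100).
 * With a low thrashing threshold the chunk stabilizes strictly below
 * it, instead of being halved after every thrashed readahead. */
static unsigned long stable_chunk(unsigned long size, unsigned long ra_max,
				  unsigned long thresh, unsigned ratio)
{
	unsigned long cap = thresh * ratio / 100;

	if (cap > ra_max)
		cap = ra_max;
	return size < cap ? size : cap;
}
```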
* on stream speed-up or system load drop
thrashing-threshold rises, and the chunk size is likely to be enlarged.
* on stream slow-down or system load spike
thrashing-threshold falls.
If thrashing happens, the next read is treated as a random read,
and the read after that restarts the chunk-size-growing phase.
For a slow stream that has (thrashing-threshold < readahead-max):
- When readahead-ratio = 100, there is only one chunk in cache
  most of the time;
- When readahead-ratio = 50, there are two chunks in cache most
  of the time;
- Lowering readahead-ratio helps gracefully cut down the chunk size
  without thrashing.
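Those chunk counts follow from the stabilized size. Assuming a steady
state where the chunk settles at threshold * ratio / 100 (a model, not
kernel code), roughly 100 / ratio whole chunks are resident at a time:

```c
struct steady {
	unsigned long chunk;	/* stabilized chunk size, in pages */
	unsigned resident;	/* whole chunks resident in cache */
};

/* Steady-state model for a slow stream with
 * thrashing-threshold < readahead-max; assumes 1 <= ratio <= 100. */
static struct steady steady_state(unsigned long threshold, unsigned ratio)
{
	struct steady s;

	s.chunk = threshold * ratio / 100;
	s.resident = (unsigned)(threshold / s.chunk);
	return s;
}
```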
OVERHEADS
=========
The context based method has some overhead over the stateful method, due
to more locking and memory scans.
Running oprofile on the following command shows these differences:
# diff sparse sparse1
total oprofile samples run1 run2
stateful method 560482 558696
stateless method 564463 559413
So the average overhead is about 0.4%.
Detailed diffprofile data:
# diffprofile oprofile.50.stateful oprofile.50.stateless
2998 41.1% isolate_lru_pages
2669 26.4% shrink_zone
1822 14.7% system_call
1419 27.6% radix_tree_delete
1376 14.8% _raw_write_lock
1279 27.4% free_pages_bulk
1111 12.0% _raw_write_unlock
1035 43.3% free_hot_cold_page
849 15.3% unlock_page
786 29.6% page_referenced
710 4.6% kmap_atomic
651 26.4% __pagevec_release_nonlru
586 16.1% __rmqueue
578 11.3% find_get_page
481 15.5% page_waitqueue
440 6.6% add_to_page_cache
420 33.7% fget_light
260 4.3% get_page_from_freelist
223 13.7% find_busiest_group
221 35.1% mutex_debug_check_no_locks_freed
211 0.0% radix_tree_scan_hole
198 35.5% delay_tsc
195 14.8% ext3_get_branch
182 12.6% profile_tick
173 0.0% radix_tree_cache_lookup_node
164 22.9% find_next_bit
162 50.3% page_cache_readahead_adaptive
...
106 0.0% radix_tree_scan_hole_backward
...
-51 -7.6% radix_tree_preload
...
-68 -2.1% radix_tree_insert
...
-87 -2.0% mark_page_accessed
-88 -2.0% __pagevec_lru_add
-103 -7.7% softlockup_tick
-107 -71.8% free_block
-122 -77.7% do_IRQ
-132 -82.0% do_timer
-140 -47.1% ack_edge_ioapic_vector
-168 -81.2% handle_IRQ_event
-192 -35.2% irq_entries_start
-204 -14.8% rw_verify_area
-214 -13.2% account_system_time
-233 -9.5% radix_tree_lookup_node
-234 -16.6% scheduler_tick
-259 -58.7% __do_IRQ
-266 -6.8% put_page
-318 -29.3% rcu_pending
-333 -3.0% do_generic_mapping_read
-337 -28.3% hrtimer_run_queues
-493 -27.0% __rcu_pending
-1038 -9.4% default_idle
-3323 -3.5% __copy_to_user_ll
-10331 -5.9% do_mpage_readpage
# diffprofile oprofile.50.stateful2 oprofile.50.stateless2
1739 1.1% do_mpage_readpage
833 0.9% __copy_to_user_ll
340 21.3% find_busiest_group
288 9.5% free_hot_cold_page
261 4.6% _raw_read_unlock
239 3.9% get_page_from_freelist
201 0.0% radix_tree_scan_hole
163 14.3% raise_softirq
160 0.0% radix_tree_cache_lookup_node
160 11.8% update_process_times
136 9.3% fget_light
121 35.1% page_cache_readahead_adaptive
117 36.0% restore_all
117 2.8% mark_page_accessed
109 6.4% rebalance_tick
107 9.4% sys_read
102 0.0% radix_tree_scan_hole_backward
...
63 4.0% readahead_cache_hit
...
-10 -15.9% radix_tree_node_alloc
...
-39 -1.7% radix_tree_lookup_node
-39 -10.3% irq_entries_start
-43 -1.3% radix_tree_insert
...
-47 -4.6% __do_page_cache_readahead
-64 -9.3% radix_tree_preload
-65 -5.4% rw_verify_area
-65 -2.2% vfs_read
-70 -4.7% timer_interrupt
-71 -1.0% __wake_up_bit
-73 -1.1% radix_tree_delete
-79 -12.6% __mod_page_state_offset
-94 -1.8% __find_get_block
-94 -2.2% __pagevec_lru_add
-102 -1.7% free_pages_bulk
-116 -1.3% _raw_read_lock
-123 -7.4% do_sync_read
-130 -8.4% ext3_get_blocks_handle
-142 -3.8% put_page
-146 -7.9% mpage_readpages
-147 -5.6% apic_timer_interrupt
-168 -1.6% _raw_write_unlock
-172 -5.0% page_referenced
-206 -3.2% unlock_page
-212 -15.0% restore_nocheck
-213 -2.1% default_idle
-245 -5.0% __rmqueue
-278 -4.3% find_get_page
-282 -2.1% system_call
-287 -11.8% run_timer_softirq
-300 -2.7% _raw_write_lock
-420 -3.2% shrink_zone
-661 -5.7% isolate_lru_pages
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/readahead.c | 329 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 329 insertions(+)
--- linux-2.6.17-rc4-mm3.orig/mm/readahead.c
+++ linux-2.6.17-rc4-mm3/mm/readahead.c
@@ -1185,6 +1185,335 @@ state_based_readahead(struct address_spa
}
/*
+ * Page cache context based estimation of read-ahead/look-ahead size/index.
+ *
+ * The logic first looks around to find the start point of next read-ahead,
+ * and then, if necessary, looks backward in the inactive_list to get an
+ * estimation of the thrashing-threshold.
+ *
+ * The estimation theory can be illustrated with figure:
+ *
+ * chunk A chunk B chunk C head
+ *
+ * l01 l11 l12 l21 l22
+ *| |-->|-->| |------>|-->| |------>|
+ *| +-------+ +-----------+ +-------------+ |
+ *| | # | | # | | # | |
+ *| +-------+ +-----------+ +-------------+ |
+ *| |<==============|<===========================|<============================|
+ * L0 L1 L2
+ *
+ * Let f(l) = L be a map from
+ * l: the number of pages read by the stream
+ * to
+ * L: the number of pages pushed into inactive_list in the mean time
+ * then
+ * f(l01) <= L0
+ * f(l11 + l12) = L1
+ * f(l21 + l22) = L2
+ * ...
+ * f(l01 + l11 + ...) <= Sum(L0 + L1 + ...)
+ * <= Length(inactive_list) = f(thrashing-threshold)
+ *
+ * So the count of continuous history pages left in the inactive_list is always
+ * a lower estimate of the true thrashing-threshold.
+ */
+
+#define PAGE_REFCNT_0 0
+#define PAGE_REFCNT_1 (1 << PG_referenced)
+#define PAGE_REFCNT_2 (1 << PG_active)
+#define PAGE_REFCNT_3 ((1 << PG_active) | (1 << PG_referenced))
+#define PAGE_REFCNT_MASK PAGE_REFCNT_3
+
+/*
+ * STATUS REFERENCE COUNT
+ * __ 0
+ * _R PAGE_REFCNT_1
+ * A_ PAGE_REFCNT_2
+ * AR PAGE_REFCNT_3
+ *
+ * A/R: Active / Referenced
+ */
+static inline unsigned long page_refcnt(struct page *page)
+{
+ return page->flags & PAGE_REFCNT_MASK;
+}
+
+/*
+ * STATUS REFERENCE COUNT TYPE
+ * __ 0 fresh
+ * _R PAGE_REFCNT_1 stale
+ * A_ PAGE_REFCNT_2 disturbed once
+ * AR PAGE_REFCNT_3 disturbed twice
+ *
+ * A/R: Active / Referenced
+ */
+static inline unsigned long cold_page_refcnt(struct page *page)
+{
+ if (!page || PageActive(page))
+ return 0;
+
+ return page_refcnt(page);
+}
+
+/*
+ * Find past-the-end index of the segment at @index.
+ */
+static pgoff_t find_segtail(struct address_space *mapping,
+ pgoff_t index, unsigned long max_scan)
+{
+ pgoff_t ra_index;
+
+ cond_resched();
+ read_lock_irq(&mapping->tree_lock);
+ ra_index = radix_tree_scan_hole(&mapping->page_tree, index, max_scan);
+ read_unlock_irq(&mapping->tree_lock);
+
+ if (ra_index <= index + max_scan)
+ return ra_index;
+ else
+ return 0;
+}
+
+/*
+ * Find past-the-end index of the segment before @index.
+ */
+static pgoff_t find_segtail_backward(struct address_space *mapping,
+ pgoff_t index, unsigned long max_scan)
+{
+ struct radix_tree_cache cache;
+ struct page *page;
+ pgoff_t origin;
+
+ origin = index;
+ if (max_scan > index)
+ max_scan = index;
+
+ cond_resched();
+ radix_tree_cache_init(&cache);
+ read_lock_irq(&mapping->tree_lock);
+ for (; origin - index < max_scan;) {
+ page = radix_tree_cache_lookup(&mapping->page_tree,
+ &cache, --index);
+ if (page) {
+ read_unlock_irq(&mapping->tree_lock);
+ return index + 1;
+ }
+ }
+ read_unlock_irq(&mapping->tree_lock);
+
+ return 0;
+}
+
+/*
+ * Count/estimate cache hits in range [first_index, last_index].
+ * The estimation is simple and optimistic.
+ */
+static int count_cache_hit(struct address_space *mapping,
+ pgoff_t first_index, pgoff_t last_index)
+{
+ struct page *page;
+ int size = last_index - first_index + 1;
+ int count = 0;
+ int i;
+
+ cond_resched();
+ read_lock_irq(&mapping->tree_lock);
+
+ /*
+ * The first page may well be the chunk head and have been accessed,
+ * so it is index 0 that makes the estimation optimistic. This
+ * behavior guarantees a readahead when (size < ra_max) and
+ * (readahead_hit_rate >= 16).
+ */
+ for (i = 0; i < 16;) {
+ page = __find_page(mapping, first_index +
+ size * ((i++ * 29) & 15) / 16);
+ if (cold_page_refcnt(page) >= PAGE_REFCNT_1 && ++count >= 2)
+ break;
+ }
+
+ read_unlock_irq(&mapping->tree_lock);
+
+ return size * count / i;
+}
+
+/*
+ * Look back and check history pages to estimate thrashing-threshold.
+ */
+static unsigned long query_page_cache_segment(struct address_space *mapping,
+ struct file_ra_state *ra,
+ unsigned long *remain, pgoff_t offset,
+ unsigned long ra_min, unsigned long ra_max)
+{
+ pgoff_t index;
+ unsigned long count;
+ unsigned long nr_lookback;
+ struct radix_tree_cache cache;
+
+ /*
+ * Scan backward and check the near @ra_max pages.
+ * The count here determines ra_size.
+ */
+ cond_resched();
+ read_lock_irq(&mapping->tree_lock);
+ index = radix_tree_scan_hole_backward(&mapping->page_tree,
+ offset, ra_max);
+ read_unlock_irq(&mapping->tree_lock);
+
+ *remain = offset - index;
+
+ if (offset == ra->readahead_index && ra_cache_hit_ok(ra))
+ count = *remain;
+ else if (count_cache_hit(mapping, index + 1, offset) *
+ readahead_hit_rate >= *remain)
+ count = *remain;
+ else
+ count = ra_min;
+
+ /*
+ * Unnecessary to count more?
+ */
+ if (count < ra_max)
+ goto out;
+
+ if (unlikely(ra->flags & RA_FLAG_NO_LOOKAHEAD))
+ goto out;
+
+ /*
+ * Check the far pages coarsely.
+ * The enlarged count here helps increase la_size.
+ */
+ nr_lookback = ra_max * (LOOKAHEAD_RATIO + 1) *
+ 100 / (readahead_ratio | 1);
+
+ cond_resched();
+ radix_tree_cache_init(&cache);
+ read_lock_irq(&mapping->tree_lock);
+ for (count += ra_max; count < nr_lookback; count += ra_max) {
+ struct radix_tree_node *node;
+ node = radix_tree_cache_lookup_parent(&mapping->page_tree,
+ &cache, offset - count, 1);
+ if (!node)
+ break;
+ }
+ read_unlock_irq(&mapping->tree_lock);
+
+out:
+ /*
+ * For sequential read that extends from index 0, the counted value
+ * may well be far under the true threshold, so return it unmodified
+ * for further processing in adjust_rala_aggressive().
+ */
+ if (count >= offset)
+ count = offset;
+ else
+ count = max(ra_min, count * readahead_ratio / 100);
+
+ ddprintk("query_page_cache_segment: "
+ "ino=%lu, idx=%lu, count=%lu, remain=%lu\n",
+ mapping->host->i_ino, offset, count, *remain);
+
+ return count;
+}
+
+/*
+ * Determine the request parameters for context based read-ahead that extends
+ * from start of file.
+ *
+ * The major weakness of the stateless method is perhaps the slow ramp-up of
+ * ra_size. The logic tries to make up for this in the important case of
+ * sequential reads that extend from the start of file. In this case, ra_size
+ * is not chosen to make the whole next chunk safe (as in the normal case):
+ * only half of it is safe. The added 'unsafe' half is the look-ahead part,
+ * which is expected to be safeguarded by rescue_pages() when the previous
+ * chunks are lost.
+ */
+static int adjust_rala_aggressive(unsigned long ra_max,
+ unsigned long *ra_size, unsigned long *la_size)
+{
+ pgoff_t index = *ra_size;
+
+ *ra_size -= min(*ra_size, *la_size);
+ *ra_size = *ra_size * readahead_ratio / 100;
+ *la_size = index * readahead_ratio / 100;
+ *ra_size += *la_size;
+
+ if (*ra_size > ra_max)
+ *ra_size = ra_max;
+ if (*la_size > *ra_size)
+ *la_size = *ra_size;
+
+ return 1;
+}
+
+/*
+ * Main function for page context based read-ahead.
+ *
+ * RETURN VALUE HINT
+ * 1 @ra contains a valid ra-request, please submit it
+ * 0 no seq-pattern discovered, please try the next method
+ * -1 please don't do _any_ readahead
+ */
+static int
+try_context_based_readahead(struct address_space *mapping,
+ struct file_ra_state *ra, struct page *prev_page,
+ struct page *page, pgoff_t index,
+ unsigned long ra_min, unsigned long ra_max)
+{
+ pgoff_t ra_index;
+ unsigned long ra_size;
+ unsigned long la_size;
+ unsigned long remain_pages;
+
+ /* Where to start read-ahead?
+ * NFSv3 daemons may process adjacent requests in parallel,
+ * leading to many locally disordered, globally sequential reads.
+ * So do not require nearby history pages to be present or accessed.
+ */
+ if (page) {
+ ra_index = find_segtail(mapping, index, ra_max * 5 / 4);
+ if (!ra_index)
+ return -1;
+ } else if (prev_page || find_page(mapping, index - 1)) {
+ ra_index = index;
+ } else if (readahead_hit_rate > 1) {
+ ra_index = find_segtail_backward(mapping, index,
+ readahead_hit_rate + ra_min);
+ if (!ra_index)
+ return 0;
+ ra_min += 2 * (index - ra_index);
+ index = ra_index; /* pretend the request starts here */
+ } else
+ return 0;
+
+ ra_size = query_page_cache_segment(mapping, ra, &remain_pages,
+ index, ra_min, ra_max);
+
+ la_size = ra_index - index;
+ if (page && remain_pages <= la_size &&
+ remain_pages < index && la_size > 1) {
+ rescue_pages(page, la_size);
+ return -1;
+ }
+
+ if (ra_size == index) {
+ if (!adjust_rala_aggressive(ra_max, &ra_size, &la_size))
+ return -1;
+ ra_set_class(ra, RA_CLASS_CONTEXT_AGGRESSIVE);
+ } else {
+ if (!adjust_rala(ra_max, &ra_size, &la_size))
+ return -1;
+ ra_set_class(ra, RA_CLASS_CONTEXT);
+ }
+
+ ra_set_index(ra, index, ra_index);
+ ra_set_size(ra, ra_size, la_size);
+
+ return 1;
+}
+
+/*
* ra_min is mainly determined by the size of cache memory. Reasonable?
*
* Table of concrete numbers for 4KB page size:
--