* RE: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
[not found] ` <1327509623.2720.52.camel@menhir>
@ 2012-01-25 17:32 ` James Bottomley
2012-01-25 18:28 ` Loke, Chetan
0 siblings, 1 reply; 15+ messages in thread
From: James Bottomley @ 2012-01-25 17:32 UTC (permalink / raw)
To: Steven Whitehouse
Cc: Loke, Chetan, Andreas Dilger, Wu Fengguang, Jan Kara, Jeff Moyer,
Andrea Arcangeli, linux-scsi, Mike Snitzer, neilb,
Christoph Hellwig, dm-devel, Boaz Harrosh, linux-fsdevel, lsf-pc,
Chris Mason, Darrick J.Wong, linux-mm
On Wed, 2012-01-25 at 16:40 +0000, Steven Whitehouse wrote:
> Hi,
>
> On Wed, 2012-01-25 at 11:22 -0500, Loke, Chetan wrote:
> > > If the reason for not setting a larger readahead value is just that it
> > > might increase memory pressure and thus decrease performance, is it
> > > possible to use a suitable metric from the VM in order to set the value
> > > automatically according to circumstances?
> > >
> >
> > How about tracking heuristics for 'read-hits from previous read-aheads'? If the hits are in an acceptable range (user-configurable knob?) then keep reading ahead, else back off a little on the read-ahead?
> >
> > > Steve.
> >
> > Chetan Loke
>
> I'd been wondering about something similar to that. The basic scheme
> would be:
>
> - Set a page flag when readahead is performed
> - Clear the flag when the page is read (or on page fault for mmap)
> (i.e. when it is first used after readahead)
>
> Then when the VM scans for pages to eject from cache, check the flag and
> keep an exponential average (probably on a per-cpu basis) of the rate at
> which such flagged pages are ejected. That number can then be used to
> reduce the max readahead value.
>
> The questions are whether this would provide a fast enough reduction in
> readahead size to avoid problems, and whether the extra complication is
> worth it compared with using an overall metric for memory pressure.
>
> There may well be better solutions though,
So there are two separate problems mentioned here. The first is to
ensure that readahead (RA) pages are treated as more disposable than
accessed pages under memory pressure; the second is to derive a
statistic for futile RA (those pages that were read in but never accessed).
The first sounds really like it's an LRU thing rather than adding yet
another page flag. We need a position in the LRU list for never-accessed
pages ... that way they're first to be evicted as memory pressure
rises.
The second is that you can derive this futile readahead statistic from
the LRU position of unaccessed pages ... you could keep this globally.
Now the problem: if you trash all unaccessed RA pages first, you end up
with a situation like playing a movie under moderate memory pressure:
we do RA, then trash the RA pages, then have to re-read them to display
to the user, resulting in an undesirable uptick in read I/O.
Based on the above, it sounds like a better heuristic would be to evict
accessed clean pages at the top of the LRU list before unaccessed clean
pages, because the expectation is that the unaccessed clean pages will be
accessed (that's, after all, why we did the readahead). As RA pages age
in the LRU list, they become candidates for being futile, since they've
been in memory for a while and no one has accessed them, leading to the
conclusion that they aren't ever going to be read.
So I think futility is a measure of unaccessed aging, not necessarily of
ejection (which is a memory pressure response).
James
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 15+ messages in thread
* RE: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 17:32 ` [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics James Bottomley
@ 2012-01-25 18:28 ` Loke, Chetan
2012-01-25 18:37 ` Loke, Chetan
` (2 more replies)
0 siblings, 3 replies; 15+ messages in thread
From: Loke, Chetan @ 2012-01-25 18:28 UTC (permalink / raw)
To: James Bottomley, Steven Whitehouse
Cc: Andreas Dilger, Wu Fengguang, Jan Kara, Jeff Moyer,
Andrea Arcangeli, linux-scsi, Mike Snitzer, neilb,
Christoph Hellwig, dm-devel, Boaz Harrosh, linux-fsdevel, lsf-pc,
Chris Mason, Darrick J.Wong, linux-mm
> So there are two separate problems mentioned here. The first is to
> ensure that readahead (RA) pages are treated as more disposable than
> accessed pages under memory pressure and then to derive a statistic for
> futile RA (those pages that were read in but never accessed).
>
> The first sounds really like its an LRU thing rather than adding yet
> another page flag. We need a position in the LRU list for never
> accessed ... that way they're first to be evicted as memory pressure
> rises.
>
> The second is you can derive this futile readahead statistic from the
> LRU position of unaccessed pages ... you could keep this globally.
>
> Now the problem: if you trash all unaccessed RA pages first, you end up
> with the situation of say playing a movie under moderate memory
> pressure that we do RA, then trash the RA page then have to re-read to display
> to the user resulting in an undesirable uptick in read I/O.
>
> Based on the above, it sounds like a better heuristic would be to evict
> accessed clean pages at the top of the LRU list before unaccessed clean
> pages because the expectation is that the unaccessed clean pages will
> be accessed (that's after all, why we did the readahead). As RA pages age
Well, the movie example is one case where evicting unaccessed pages may not be the right thing to do. But what about a workload that performs a random one-shot search?
The search was done and the RA'd blocks are of no use anymore. So it seems one solution would hurt the other.
We can try to bring in process run-time heuristics while evicting pages. So in the one-shot search case, the application did its thing and went to sleep.
Meanwhile the movie-app has a pretty good run-time and is still running. So be a little gentle(?) on such apps? Selective eviction?
In addition what if we do something like this:
RA block[X], RA block[X+1], ... , RA block[X+m]
Assume a block reads 'N' pages.
Evict unaccessed RA page 'a' from block[X+2] and not [X+1].
We might need tracking at the RA-block level. This way, if a movie touched RA-page 'a' from block[X], it would at least have [X+1] in cache. And while [X+1] is being read, the new slowed-down version of RA will not RA that many blocks.
Also, applications should use xxx_fadvise calls to give us hints...
> James
Chetan Loke
^ permalink raw reply [flat|nested] 15+ messages in thread
* RE: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 18:28 ` Loke, Chetan
@ 2012-01-25 18:37 ` Loke, Chetan
2012-01-25 18:37 ` James Bottomley
2012-01-25 18:44 ` Boaz Harrosh
2 siblings, 0 replies; 15+ messages in thread
From: Loke, Chetan @ 2012-01-25 18:37 UTC (permalink / raw)
To: James Bottomley, Steven Whitehouse
Cc: Andreas Dilger, Wu Fengguang, Jan Kara, Jeff Moyer,
Andrea Arcangeli, linux-scsi, Mike Snitzer, neilb,
Christoph Hellwig, dm-devel, Boaz Harrosh, linux-fsdevel, lsf-pc,
Chris Mason, Darrick J.Wong, linux-mm
>
> > So there are two separate problems mentioned here. The first is to
> > ensure that readahead (RA) pages are treated as more disposable than
> > accessed pages under memory pressure and then to derive a statistic for
> > futile RA (those pages that were read in but never accessed).
> >
> > The first sounds really like its an LRU thing rather than adding yet
> > another page flag. We need a position in the LRU list for never
> > accessed ... that way they're first to be evicted as memory pressure
> > rises.
> >
> > The second is you can derive this futile readahead statistic from the
> > LRU position of unaccessed pages ... you could keep this globally.
> >
> > Now the problem: if you trash all unaccessed RA pages first, you end up
> > with the situation of say playing a movie under moderate memory
> > pressure that we do RA, then trash the RA page then have to re-read to display
> > to the user resulting in an undesirable uptick in read I/O.
> >
James - now that I'm thinking about it, I think the movie should be fine, because when we calculate the read-hits from RA'd pages, the movie's RA blocks will get a good hit-ratio and hence its RA'd blocks won't be touched. But then we might need to track the hit-ratio at the RA-block(?) level.
Chetan
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 18:28 ` Loke, Chetan
2012-01-25 18:37 ` Loke, Chetan
@ 2012-01-25 18:37 ` James Bottomley
2012-01-25 20:06 ` Chris Mason
2012-01-26 16:17 ` Loke, Chetan
2012-01-25 18:44 ` Boaz Harrosh
2 siblings, 2 replies; 15+ messages in thread
From: James Bottomley @ 2012-01-25 18:37 UTC (permalink / raw)
To: Loke, Chetan
Cc: Steven Whitehouse, Andreas Dilger, Andrea Arcangeli, Jan Kara,
Mike Snitzer, linux-scsi, neilb, dm-devel, Christoph Hellwig,
linux-mm, Jeff Moyer, Wu Fengguang, Boaz Harrosh, linux-fsdevel,
lsf-pc, Chris Mason, Darrick J.Wong
On Wed, 2012-01-25 at 13:28 -0500, Loke, Chetan wrote:
> > So there are two separate problems mentioned here. The first is to
> > ensure that readahead (RA) pages are treated as more disposable than
> > accessed pages under memory pressure and then to derive a statistic for
> > futile RA (those pages that were read in but never accessed).
> >
> > The first sounds really like its an LRU thing rather than adding yet
> > another page flag. We need a position in the LRU list for never
> > accessed ... that way they're first to be evicted as memory pressure
> > rises.
> >
> > The second is you can derive this futile readahead statistic from the
> > LRU position of unaccessed pages ... you could keep this globally.
> >
> > Now the problem: if you trash all unaccessed RA pages first, you end up
> > with the situation of say playing a movie under moderate memory
> > pressure that we do RA, then trash the RA page then have to re-read to display
> > to the user resulting in an undesirable uptick in read I/O.
> >
> > Based on the above, it sounds like a better heuristic would be to evict
> > accessed clean pages at the top of the LRU list before unaccessed clean
> > pages because the expectation is that the unaccessed clean pages will
> > be accessed (that's after all, why we did the readahead). As RA pages age
>
> Well, the movie example is one case where evicting unaccessed page may not be the right thing to do. But what about a workload that perform a random one-shot search?
> The search was done and the RA'd blocks are of no use anymore. So it seems one solution would hurt another.
Well, not really: RA is always wrong for random reads. The whole purpose
of RA rests on the assumption of sequential access patterns.
The point I'm making is that for the case where RA works (sequential
patterns), evicting unaccessed RA pages before accessed ones is the
wrong thing to do, so the heuristic isn't what you first thought of
(evicting unaccessed RA pages first).
For the random read case, either heuristic is wrong, so it doesn't
matter.
However, when you add the futility measure, random-read processes will
end up with aged unaccessed RA pages, so their RA windows will get closed.
> We can try to bring-in process run-time heuristics while evicting pages. So in the one-shot search case, the application did it's thing and went to sleep.
> While the movie-app has a pretty good run-time and is still running. So be a little gentle(?) on such apps? Selective eviction?
>
> In addition what if we do something like this:
>
> RA block[X], RA block[X+1], ... , RA block[X+m]
>
> Assume a block reads 'N' pages.
>
> Evict unaccessed RA page 'a' from block[X+2] and not [X+1].
>
> We might need tracking at the RA-block level. This way if a movie touched RA-page 'a' from block[X], it would at least have [X+1] in cache. And while [X+1] is being read, the new slow-down version of RA will not RA that many blocks.
>
> Also, application's should use xxx_fadvise calls to give us hints...
I think that's a bit over-complex. As long as the futility measure
works, a sequential-pattern read process gets a reasonable RA window.
The trick is to prove that the simple doesn't work before considering
the complex.
James
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 18:28 ` Loke, Chetan
2012-01-25 18:37 ` Loke, Chetan
2012-01-25 18:37 ` James Bottomley
@ 2012-01-25 18:44 ` Boaz Harrosh
2 siblings, 0 replies; 15+ messages in thread
From: Boaz Harrosh @ 2012-01-25 18:44 UTC (permalink / raw)
To: Loke, Chetan
Cc: James Bottomley, Steven Whitehouse, Andreas Dilger, Wu Fengguang,
Jan Kara, Jeff Moyer, Andrea Arcangeli, linux-scsi, Mike Snitzer,
neilb, Christoph Hellwig, dm-devel, linux-fsdevel, lsf-pc,
Chris Mason, Darrick J.Wong, linux-mm
On 01/25/2012 08:28 PM, Loke, Chetan wrote:
>> So there are two separate problems mentioned here. The first is to
>> ensure that readahead (RA) pages are treated as more disposable than
>> accessed pages under memory pressure and then to derive a statistic for
>> futile RA (those pages that were read in but never accessed).
>>
>> The first sounds really like its an LRU thing rather than adding yet
>> another page flag. We need a position in the LRU list for never
>> accessed ... that way they're first to be evicted as memory pressure
>> rises.
>>
>> The second is you can derive this futile readahead statistic from the
>> LRU position of unaccessed pages ... you could keep this globally.
>>
>> Now the problem: if you trash all unaccessed RA pages first, you end up
>> with the situation of say playing a movie under moderate memory
>> pressure that we do RA, then trash the RA page then have to re-read to display
>> to the user resulting in an undesirable uptick in read I/O.
>>
>> Based on the above, it sounds like a better heuristic would be to evict
>> accessed clean pages at the top of the LRU list before unaccessed clean
>> pages because the expectation is that the unaccessed clean pages will
>> be accessed (that's after all, why we did the readahead). As RA pages age
>
> Well, the movie example is one case where evicting unaccessed page
> may not be the right thing to do. But what about a workload that
> perform a random one-shot search? The search was done and the RA'd
> blocks are of no use anymore. So it seems one solution would hurt
> another.
>
I think there is a "seeky" flag the Kernel keeps to prevent read-ahead
in the case of seeks.
> We can try to bring-in process run-time heuristics while evicting
> pages. So in the one-shot search case, the application did it's thing
> and went to sleep. While the movie-app has a pretty good run-time and
> is still running. So be a little gentle(?) on such apps? Selective
> eviction?
>
> In addition what if we do something like this:
>
> RA block[X], RA block[X+1], ... , RA block[X+m]
>
> Assume a block reads 'N' pages.
>
> Evict unaccessed RA page 'a' from block[X+2] and not [X+1].
>
> We might need tracking at the RA-block level. This way if a movie
> touched RA-page 'a' from block[X], it would at least have [X+1] in
> cache. And while [X+1] is being read, the new slow-down version of RA
> will not RA that many blocks.
>
> Also, application's should use xxx_fadvise calls to give us hints...
>
Let's start by honoring the number of pages requested by the read()
call, first.
The application is reading 4M and we still send 128K. Don't you
think that would be fadvise enough?
Let's start with the simple stuff.
The only flag I see on read pages is one marking read-ahead
pages that the Kernel initiated without an application request,
like reads beyond the read() call or surrounding an mmap read
that was not actually requested by the application.
For generality we always initiate a read in the page fault
and lose all the wonderful information the app gave us in the
different read APIs. Let's start with that.
>
>> James
>
> Chetan Loke
Boaz
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 18:37 ` James Bottomley
@ 2012-01-25 20:06 ` Chris Mason
2012-01-25 22:46 ` Andrea Arcangeli
2012-01-26 22:38 ` Dave Chinner
2012-01-26 16:17 ` Loke, Chetan
1 sibling, 2 replies; 15+ messages in thread
From: Chris Mason @ 2012-01-25 20:06 UTC (permalink / raw)
To: James Bottomley
Cc: Loke, Chetan, Steven Whitehouse, Andreas Dilger, Andrea Arcangeli,
Jan Kara, Mike Snitzer, linux-scsi, neilb, dm-devel,
Christoph Hellwig, linux-mm, Jeff Moyer, Wu Fengguang,
Boaz Harrosh, linux-fsdevel, lsf-pc, Darrick J.Wong
On Wed, Jan 25, 2012 at 12:37:48PM -0600, James Bottomley wrote:
> On Wed, 2012-01-25 at 13:28 -0500, Loke, Chetan wrote:
> > > So there are two separate problems mentioned here. The first is to
> > > ensure that readahead (RA) pages are treated as more disposable than
> > > accessed pages under memory pressure and then to derive a statistic for
> > > futile RA (those pages that were read in but never accessed).
> > >
> > > The first sounds really like its an LRU thing rather than adding yet
> > > another page flag. We need a position in the LRU list for never
> > > accessed ... that way they're first to be evicted as memory pressure
> > > rises.
> > >
> > > The second is you can derive this futile readahead statistic from the
> > > LRU position of unaccessed pages ... you could keep this globally.
> > >
> > > Now the problem: if you trash all unaccessed RA pages first, you end up
> > > with the situation of say playing a movie under moderate memory
> > > pressure that we do RA, then trash the RA page then have to re-read to display
> > > to the user resulting in an undesirable uptick in read I/O.
> > >
> > > Based on the above, it sounds like a better heuristic would be to evict
> > > accessed clean pages at the top of the LRU list before unaccessed clean
> > > pages because the expectation is that the unaccessed clean pages will
> > > be accessed (that's after all, why we did the readahead). As RA pages age
> >
> > Well, the movie example is one case where evicting unaccessed page may not be the right thing to do. But what about a workload that perform a random one-shot search?
> > The search was done and the RA'd blocks are of no use anymore. So it seems one solution would hurt another.
>
> Well not really: RA is always wrong for random reads. The whole purpose
> of RA is assumption of sequential access patterns.
Just to jump back, Jeff's benchmark that started this (on xfs and ext4):
- buffered 1MB reads get down to the scheduler in 128KB chunks
The really hard part about readahead is that you don't know what
userland wants. In Jeff's test, he's telling the kernel he wants 1MB
ios and our RA engine is doing 128KB ios.
We can talk about scaling up how big the RA windows get on their own,
but if userland asks for 1MB, we don't have to worry about futile RA, we
just have to make sure we don't oom the box trying to honor 1MB reads
from 5000 different procs.
-chris
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 20:06 ` Chris Mason
@ 2012-01-25 22:46 ` Andrea Arcangeli
2012-01-25 22:58 ` Jan Kara
` (2 more replies)
2012-01-26 22:38 ` Dave Chinner
1 sibling, 3 replies; 15+ messages in thread
From: Andrea Arcangeli @ 2012-01-25 22:46 UTC (permalink / raw)
To: Chris Mason, James Bottomley, Loke, Chetan, Steven Whitehouse,
Andreas Dilger, Jan Kara, Mike Snitzer, linux-scsi, neilb,
dm-devel, Christoph Hellwig, linux-mm, Jeff Moyer, Wu Fengguang,
Boaz Harrosh, linux-fsdevel, lsf-pc, Darrick J.Wong
On Wed, Jan 25, 2012 at 03:06:13PM -0500, Chris Mason wrote:
> We can talk about scaling up how big the RA windows get on their own,
> but if userland asks for 1MB, we don't have to worry about futile RA, we
> just have to make sure we don't oom the box trying to honor 1MB reads
> from 5000 different procs.
:) that's for sure if read has a 1M buffer as destination. However
even cp /dev/sda reads/writes through a 32kb buffer, so it's not so
common to read in 1m buffers.
But I also would prefer to stay on the simple side (on a side note we
run out of page flags already on 32bit I think as I had to nuke
PG_buddy already).
Overall I think the risk of the pages being evicted before they can be
copied to userland is quite a minor risk. A 16G system with 100
readers all hitting the disk at the same time, each using 1M readahead,
would still only create 100M of memory pressure... So it'd sure be ok;
100M is less than what kswapd always keeps free, for example. Think of a
4TB system. Especially if a fixed 128k has been ok so far on a 1G system.
If we really want to be more dynamic than a setting at boot depending
on ram size, we could limit it to a fraction of freeable memory (using
similar math to determine_dirtyable_memory, maybe calling it over time
but not too frequently to reduce the overhead). Like if there's 0
memory freeable keep it low. If there's 1G freeable out of that math
(and we assume the readahead hit rate is near 100%), raise the maximum
readahead to 1M even if the total ram is only 1G. So we allow up to
1000 readers before we even recycle the readahead.
I doubt the complexity of tracking exactly how many pages are getting
recycled before they're copied to userland would be worth it, besides
it'd be 0% for 99% of systems and workloads.
Way more important is to have feedback on the readahead hits and to be
sure that when readahead is raised to the maximum the hit rate is near
100%, falling back to lower readaheads if we don't get that hit rate.
But that's not a VM problem; it's a readahead issue only.
The actual VM pressure side of it sounds like a minor issue if the hit
rate of the readahead cache is close to 100%.
The config option is also ok with me, but I think it'd be nicer to set
it at boot depending on ram size (one less option to configure
manually and zero overhead).
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 22:46 ` Andrea Arcangeli
@ 2012-01-25 22:58 ` Jan Kara
2012-01-26 8:59 ` Boaz Harrosh
2012-01-26 16:40 ` Loke, Chetan
2 siblings, 0 replies; 15+ messages in thread
From: Jan Kara @ 2012-01-25 22:58 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Chris Mason, James Bottomley, Loke, Chetan, Steven Whitehouse,
Andreas Dilger, Jan Kara, Mike Snitzer, linux-scsi, neilb,
dm-devel, Christoph Hellwig, linux-mm, Jeff Moyer, Wu Fengguang,
Boaz Harrosh, linux-fsdevel, lsf-pc, Darrick J.Wong
On Wed 25-01-12 23:46:14, Andrea Arcangeli wrote:
> On Wed, Jan 25, 2012 at 03:06:13PM -0500, Chris Mason wrote:
> > We can talk about scaling up how big the RA windows get on their own,
> > but if userland asks for 1MB, we don't have to worry about futile RA, we
> > just have to make sure we don't oom the box trying to honor 1MB reads
> > from 5000 different procs.
>
> :) that's for sure if read has a 1M buffer as destination. However
> even cp /dev/sda reads/writes through a 32kb buffer, so it's not so
> common to read in 1m buffers.
>
> But I also would prefer to stay on the simple side (on a side note we
> run out of page flags already on 32bit I think as I had to nuke
> PG_buddy already).
>
> Overall I think the risk of the pages being evicted before they can be
> copied to userland is quite a minor risk. A 16G system with 100
> readers all hitting on disk at the same time using 100M readahead
> would still only create a 100m memory pressure... So it'd sure be ok,
> 100m is less than what kswapd keeps always free for example. Think a
> 4TB system. Especially if 128k fixed has been ok so far on a 1G system.
>
> If we really want to be more dynamic than a setting at boot depending
> on ram size, we could limit it to a fraction of freeable memory (using
> similar math to determine_dirtyable_memory, maybe calling it over time
> but not too frequently to reduce the overhead). Like if there's 0
> memory freeable keep it low. If there's 1G freeable out of that math
> (and we assume the readahead hit rate is near 100%), raise the maximum
> readahead to 1M even if the total ram is only 1G. So we allow up to
> 1000 readers before we even recycle the readahead.
>
> I doubt the complexity of tracking exactly how many pages are getting
> recycled before they're copied to userland would be worth it, besides
> it'd be 0% for 99% of systems and workloads.
>
> Way more important is to have feedback on the readahead hits and be
> sure when readahead is raised to the maximum the hit rate is near 100%
> and fallback to lower readaheads if we don't get that hit rate. But
> that's not a VM problem and it's a readahead issue only.
>
> The actual VM pressure side of it, sounds minor issue if the hit rate
> of the readahead cache is close to 100%.
>
> The config option is also ok with me, but I think it'd be nicer to set
> it at boot depending on ram size (one less option to configure
> manually and zero overhead).
Yeah. I'd also keep it simple. Tuning max readahead size based on
available memory (and device size) once in a while is about the maximum
complexity I'd consider meaningful. If you have real data that shows
problems which are not solved by that simple strategy, then sure, we can
speak about more complex algorithms. But currently I don't think they are
needed.
Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 22:46 ` Andrea Arcangeli
2012-01-25 22:58 ` Jan Kara
@ 2012-01-26 8:59 ` Boaz Harrosh
2012-01-26 16:40 ` Loke, Chetan
2 siblings, 0 replies; 15+ messages in thread
From: Boaz Harrosh @ 2012-01-26 8:59 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Chris Mason, James Bottomley, Loke, Chetan, Steven Whitehouse,
Andreas Dilger, Jan Kara, Mike Snitzer, linux-scsi, neilb,
dm-devel, Christoph Hellwig, linux-mm, Jeff Moyer, Wu Fengguang,
linux-fsdevel, lsf-pc, Darrick J.Wong
On 01/26/2012 12:46 AM, Andrea Arcangeli wrote:
> On Wed, Jan 25, 2012 at 03:06:13PM -0500, Chris Mason wrote:
>> We can talk about scaling up how big the RA windows get on their own,
>> but if userland asks for 1MB, we don't have to worry about futile RA, we
>> just have to make sure we don't oom the box trying to honor 1MB reads
>> from 5000 different procs.
>
> :) that's for sure if read has a 1M buffer as destination. However
> even cp /dev/sda reads/writes through a 32kb buffer, so it's not so
> common to read in 1m buffers.
>
That's not so true. cp is a bad example because it's brain dead and
someone should fix it. cp performance is terrible. Even KDE's GUI
copy is better.
But applications (and dd users) that do care about read performance
do use large buffers and want the Kernel to not ignore that.
What better hint for the Kernel than the read() destination buffer size?
> But I also would prefer to stay on the simple side (on a side note we
> run out of page flags already on 32bit I think as I had to nuke
> PG_buddy already).
>
So what would be simpler than not ignoring the read() request
size from the application, which would give applications all the
control they need?
<snip> (I Agree)
> The config option is also ok with me, but I think it'd be nicer to set
> it at boot depending on ram size (one less option to configure
> manually and zero overhead).
If you actually take the destination buffer size into account, you'll see
that the read-ahead size becomes less important for the workloads that
actually care. But yes, some mount-time heuristics could be nice, depending
on DEV size and MEM size.
For example, in my file-system with a self-registered BDI I set readahead
sizes according to raid-stripe sizes and such, so as to get good read performance.
And speaking of reads and readahead, what about alignment, of both offset
and length? Though in reads it's not so important. One thing some people
have asked for is raid-verify-reads as a mount option.
Thanks
Boaz
^ permalink raw reply [flat|nested] 15+ messages in thread
* RE: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 18:37 ` James Bottomley
2012-01-25 20:06 ` Chris Mason
@ 2012-01-26 16:17 ` Loke, Chetan
1 sibling, 0 replies; 15+ messages in thread
From: Loke, Chetan @ 2012-01-26 16:17 UTC (permalink / raw)
To: James Bottomley
Cc: Steven Whitehouse, Andreas Dilger, Andrea Arcangeli, Jan Kara,
Mike Snitzer, linux-scsi, neilb, dm-devel, Christoph Hellwig,
linux-mm, Jeff Moyer, Wu Fengguang, Boaz Harrosh, linux-fsdevel,
lsf-pc, Chris Mason, Darrick J.Wong
> > Well, the movie example is one case where evicting unaccessed page may not be the right thing to do. But what about a workload that perform a random one-shot search?
> > The search was done and the RA'd blocks are of no use anymore. So it seems one solution would hurt another.
>
> Well not really: RA is always wrong for random reads. The whole purpose of RA is assumption of sequential access patterns.
>
James - I must agree that 'random' was not the proper choice of word here. What I meant was this -
the search-app reads enough data to trick the lazy/deferred-RA logic. RA thinks, oh well, this is now a sequential pattern, and will RA.
But all this search-app did was keep reading till it found what it was looking for. Once it was done, it went back to sleep waiting for the next query.
Now all that RA data could be a total waste if the read-hit on the RA data-set was 'zero percent'.
Some would argue: how would we (the kernel) know that the next query may not be close to the earlier data-set? Well, we don't, and we may not want to. That is why the application had better know how to use XXX_advise calls. If they are not using them, then well, it's their problem. The app knows the statistics/etc. about the queries - what was used and what wasn't.
> James
Chetan Loke
^ permalink raw reply [flat|nested] 15+ messages in thread
* RE: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 22:46 ` Andrea Arcangeli
2012-01-25 22:58 ` Jan Kara
2012-01-26 8:59 ` Boaz Harrosh
@ 2012-01-26 16:40 ` Loke, Chetan
2012-01-26 17:00 ` Andreas Dilger
2012-02-03 12:37 ` Wu Fengguang
2 siblings, 2 replies; 15+ messages in thread
From: Loke, Chetan @ 2012-01-26 16:40 UTC (permalink / raw)
To: Andrea Arcangeli, Chris Mason, James Bottomley, Steven Whitehouse,
Andreas Dilger, Jan Kara, Mike Snitzer, linux-scsi, neilb,
dm-devel, Christoph Hellwig, linux-mm, Jeff Moyer, Wu Fengguang,
Boaz Harrosh, linux-fsdevel, lsf-pc, Darrick J.Wong
> From: Andrea Arcangeli [mailto:aarcange@redhat.com]
> Sent: January 25, 2012 5:46 PM
....
> Way more important is to have feedback on the readahead hits and be
> sure when readahead is raised to the maximum the hit rate is near 100%
> and fallback to lower readaheads if we don't get that hit rate. But
> that's not a VM problem and it's a readahead issue only.
>
A quick Google search turned up http://kerneltrap.org/node/6642 -
an interesting thread to follow. I haven't looked further as to what was
merged and what wasn't.
A quote from the patch: "It works by peeking into the file cache and
check if there are any history pages present or accessed."
Now I don't claim to understand all of this, but I would think digging
through the file-cache isn't needed(?). So, yes, a simple RA hit-rate
feedback could be fine.
And 'maybe' for adaptive RA just increase the RA-blocks by 1 (or some
N) over a period of time. No more smartness. A simple 10-line function is
easy to debug/maintain. That is, a scaled-down version of
ramp-up/ramp-down. Don't go crazy by ramping up/down after every RA (like
the SCSI LLDD madness). Wait for some event to happen.
I can see where Andrew Morton's concerns could be (just my
interpretation). We may not want to end up like protocol state-machine
code: TCP slow-start, then increase, then congestion, then let's
back off. Hmmm, slow-start is a problem for my business logic, so let's
speed up slow-start ;).
Chetan
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-26 16:40 ` Loke, Chetan
@ 2012-01-26 17:00 ` Andreas Dilger
2012-01-26 17:16 ` Loke, Chetan
2012-02-03 12:37 ` Wu Fengguang
1 sibling, 1 reply; 15+ messages in thread
From: Andreas Dilger @ 2012-01-26 17:00 UTC (permalink / raw)
To: Loke, Chetan
Cc: Andrea Arcangeli, Chris Mason, James Bottomley, Steven Whitehouse,
Andreas Dilger, Jan Kara, Mike Snitzer,
<linux-scsi@vger.kernel.org>, <neilb@suse.de>,
<dm-devel@redhat.com>, Christoph Hellwig,
<linux-mm@kvack.org>, Jeff Moyer, Wu Fengguang,
Boaz Harrosh, <linux-fsdevel@vger.kernel.org>,
<lsf-pc@lists.linux-foundation.org>, Darrick J.Wong
On 2012-01-26, at 9:40, "Loke, Chetan" <Chetan.Loke@netscout.com> wrote:
> And 'maybe' for adaptive RA just increase the RA-blocks by '1'(or some
> N) over period of time. No more smartness. A simple 10 line function is
> easy to debug/maintain. That is, a scaled-down version of
> ramp-up/ramp-down. Don't go crazy by ramping-up/down after every RA(like
> SCSI LLDD madness). Wait for some event to happen.
Doing 1-block readahead increments is a performance disaster on RAID-5/6. That means you seek all the disks, but use only a fraction of the data that the controller read internally and had to parity check.
It makes more sense to keep the read units the same size as the write units (1 MB, or as dictated by RAID geometry), which the filesystem is hopefully also using for allocation. When doing readahead it should fetch the whole chunk at one time, then not do another until it needs another full chunk.
Cheers, Andreas
^ permalink raw reply [flat|nested] 15+ messages in thread
* RE: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-26 17:00 ` Andreas Dilger
@ 2012-01-26 17:16 ` Loke, Chetan
0 siblings, 0 replies; 15+ messages in thread
From: Loke, Chetan @ 2012-01-26 17:16 UTC (permalink / raw)
To: Andreas Dilger
Cc: Andrea Arcangeli, Chris Mason, James Bottomley, Steven Whitehouse,
Jan Kara, Mike Snitzer, linux-scsi, neilb, dm-devel,
Christoph Hellwig, linux-mm, Jeff Moyer, Wu Fengguang,
Boaz Harrosh, linux-fsdevel, lsf-pc, Darrick J.Wong
> > And 'maybe' for adaptive RA just increase the RA-blocks by '1'(or some
> > N) over period of time. No more smartness. A simple 10 line function is
> > easy to debug/maintain. That is, a scaled-down version of
> > ramp-up/ramp-down. Don't go crazy by ramping-up/down after every RA(like
> > SCSI LLDD madness). Wait for some event to happen.
>
> Doing 1-block readahead increments is a performance disaster on
> RAID-5/6. That means you seek all the disks, but use only a fraction of
> the data that the controller read internally and had to parity check.
>
> It makes more sense to keep the read units the same size as write units
> (1 MB or as dictated by RAID geometry) that the filesystem is also
> hopefully using for allocation. When doing a readahead it should fetch
> the whole chunk at one time, then not do another until it needs another
> full chunk.
>
>
I was using it loosely (don't confuse it with 1 block as in 4K :). RA
could be tied to whatever parameters are appropriate for the setup
(underlying backing store) etc.
But the point I'm trying to make is to (maybe) keep the adaptive logic
simple. So if you start with RA-chunk == 512KB/xMB, then when we
increment it, do something like (RA-chunk << N).
BTW, it's not just RAID but also the different abstractions you might have.
A stripe-width worth of RA is still useless if your LVM chunk is N *
stripe-width.
> Cheers, Andreas
Chetan
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-25 20:06 ` Chris Mason
2012-01-25 22:46 ` Andrea Arcangeli
@ 2012-01-26 22:38 ` Dave Chinner
1 sibling, 0 replies; 15+ messages in thread
From: Dave Chinner @ 2012-01-26 22:38 UTC (permalink / raw)
To: Chris Mason, James Bottomley, Loke, Chetan, Steven Whitehouse,
Andreas Dilger, Andrea Arcangeli, Jan Kara, Mike Snitzer,
linux-scsi, neilb, dm-devel, Christoph Hellwig, linux-mm,
Jeff Moyer, Wu Fengguang, Boaz Harrosh, linux-fsdevel, lsf-pc,
Darrick J.Wong
On Wed, Jan 25, 2012 at 03:06:13PM -0500, Chris Mason wrote:
> On Wed, Jan 25, 2012 at 12:37:48PM -0600, James Bottomley wrote:
> > On Wed, 2012-01-25 at 13:28 -0500, Loke, Chetan wrote:
> > > > So there are two separate problems mentioned here. The first is to
> > > > ensure that readahead (RA) pages are treated as more disposable than
> > > > accessed pages under memory pressure and then to derive a statistic for
> > > > futile RA (those pages that were read in but never accessed).
> > > >
> > > > The first sounds really like its an LRU thing rather than adding yet
> > > > another page flag. We need a position in the LRU list for never
> > > > accessed ... that way they're first to be evicted as memory pressure
> > > > rises.
> > > >
> > > > The second is you can derive this futile readahead statistic from the
> > > > LRU position of unaccessed pages ... you could keep this globally.
> > > >
> > > > Now the problem: if you trash all unaccessed RA pages first, you end up
> > > > with the situation of say playing a movie under moderate memory
> > > > pressure that we do RA, then trash the RA page then have to re-read to display
> > > > to the user resulting in an undesirable uptick in read I/O.
> > > >
> > > > Based on the above, it sounds like a better heuristic would be to evict
> > > > accessed clean pages at the top of the LRU list before unaccessed clean
> > > > pages because the expectation is that the unaccessed clean pages will
> > > > be accessed (that's after all, why we did the readahead). As RA pages age
> > >
> > > Well, the movie example is one case where evicting unaccessed pages may not be the right thing to do. But what about a workload that performs a random one-shot search?
> > > The search was done and the RA'd blocks are of no use anymore. So it seems one solution would hurt another.
> >
> > Well not really: RA is always wrong for random reads. The whole purpose
> > of RA is assumption of sequential access patterns.
>
> Just to jump back, Jeff's benchmark that started this (on xfs and ext4):
>
> - buffered 1MB reads get down to the scheduler in 128KB chunks
>
> The really hard part about readahead is that you don't know what
> userland wants. In Jeff's test, he's telling the kernel he wants 1MB
> ios and our RA engine is doing 128KB ios.
>
> We can talk about scaling up how big the RA windows get on their own,
> but if userland asks for 1MB, we don't have to worry about futile RA, we
> just have to make sure we don't oom the box trying to honor 1MB reads
> from 5000 different procs.
Right - if we know the read request is larger than the RA window,
then we should ignore the RA window and just service the request in
a single bio. Well, at least, in chunks as large as the underlying
device will allow us to build....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
2012-01-26 16:40 ` Loke, Chetan
2012-01-26 17:00 ` Andreas Dilger
@ 2012-02-03 12:37 ` Wu Fengguang
1 sibling, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2012-02-03 12:37 UTC (permalink / raw)
To: Loke, Chetan
Cc: Andrea Arcangeli, Chris Mason, James Bottomley, Steven Whitehouse,
Andreas Dilger, Jan Kara, Mike Snitzer, linux-scsi, neilb,
dm-devel, Christoph Hellwig, linux-mm, Jeff Moyer, Boaz Harrosh,
linux-fsdevel, lsf-pc, Darrick J.Wong, Dan Magenheimer
On Thu, Jan 26, 2012 at 11:40:47AM -0500, Loke, Chetan wrote:
> > From: Andrea Arcangeli [mailto:aarcange@redhat.com]
> > Sent: January 25, 2012 5:46 PM
>
> ....
>
> > Way more important is to have feedback on the readahead hits and be
> > sure when readahead is raised to the maximum the hit rate is near 100%
> > and fallback to lower readaheads if we don't get that hit rate. But
> > that's not a VM problem and it's a readahead issue only.
> >
>
> A quick google showed up - http://kerneltrap.org/node/6642
>
> Interesting thread to follow. I haven't looked further as to what was
> merged and what wasn't.
>
> A quote from the patch - " It works by peeking into the file cache and
> check if there are any history pages present or accessed."
> Now I don't understand anything about this but I would think digging the
> file-cache isn't needed(?). So, yes, a simple RA hit-rate feedback could
> be fine.
>
> And 'maybe' for adaptive RA just increase the RA-blocks by '1'(or some
> N) over period of time. No more smartness. A simple 10 line function is
> easy to debug/maintain. That is, a scaled-down version of
> ramp-up/ramp-down. Don't go crazy by ramping-up/down after every RA(like
> SCSI LLDD madness). Wait for some event to happen.
>
> I can see where Andrew Morton's concerns could be(just my
> interpretation). We may not want to end up like a protocol state machine
> code: tcp slow-start, then increase , then congestion, then let's
> back-off. hmmm, slow-start is a problem for my business logic, so let's
> speed-up slow-start ;).
Loke,
Thrashing-safe readahead can work as simply as:
readahead_size = min(nr_history_pages, MAX_READAHEAD_PAGES)
No need for more slow-start or back-off magic.
This is because nr_history_pages is a lower estimate of the thrashing
threshold:
      chunk A           chunk B                 chunk C              head

      l01   l11             l12             l21          l22
| |-->|-->|       |------>|-->|           |------>|
| +-------+       +-----------+           +-------------+                    |
| |   #   |       |       #   |           |       #     |                    |
| +-------+       +-----------+           +-------------+                    |
| |<==============|<===========================|<============================|
  L0              L1                           L2
Let f(l) = L be a map from
l: the number of pages read by the stream
to
L: the number of pages pushed into inactive_list in the meantime
then
f(l01) <= L0
f(l11 + l12) = L1
f(l21 + l22) = L2
...
f(l01 + l11 + ...) <= Sum(L0 + L1 + ...)
<= Length(inactive_list) = f(thrashing-threshold)
So the count of contiguous history pages left in inactive_list is always a
lower estimate of the true thrashing threshold. Given a stable workload,
the readahead size will keep ramping up and then stabilize in the range
(thrashing_threshold/2, thrashing_threshold).
Thanks,
Fengguang
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2012-02-03 12:38 UTC | newest]
Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <20120124151504.GQ4387@shiny>
[not found] ` <20120124165631.GA8941@infradead.org>
[not found] ` <186EA560-1720-4975-AC2F-8C72C4A777A9@dilger.ca>
[not found] ` <x49fwf5kmbl.fsf@segfault.boston.devel.redhat.com>
[not found] ` <20120124184054.GA23227@infradead.org>
[not found] ` <20120124190732.GH4387@shiny>
[not found] ` <x49vco0kj5l.fsf@segfault.boston.devel.redhat.com>
[not found] ` <20120124200932.GB20650@quack.suse.cz>
[not found] ` <x49pqe8kgej.fsf@segfault.boston.devel.redhat.com>
[not found] ` <20120124203936.GC20650@quack.suse.cz>
[not found] ` <20120125032932.GA7150@localhost>
[not found] ` <F6F2DEB8-F096-4A3B-89E3-3A132033BC76@dilger.ca>
[not found] ` <1327502034.2720.23.camel@menhir>
[not found] ` <D3F292ADF945FB49B35E96C94C2061B915A638A6@nsmail.netscout.com>
[not found] ` <1327509623.2720.52.camel@menhir>
2012-01-25 17:32 ` [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics James Bottomley
2012-01-25 18:28 ` Loke, Chetan
2012-01-25 18:37 ` Loke, Chetan
2012-01-25 18:37 ` James Bottomley
2012-01-25 20:06 ` Chris Mason
2012-01-25 22:46 ` Andrea Arcangeli
2012-01-25 22:58 ` Jan Kara
2012-01-26 8:59 ` Boaz Harrosh
2012-01-26 16:40 ` Loke, Chetan
2012-01-26 17:00 ` Andreas Dilger
2012-01-26 17:16 ` Loke, Chetan
2012-02-03 12:37 ` Wu Fengguang
2012-01-26 22:38 ` Dave Chinner
2012-01-26 16:17 ` Loke, Chetan
2012-01-25 18:44 ` Boaz Harrosh