From: Wu Fengguang <fengguang.wu@gmail.com>
To: Steven Whitehouse <swhiteho@redhat.com>
Cc: "Loke, Chetan" <Chetan.Loke@netscout.com>,
Andreas Dilger <adilger@dilger.ca>, Jan Kara <jack@suse.cz>,
Jeff Moyer <jmoyer@redhat.com>,
Andrea Arcangeli <aarcange@redhat.com>,
linux-scsi@vger.kernel.org, Mike Snitzer <snitzer@redhat.com>,
neilb@suse.de, Christoph Hellwig <hch@infradead.org>,
dm-devel@redhat.com, Boaz Harrosh <bharrosh@panasas.com>,
linux-fsdevel@vger.kernel.org, lsf-pc@lists.linux-foundation.org,
Chris Mason <chris.mason@oracle.com>,
"Darrick J.Wong" <djwong@us.ibm.com>,
Dan Magenheimer <dan.magenheimer@oracle.com>
Subject: Re: [Lsf-pc] [dm-devel] [LSF/MM TOPIC] a few storage topics
Date: Fri, 3 Feb 2012 20:55:43 +0800
Message-ID: <20120203125543.GA13410@localhost>
In-Reply-To: <1327509623.2720.52.camel@menhir>
On Wed, Jan 25, 2012 at 04:40:23PM +0000, Steven Whitehouse wrote:
> Hi,
>
> On Wed, 2012-01-25 at 11:22 -0500, Loke, Chetan wrote:
> > > If the reason for not setting a larger readahead value is just that it
> > > might increase memory pressure and thus decrease performance, is it
> > > possible to use a suitable metric from the VM in order to set the value
> > > automatically according to circumstances?
> > >
> >
> > How about tracking heuristics for 'read hits from previous readaheads'? If the hits are in an acceptable range (user-configurable knob?) then keep going, else back off a little on the readahead?
> >
> > > Steve.
> >
> > Chetan Loke
>
> I'd been wondering about something similar to that. The basic scheme
> would be:
>
> - Set a page flag when readahead is performed
> - Clear the flag when the page is read (or on page fault for mmap)
> (i.e. when it is first used after readahead)
>
> Then when the VM scans for pages to eject from cache, check the flag and
> keep an exponential average (probably on a per-cpu basis) of the rate at
> which such flagged pages are ejected. That number can then be used to
> reduce the max readahead value.
>
> The questions are whether this would provide a fast enough reduction in
> readahead size to avoid problems, and whether the extra complication is
> worth it compared with using an overall metric for memory pressure.
>
> There may well be better solutions though,
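For concreteness, the scheme quoted above could be mocked up in userspace
roughly as below. This is only a sketch: the page-flag bookkeeping is reduced
to a boolean argument, and the EMA weight (1/16), the function names, and the
linear scaling rule are assumptions for illustration, not anything existing
in the kernel.

#include <stdbool.h>
#include <stdio.h>

#define MAX_RA_PAGES   256   /* 1MB ceiling with 4KB pages */
#define MIN_RA_PAGES   4

static double thrash_rate;   /* EMA of "evicted while still unused", 0..1 */

/* Called when the VM evicts a page; @was_unused_readahead is true if the
 * page still carried the readahead flag, i.e. it was brought in by
 * readahead and never read before being ejected. */
static void ra_account_eviction(bool was_unused_readahead)
{
	/* exponential moving average with weight 1/16 */
	thrash_rate += ((was_unused_readahead ? 1.0 : 0.0) - thrash_rate) / 16;
}

/* Derive the current max readahead size from the thrash rate. */
static unsigned long ra_max_pages(void)
{
	unsigned long max = (unsigned long)(MAX_RA_PAGES * (1.0 - thrash_rate));

	return max > MIN_RA_PAGES ? max : MIN_RA_PAGES;
}

int main(void)
{
	/* simulate a burst of evictions where half the readahead went unused */
	for (int i = 0; i < 64; i++)
		ra_account_eviction(i % 2 == 0);
	printf("thrash_rate=%.2f max_readahead=%lu pages\n",
	       thrash_rate, ra_max_pages());
	return 0;
}

A per-cpu variant would simply replicate thrash_rate per CPU; either way the
resulting number is global across all readers, which is exactly what the
reply below takes issue with.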
The caveat is that on a consistently thrashed machine, the readahead size
is better determined per read stream.
Repeated readahead thrashing typically happens on a file server with a
large number of concurrent clients. For example, with 1000 read streams
each doing 1MB readahead, and two readahead windows per stream, there can
be up to 1000 x 1MB x 2 = 2GB of readahead pages, which are sure to be
thrashed on a server with only 1GB of memory.
Typically the 1000 clients will have different read speeds: a few will be
doing 1MB/s, while most others may be doing 100KB/s. In this case, we
should decrease the readahead size only for the 100KB/s clients. The 1MB/s
clients won't see readahead thrashing at all, and we want them to keep
doing large 1MB I/O to achieve good disk utilization.
So we need something better than the "global feedback" scheme, and we do
have such a solution ;) As said in my other email, the number of history
pages remaining in the page cache is a good estimate of that particular
read stream's thrashing-safe readahead size.
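As a rough userspace illustration of that estimate (all names are
hypothetical, and page_is_cached() merely stands in for a real page cache
lookup):

#include <stdbool.h>
#include <stdio.h>

/* Simulated residency: pretend memory pressure only lets the last
 * RESIDENT_WINDOW pages of this stream survive in the page cache. */
#define RESIDENT_WINDOW 16UL

static unsigned long stream_pos = 1000;  /* stream has read up to here */

static bool page_is_cached(unsigned long index)
{
	return index + RESIDENT_WINDOW >= stream_pos;
}

/* Scan backwards from @offset over at most @max pages and return how many
 * consecutive history pages of this stream are still cached. */
static unsigned long count_history_pages(unsigned long offset,
					 unsigned long max)
{
	unsigned long count = 0;

	while (count < max && offset > 0 && page_is_cached(--offset))
		count++;
	return count;
}

int main(void)
{
	unsigned long max_ra = 256;  /* 1MB with 4KB pages */
	unsigned long safe = count_history_pages(stream_pos, max_ra);
	/* clamp the next window to what history says survives in cache;
	 * a fresh stream with no history falls back to the default */
	unsigned long ra_size = safe ? safe : max_ra;

	printf("history pages resident: %lu -> next readahead: %lu pages\n",
	       safe, ra_size);
	return 0;
}

The point is that the count is inherently per stream: a fast 1MB/s reader
finds most of its history still resident and keeps its large window, while
the slow 100KB/s readers find only a small residue and shrink accordingly.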
Thanks,
Fengguang