public inbox for linux-kernel@vger.kernel.org
From: William Lee Irwin III <wli@holomorphy.com>
To: Andrew Morton <akpm@digeo.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: 2.5.50-bk5-wli-1
Date: Fri, 6 Dec 2002 06:20:24 -0800	[thread overview]
Message-ID: <20021206142024.GP9882@holomorphy.com> (raw)
In-Reply-To: <3DF06AB8.C31E6EFF@digeo.com>

William Lee Irwin III wrote:
>> This is actually the result of quite a bit of handwaving; in the OOM-
>> handling series of patches with the GFP_NOKILL business, I found that
>> tasks would block excessively in wait_on_inode() (the workload was
>> tiobench 16384).

On Fri, Dec 06, 2002 at 01:15:36AM -0800, Andrew Morton wrote:
> Yup.  I haven't really considered or tested any other strategies
> here, but it's part of writer throttling.  If we let these processes
> skip an inode which is already under writeback and go on to the next
> one there is a risk that we end up submitting IO all over the disk.
> Or not.  I have not tried it.

Part of the flavor of the experimentation was to stop throttling things
to death and back again and then to death again and then some, so I
wasn't terribly concerned about performance per se.


On Fri, Dec 06, 2002 at 01:15:36AM -0800, Andrew Morton wrote:
> All those threads would end up throttling in get_request_wait() instead.
> Which is a single waitqueue.  But it is wake-one.

Well, not entirely; tiobench can only handle 256 or 512 threads in a
single invocation, so a number of them were run in parallel, spread
across a bunch of disks and directories.


William Lee Irwin III wrote:
>> The entire summary of results
>> of that series of patches was "highmem drops dead under load".

On Fri, Dec 06, 2002 at 01:15:36AM -0800, Andrew Morton wrote:
> There really is no shame in sending out bugreports, you know.

Ah, but there is! I'm supposed to fix it myself. =)

At any rate, there were more problems floating around beneath it, like
various odd things using contig_page_data (I think buffer.c got fixed),
OOM handling, and data structure proliferation issues arising from
temporary allocations (e.g. poll wait tables, pathname buffers) held
while their owners slept.


William Lee Irwin III wrote:
>> But performance benefits from this minor increase in size should be obvious.

On Fri, Dec 06, 2002 at 01:15:36AM -0800, Andrew Morton wrote:
> But they're all waiting on the same inode ;)

Well, see the above. It actually did make a difference; I was able to
relieve a good deal of the blockage just by jacking up the wait table
and request queue sizes.


Bill
