public inbox for linux-kernel@vger.kernel.org
* 2.5.50-bk5-wli-1
@ 2002-12-06  8:00 William Lee Irwin III
  2002-12-06  8:24 ` 2.5.50-bk5-wli-1 Andrew Morton
  0 siblings, 1 reply; 5+ messages in thread
From: William Lee Irwin III @ 2002-12-06  8:00 UTC (permalink / raw)
  To: linux-kernel

2.5.50-wli-bk5-1  fix driverfs oops wrt. memblks and nodes
2.5.50-wli-bk5-2  __do_SAK() pidhash conversion
2.5.50-wli-bk5-3  introduce nr_processes() for proc_fill_super()
2.5.50-wli-bk5-4  hugetlbfs compilefix
2.5.50-wli-bk5-5  capset_set_pg() pidhashing conversion
2.5.50-wli-bk5-6  vm86 fixes
2.5.50-wli-bk5-7  UML get_task() pidhash conversion
2.5.50-wli-bk5-8  i386 discontigmem boot speedup
2.5.50-wli-bk5-9  allocate node-local pgdats for i386 discontigmem
2.5.50-wli-bk5-10 remove __has_stopped_jobs()
2.5.50-wli-bk5-11 NUMA-Q PCI workarounds
2.5.50-wli-bk5-12 resize inode cache wait table -- 8 is too small

vs. 2.5.50-bk5. Available from:

ftp://ftp.kernel.org/pub/linux/kernel/people/wli/kernels/2.5.50-bk5-wli-1/

The "theme" of this patchset is basically my pending patch queue, minus
some already-acked-or-included-by-driver-maintainer cleanups. -bk5 is
not included in the supplied patches; apply beforehand.

Bill

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: 2.5.50-bk5-wli-1
  2002-12-06  8:00 2.5.50-bk5-wli-1 William Lee Irwin III
@ 2002-12-06  8:24 ` Andrew Morton
  2002-12-06  8:52   ` 2.5.50-bk5-wli-1 William Lee Irwin III
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2002-12-06  8:24 UTC (permalink / raw)
  To: William Lee Irwin III; +Cc: linux-kernel

William Lee Irwin III wrote:
> 
> 2.5.50-wli-bk5-12 resize inode cache wait table -- 8 is too small

Heh.  I decided to make that really, really, really tiny in the expectation
that if it was _too_ small, someone would notice.

For what workload is the 8 too small, and what is the call path
of the waiters?

(If it is `tiobench 100000000' and the wait is in __writeback_single_inode(),
then we should probably just return from there if !sync and the inode is locked)


* Re: 2.5.50-bk5-wli-1
  2002-12-06  8:24 ` 2.5.50-bk5-wli-1 Andrew Morton
@ 2002-12-06  8:52   ` William Lee Irwin III
  2002-12-06  9:15     ` 2.5.50-bk5-wli-1 Andrew Morton
  0 siblings, 1 reply; 5+ messages in thread
From: William Lee Irwin III @ 2002-12-06  8:52 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

William Lee Irwin III wrote:
>> 2.5.50-wli-bk5-12 resize inode cache wait table -- 8 is too small

On Fri, Dec 06, 2002 at 12:24:07AM -0800, Andrew Morton wrote:
> Heh.  I decided to make that really, really, really tiny in the expectation
> that if it was _too_ small, someone would notice.
> For what workload is the 8 too small, and what is the call path
> of the waiters?
> (If it is `tiobench 100000000' and the wait is in __writeback_single_inode(),
> then we should probably just return from there if !sync and the inode is locked)

This is actually the result of quite a bit of handwaving; in the OOM-
handling series of patches with the GFP_NOKILL business, I found that
tasks would block excessively in wait_on_inode() (which was tiobench
16384). The qualitative evidence points toward the inode table as a
potential bottleneck in the presence of many tasks and/or cpus. No
specific benchmark numbers apply; in the original series (GFP_NOKILL),
I increased this to much larger than 256. The entire summary of results
of that series of patches was "highmem drops dead under load". But
performance benefits from this minor increase in size should be obvious.

Bill


* Re: 2.5.50-bk5-wli-1
  2002-12-06  8:52   ` 2.5.50-bk5-wli-1 William Lee Irwin III
@ 2002-12-06  9:15     ` Andrew Morton
  2002-12-06 14:20       ` 2.5.50-bk5-wli-1 William Lee Irwin III
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2002-12-06  9:15 UTC (permalink / raw)
  To: William Lee Irwin III; +Cc: linux-kernel

William Lee Irwin III wrote:
> 
> William Lee Irwin III wrote:
> >> 2.5.50-wli-bk5-12 resize inode cache wait table -- 8 is too small
> 
> On Fri, Dec 06, 2002 at 12:24:07AM -0800, Andrew Morton wrote:
> > Heh.  I decided to make that really, really, really tiny in the expectation
> > that if it was _too_ small, someone would notice.
> > For what workload is the 8 too small, and what is the call path
> > of the waiters?
> > (If it is `tiobench 100000000' and the wait is in __writeback_single_inode(),
> > then we should probably just return from there if !sync and the inode is locked)
> 
> This is actually the result of quite a bit of handwaving; in the OOM-
> handling series of patches with the GFP_NOKILL business, I found that
> > tasks would block excessively in wait_on_inode() (which was tiobench
> 16384).

Yup.  I haven't really considered or tested any other strategies
here, but it's part of writer throttling.  If we let these processes
skip an inode which is already under writeback and go on to the next
one there is a risk that we end up submitting IO all over the disk.

Or not.  I have not tried it.

All those threads would end up throttling in get_request_wait() instead.
Which is a single waitqueue.  But it is wake-one.

> The entire summary of results
> of that series of patches was "highmem drops dead under load".

There really is no shame in sending out bugreports, you know.

> But performance benefits from this minor increase in size should be obvious.
> 

But they're all waiting on the same inode ;)


* Re: 2.5.50-bk5-wli-1
  2002-12-06  9:15     ` 2.5.50-bk5-wli-1 Andrew Morton
@ 2002-12-06 14:20       ` William Lee Irwin III
  0 siblings, 0 replies; 5+ messages in thread
From: William Lee Irwin III @ 2002-12-06 14:20 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

William Lee Irwin III wrote:
>> This is actually the result of quite a bit of handwaving; in the OOM-
>> handling series of patches with the GFP_NOKILL business, I found that
>> tasks would block excessively in wait_on_inode() (which was tiobench
>> 16384).

On Fri, Dec 06, 2002 at 01:15:36AM -0800, Andrew Morton wrote:
> Yup.  I haven't really considered or tested any other strategies
> here, but it's part of writer throttling.  If we let these processes
> skip an inode which is already under writeback and go on to the next
> one there is a risk that we end up submitting IO all over the disk.
> Or not.  I have not tried it.

Part of the flavor of the experimentation was to stop throttling things
to death and back again and then to death again and then some, so I
wasn't terribly concerned about performance per-se.


On Fri, Dec 06, 2002 at 01:15:36AM -0800, Andrew Morton wrote:
> All those threads would end up throttling in get_request_wait() instead.
> Which is a single waitqueue.  But it is wake-one.

Well, not entirely; tiobench can only handle 256 or 512 threads in a
single invocation, so a number of them were run in parallel, spread
across a bunch of disks and directories.


William Lee Irwin III wrote:
>> The entire summary of results
>> of that series of patches was "highmem drops dead under load".

On Fri, Dec 06, 2002 at 01:15:36AM -0800, Andrew Morton wrote:
> There really is no shame in sending out bugreports, you know.

Ah, but there is! I'm supposed to fix it myself. =)

At any rate, there were more problems floating around beneath it, like
various odd things using contig_page_data (I think buffer.c got fixed)
and OOM handling and data structure proliferation issues arising from
temporary allocations (e.g. poll wait tables, pathname buffers) held
while processes sleep.


William Lee Irwin III wrote:
>> But performance benefits from this minor increase in size should be obvious.

On Fri, Dec 06, 2002 at 01:15:36AM -0800, Andrew Morton wrote:
> But they're all waiting on the same inode ;)

Well, see the above. It actually did make a difference; I was able to
get a good deal of blockage out by just jacking up the wait table and
request queue size.


Bill

