From: Andrew Morton <akpm@zip.com.au>
To: Hanna Linder <hannal@us.ibm.com>
Cc: lkml <linux-kernel@vger.kernel.org>
Subject: Re: [CFT] delayed allocation and multipage I/O patches for 2.5.6.
Date: Mon, 18 Mar 2002 12:14:27 -0800 [thread overview]
Message-ID: <3C964AA3.4B85EA0B@zip.com.au> (raw)
In-Reply-To: <3C8D9999.83F991DB@zip.com.au> <7820000.1016478976@w-hlinder.des>
Hanna Linder wrote:
>
> --On Monday, March 11, 2002 22:00:57 -0800 Andrew Morton <akpm@zip.com.au> wrote:
>
> > "help, help - there's no point in just one guy testing this" (thanks Randy).
>
> Will you accept the testing of a gal? ;)
With alacrity :) Thanks.
> >
> > This is an update of the delayed-allocation and multipage pagecache I/O
> > patches. I'm calling this a beta, because it all works, and I have
> > other stuff to do for a while.
> >
>
> Here are the dbench throughput results on an 8-way SMP with 2GB memory.
> These are run with 64 then 128 clients 15 times each averaged. It looked
> pretty good.
> Running with more than 180 clients caused the system to hang; after
> a reset there was much filesystem corruption. This happened twice, probably
> related to filling up disk space.
It could be space-related. A couple of gigs should have been plenty.
One other possible explanation is to do with radix-tree pagecache.
It has to allocate memory to add nodes to the tree. When these
allocations start failing due to out-of-memory, the VM will keep
on calling swap_out() a trillion times without noticing that it
didn't work out. But if this happened, you would have seen a huge
number of "0-order allocation failed" messages.
> There are no ServeRAID drivers for 2.5 yet,
> so the biggest disks on this system are unusable. lockmeter results are forthcoming (in a day or two).
>
> Running dbench on an 8-way SMP 15 times each.
>
> 2.5.6 clean
>
> Clients Avg
>
> 64 37.9821
> 128 29.8258
>
> 2.5.6 with everything.patch
>
> Clients Avg
>
> 64 41.0204
> 128 30.6431
>
That's odd. I'm showing 50% increases in dbench throughput. Not
that there's anything particularly clever about that - these patches
allow the kernel to just throw more memory in dbench's direction, and
it likes that. But it does indicate that something funny is up.
I'll take a closer look - thanks again.
-
Thread overview: 16+ messages
2002-03-12 6:00 [CFT] delayed allocation and multipage I/O patches for 2.5.6 Andrew Morton
2002-03-12 11:18 ` Daniel Phillips
2002-03-12 20:29 ` Andrew Morton
2002-03-12 20:40 ` Daniel Phillips
2002-03-12 11:39 ` Daniel Phillips
2002-03-12 21:00 ` Andrew Morton
2002-03-13 11:58 ` Daniel Phillips
2002-03-13 19:50 ` Andrew Morton
2002-03-13 21:51 ` Mike Fedyk
2002-03-14 11:59 ` Daniel Phillips
2002-03-13 0:42 ` David Woodhouse
2002-03-18 19:16 ` Hanna Linder
2002-03-18 20:14 ` Andrew Morton [this message]
2002-03-18 20:22 ` Hanna Linder
2002-03-18 20:49 ` Andrew Morton
-- strict thread matches above, loose matches on Subject: below --
2002-03-19 0:41 rwhron