From: Andrea Arcangeli <andrea@suse.de>
To: Robert Cohen <robert.cohen@anu.edu.au>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [BENCH] Problems with IO throughput and fairness with 2.4.10 and 2.4.9-ac15
Date: Fri, 12 Oct 2001 10:24:09 +0200 [thread overview]
Message-ID: <20011012102409.S714@athlon.random> (raw)
In-Reply-To: <3BB31F99.941813DD@anu.edu.au>
On Thu, Sep 27, 2001 at 10:46:17PM +1000, Robert Cohen wrote:
> Overall, the total throughput is not that bad, but the fact that it
> achieves this by starving clients to let one client at a time proceed is
> completely unacceptable for a file server.
So the problem here is starvation, if I understand correctly.
This one isn't related to the VM, so it's expected that you don't see
much difference among the different VMs; it's more likely related to
netatalk, TCP, or the I/O elevator.
Anyway, you can pretty much rule out the elevator by running
"elvtune -r 1 -w 1 /dev/hd[abcd]" and seeing whether the starvation goes
away.
> poor throughput that is seen in this test I associate with poor elevator
> performance. If the elevator doesn't group requests enough you get disk
> behaviour like "small read, seek, small read, seek" instead of grouping
> things into large reads or multiple reads between seeks.
If you can hear the seeks, that's actually good for fairness; making the
elevator even more aggressive could only increase the starvation of some
client.
> The problem where one client gets all the bandwidth has to be some kind
> of livelock.
netatalk may be processing the I/O requests unfairly, and if the
unfairness is introduced by netatalk, we can do nothing to fix it from
the kernel side, no matter what TCP and the I/O subsystem do. OTOH you
said that in the "cached" test netatalk was serving files fairly, but
I'd still prefer it if you could reproduce the problem without netatalk:
you can just use an rsh pipe to do the reads and writes of the files
over the network, for example; it should stress the TCP and I/O
subsystems the same way. If you can't reproduce it with rsh, please file
a report with the netatalk people.
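The rsh-pipe idea above can be sketched without rsh at all. Below is a
minimal, hypothetical stand-in (not from this thread; every name in it is
invented) that streams a file over a real local TCP socket in 8k chunks,
the same access pattern the netatalk clients use, so it exercises the TCP
and file I/O paths while leaving netatalk out of the loop:

```python
# Hypothetical stand-in for the suggested "rsh pipe" reproduction:
# stream a file over a real TCP socket in 8k chunks -- the same access
# pattern as the netatalk clients. All names here are invented.
import os
import socket
import tempfile
import threading

CHUNK = 8192  # the benchmark's 8k read/write size

def serve_file(srv, path):
    """Accept one connection on a listening socket and send `path` in 8k chunks."""
    conn, _ = srv.accept()
    with open(path, "rb") as f:
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            conn.sendall(buf)
    conn.close()
    srv.close()

def fetch(port):
    """Read the whole stream back in 8k chunks; return total bytes received."""
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", port))
    total = 0
    while True:
        buf = c.recv(CHUNK)
        if not buf:
            break
        total += len(buf)
    c.close()
    return total

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))       # any free port
    srv.listen(1)
    port = srv.getsockname()[1]

    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(1 << 20))  # 1 MB stand-in for the 30 MB test file
        path = f.name

    t = threading.Thread(target=serve_file, args=(srv, path))
    t.start()
    print(fetch(port))                # bytes received over the socket
    t.join()
    os.unlink(path)
```

Running several of these clients in parallel against files on the same disk
would be the rough equivalent of the multi-client netatalk test.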
I doubt it's the TCP congestion control; of course it's unfair too
across multiple streams, but I wouldn't expect it to produce fairness
results that bad.
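To put a number on the per-client unfairness being reported, one
conventional metric (my addition, not something used in this thread) is
Jain's fairness index over the per-client throughputs: it is 1.0 when all
clients get equal bandwidth and falls toward 1/n when a single client
monopolizes the disk.

```python
# Jain's fairness index: (sum x)^2 / (n * sum x^2).
# 1.0 = perfectly fair; approaches 1/n when one client gets everything.
def jain_index(throughputs):
    n = len(throughputs)
    s = sum(throughputs)
    s2 = sum(x * x for x in throughputs)
    return (s * s) / (n * s2) if s2 else 0.0

if __name__ == "__main__":
    # Five equally served clients (like the fair "cached" case)...
    print(round(jain_index([100, 100, 100, 100, 100]), 3))  # 1.0
    # ...versus one client getting essentially all the bandwidth.
    print(round(jain_index([500, 1, 1, 1, 1]), 3))          # ~0.2, i.e. close to 1/5
```

A run of the benchmark that logged per-client byte counts per interval
could report this index directly instead of eyeballing the tcpdump output.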
> that they are just doing 8k reads and writes. The files are not opened
> O_SYNC and the file server processes aren't doing any fsync calls. This is
ok.
> supported by the fact that the performance is fine with 256 Megs of
> memory.
yes.
Andrea
Thread overview: 11+ messages
2001-09-27 12:46 [BENCH] Problems with IO throughput and fairness with 2.4.10 and 2.4.9-ac15 Robert Cohen
2001-09-28 8:26 ` Stephan von Krawczynski
2001-09-28 9:00 ` linux-2.4.9-ac15 and -ac16 compile error Zakhar Kirpichenko
2001-09-28 10:02 ` Keith Owens
2001-09-28 8:51 ` [BENCH] Problems with IO throughput and fairness with 2.4.10 and 2.4.9-ac15 Gerold Jury
2001-09-28 10:27 ` Andrey Nekrasov
2001-09-28 12:48 ` Gerold Jury
2001-09-28 15:22 ` Steve Lord
2001-09-28 17:58 ` Steve Lord
2001-09-29 14:13 ` Gerold Jury
2001-10-12 8:24 ` Andrea Arcangeli [this message]