public inbox for linux-kernel@vger.kernel.org
From: Jens Axboe <axboe@suse.de>
To: spaminos-ker@yahoo.com
Cc: Andrew Morton <akpm@osdl.org>, linux-kernel@vger.kernel.org
Subject: Re: cfq misbehaving on 2.6.11-1.14_FC3
Date: Wed, 22 Jun 2005 11:24:44 +0200	[thread overview]
Message-ID: <1119432285.3257.5.camel@linux> (raw)
In-Reply-To: <20050617230130.59874.qmail@web30702.mail.mud.yahoo.com>

On Fri, 2005-06-17 at 16:01 -0700, spaminos-ker@yahoo.com wrote:
> I don't know how all this works, but would there be a way to slow down the
> offending writer by not allowing too many pending write requests per process?
> Is there a tunable for the size of the write queue for a given device?
> Reducing it would reduce the throughput, but also the latency.

The 2.4 SUSE kernel actually has something in place to limit in-flight
write requests against a single device. cfq will already limit the
number of write requests you can have in-flight against a single queue,
but that limit is request-based, not size-based.
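(Editor's illustration, not from the thread: on 2.6 the size of the
per-queue request pool is tunable through sysfs, which indirectly bounds
how much async write-back one heavy writer can keep queued. `sda` is a
placeholder device name and writing the file needs root.)

```shell
# Illustrative only: inspect and shrink sda's request pool so a flood of
# async writes blocks sooner in the request allocator.
# sda is a placeholder device; the echo needs root.
cat /sys/block/sda/queue/nr_requests
echo 64 > /sys/block/sda/queue/nr_requests
```

Note this caps the number of requests, not their total size, which is
exactly the distinction made above.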

> Of course, there has to be a way to get this to work right.
> 
> To go back to high latencies, maybe a different problem (but at least closely
> related):
> 
> If I start in the background the command
> dd if=/dev/zero of=/tmp/somefile2 bs=1024
> 
> and then run my test program in a loop, with
> while true ; do time ./io 1; sleep 1s ; done
> 
> I get:
> 
> cfq: 47,33,27,48,32,29,26,49,25,47 -> 36.3 avg
> deadline: 32,28,52,33,35,29,49,39,40,33 -> 37 avg
> noop: 62,47,57,39,59,44,56,49,57,47 -> 51.7 avg
> 
> Now, cfq doesn't behave worse than the others, as expected (why it
> behaved worse with the real daemons, I don't know).
> Still, > 30 seconds has to be improved for cfq.

The problem here is that cfq (and the other io schedulers) still
consider the io async even if fsync() ends up waiting for it to
complete. So there's no real QOS being applied to these pending writes,
and I don't immediately see how we can improve that situation right now.

What file system are you using? I ran your test on ext2, and it didn't
give me more than ~2 seconds of latency for the fsync. I tried reiserfs
now, and it's in the 23-24 second range.
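(Editor's illustration: the `io` program used in the test above isn't
posted in this thread. A crude stand-in using GNU dd, whose `conv=fsync`
flag makes dd call fsync() before exiting, could look like this;
`/tmp/fsync-probe` is an arbitrary path.)

```shell
# Crude fsync latency probe: write 1 MiB, force it to disk with fsync,
# and time the whole operation. Run it while the background dd writer
# is active to see how long a small synchronous write waits behind
# queued async write-back.
start=$(date +%s)
dd if=/dev/zero of=/tmp/fsync-probe bs=1M count=1 conv=fsync 2>/dev/null
end=$(date +%s)
echo "fsync took $((end - start))s"
```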

-- 
Jens Axboe <axboe@suse.de>



Thread overview: 16+ messages
2005-06-10 22:54 cfq misbehaving on 2.6.11-1.14_FC3 spaminos-ker
2005-06-11  9:29 ` Andrew Morton
2005-06-14  2:19   ` spaminos-ker
2005-06-14  7:03     ` Andrew Morton
2005-06-14 23:21       ` spaminos-ker
2005-06-17 14:10         ` Jens Axboe
2005-06-17 15:51           ` Andrea Arcangeli
2005-06-17 18:16             ` Jens Axboe
2005-06-17 23:01           ` spaminos-ker
2005-06-22  9:24             ` Jens Axboe [this message]
2005-06-22 17:54               ` spaminos-ker
2005-06-22 20:43                 ` Jens Axboe
2005-06-23 18:30                   ` spaminos-ker
2005-06-23 23:33                     ` Con Kolivas
2005-06-24  2:33                       ` spaminos-ker
2005-06-24  3:27                         ` Con Kolivas
