From: Jens Axboe <axboe@suse.de>
To: spaminos-ker@yahoo.com
Cc: Andrew Morton <akpm@osdl.org>, linux-kernel@vger.kernel.org
Subject: Re: cfq misbehaving on 2.6.11-1.14_FC3
Date: Fri, 17 Jun 2005 16:10:40 +0200 [thread overview]
Message-ID: <20050617141039.GL6957@suse.de> (raw)
In-Reply-To: <20050614232154.17077.qmail@web30701.mail.mud.yahoo.com>
On Tue, Jun 14 2005, spaminos-ker@yahoo.com wrote:
> --- Andrew Morton <akpm@osdl.org> wrote:
> > > For some reason, doing a "cp" or appending to files is very fast. I suspect
> > > that vi's mmap calls are the reason for the latency problem.
> >
> > Don't know. Try to work out (from vmstat or diskstats) how much reading is
> > going on.
> >
> > Try stracing the check, see if your version of vi is doing a sync() or
> > something odd like that.
>
> The read/write pattern of the background process is about 35% reads.
>
> vi is indeed doing a sync on the open file, and that's where the time
> was spent. So I just changed my test to simply opening a file,
> writing some data in it and calling flush on the fd.
>
> I also reduced the sleep to 1s instead of 1m, and here are the
> results:
>
> cfq:      20,20,21,21,20,22,20,20,18,21 - avg 20.3
> noop:     12,12,12,13,5,10,10,12,12,13  - avg 11.1
> deadline: 16,9,16,14,10,6,8,8,15,9      - avg 11.1
> as:       6,11,14,11,9,15,16,9,8,9      - avg 10.8
>
> As you can see, cfq stands out (and it should stand out the other
> way).
This doesn't look good (or expected) at all. In the initial posting you
mention this being an ide driver - I want to make sure whether it's hda
or sata driven (eg sda or similar).
> > OK, well if the latency is mainly due to reads then one would hope that the
> > anticipatory scheduler would do better than that.
>
> I suspect the latency is due to writes: it seems (and correct me if I
> am wrong) that write requests are enqueued in one giant queue, thus
> the cfq algorithm can not be applied to the requests.
That is correct. Each process has a sync queue associated with it, async
requests like writes go to a per-device async queue. The cost of
tracking who dirtied a given page was too large and not worth it.
Perhaps rmap could be used to look up who has a specific page mapped...
> But then, why would other i/o schedulers perform better in that case?
Yeah, the global write queue doesn't explain anything; the other
schedulers either share a read/write queue or have a separate single
write queue as well.
I'll try and reproduce (and fix) your problem.
--
Jens Axboe
Thread overview: 16+ messages
2005-06-10 22:54 cfq misbehaving on 2.6.11-1.14_FC3 spaminos-ker
2005-06-11 9:29 ` Andrew Morton
2005-06-14 2:19 ` spaminos-ker
2005-06-14 7:03 ` Andrew Morton
2005-06-14 23:21 ` spaminos-ker
2005-06-17 14:10 ` Jens Axboe [this message]
2005-06-17 15:51 ` Andrea Arcangeli
2005-06-17 18:16 ` Jens Axboe
2005-06-17 23:01 ` spaminos-ker
2005-06-22 9:24 ` Jens Axboe
2005-06-22 17:54 ` spaminos-ker
2005-06-22 20:43 ` Jens Axboe
2005-06-23 18:30 ` spaminos-ker
2005-06-23 23:33 ` Con Kolivas
2005-06-24 2:33 ` spaminos-ker
2005-06-24 3:27 ` Con Kolivas