From: Jens Axboe <axboe@suse.de>
To: "Prakash K. Cheemplavam" <prakashkc@gmx.de>
Cc: Andrew Morton <akpm@osdl.org>,
Linux Kernel <linux-kernel@vger.kernel.org>,
nickpiggin@yahoo.com.au
Subject: Re: Time sliced CFQ io scheduler
Date: Fri, 3 Dec 2004 12:29:14 +0100
Message-ID: <20041203112914.GM10492@suse.de>
In-Reply-To: <41B04D8A.7060707@gmx.de>
On Fri, Dec 03 2004, Prakash K. Cheemplavam wrote:
> Jens Axboe schrieb:
> >On Fri, Dec 03 2004, Prakash K. Cheemplavam wrote:
> >
> >>>But at least this patch lets you set slice_sync and slice_async
> >>>separately, if you want to experiment.
> >>
> >>Any idea which values I should try?
> >
> >
> >Just see if the default ones work (or how they work :-)
> >
> >>BTW, I just did my little test on the ide drive and it shows the same
> >>problem, so it is not sata / libata related.
> >
> >
> >Single read/writer case works fine here for me, about half the bandwidth
> >for each. Please show some vmstats for this case, too. Right now I'm not
> >terribly interested in problems with raid alone, as I can poke holes in
> >that. If the single drive case is correct, then we can focus on raid.
>
> I don't have enough space to perform this test on the ide drive, so I
> did it on the sata (single disk). The patch doesn't seem to be better.
> (But on the other hand I haven't tested your first version on a single
> disk.) At least it still doesn't look good enough in my eyes.
>
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  1  3   2704   5368   1528 906540    0    4  2176 24068 1245  743  0  7  0 93
>  0  3   2704   5432   1532 906252    0    0  5072 28160 1277  782  1  8  0 91
>  0  5   2704   5688   1532 906080    0    0  9280  4524 1309  842  1 10  0 89
>  1  3   2704   5232   1544 906208    0    0  6404 76388 1285  716  1 14  0 85
>  0  3   2704   5496   1544 906524    0    0  8328 26624 1301  856  1  8  0 91
>  0  3   2704   5512   1528 906636    0    0  9484 22016 1302  883  1  8  0 91
>  0  3   2704   5816   1500 906296    0    0  5508 10288 1270  749  1  9  0 90
>  0  4   2704   5620   1488 906608    0    0  3076 19920 1267  818  0 13  0 87
>  1  4   2704   5684   1456 906432    0    0  3204 18432 1252  704  1  8  0 91
>  1  3   2704   5504   1408 906168    0    0  5252 28672 1279  777  1 14  0 85
>  0  4   2704   5120   1404 906296    0    0  8968 16384 1351  876  1  9  0 90
>  0  4   2704   5364   1404 905620    0    0  5252 26112 1339  835  1 14  0 85
>  0  4   2704   5600   1432 905036    0    0  1468 15876 1312  741  2  8  0 90
>  1  4   2704   5556   1424 904704    0    0  1664 26112 1243  714  1 10  0 89
>  0  4   2704   5492   1428 904100    0    0  1412 31232 1253  760  1 15  0 84
>  0  4   2704   5568   1432 903456    0    0  1668 29696 1253  703  1 14  0 85
>  1  4   2704   5620   1408 902980    0    0  1280 28672 1248  732  0 14  0 86
>  0  4   2704   5236   1404 902888    0    0  2180 28704 1252  705  1 11  0 88
>  0  4   2704   5632   1388 902180    0    0  1536 28160 1251  731  1 11  0 88
>  0  3   2704   5120   1356 905968    0    0   384 57896 1257  751  1 14  0 85
Try increasing slice_sync and idle, just for fun.
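[Editor's note: the slice tunables discussed here are exposed per-queue under sysfs while this CFQ variant is the active scheduler. A configuration sketch, with the device name (sda) and the values as placeholders only, not values recommended in the thread:]

    # Per-device CFQ tunables (directory exists only when CFQ is active
    # for that queue; sda and the values below are hypothetical)
    d=/sys/block/sda/queue/iosched
    echo 100 > "$d/slice_sync"    # time slice for sync (reader) queues
    echo 40  > "$d/slice_async"   # time slice for async (writer) queues
    cat "$d/slice_sync" "$d/slice_async"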
> What I don't like about the time sliced cfq (first version as well) is
> that I don't get good sustained rate anymore if I have only one writer
> and nothing else. IIRC with plain cfq I at least got near to maximum
> throughput (40-50mb/sec) now it oscillates much more. I have to recheck
> with plain cfq though. It might be ext3 related...
>
>  0  2   2684   7016   9384 900664    0    0     0 59128 1217  576  1  7  0 92
>  1  1   2684   5160   9368 898660    0    0     0 12300 1239 4861  1 60  0 39
>  0  3   2684   5532   9364 896360    0    0     0 18684 1246 1723  1 48  0 51
>  0  3   2684   5596   9364 896616    0    0     0 24576 1246  686  1
That's a bug, I've noticed that too. Sustained write rate for a single
thread is somewhat lower than it should be, it's on my todo to
investigate.
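[Editor's note: the single-writer sustained-rate case above can be reproduced with a minimal sketch along these lines; the 64 MiB size and temp-file location are arbitrary choices, not from the thread. Run `vmstat 1` in another terminal and watch the bo column for the oscillation being described.]

```shell
# Single sequential writer, nothing else on the disk.
f=$(mktemp) || exit 1
start=$(date +%s)
# Write 64 MiB and fsync at the end so the timing covers real disk I/O,
# not just dirtying the page cache.
dd if=/dev/zero of="$f" bs=1M count=64 conv=fsync 2>/dev/null
end=$(date +%s)
elapsed=$((end - start))
[ "$elapsed" -lt 1 ] && elapsed=1   # avoid divide-by-zero on fast disks
echo "wrote 64 MiB in ${elapsed}s (~$((64 / elapsed)) MB/s)"
rm -f "$f"
```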
--
Jens Axboe
Thread overview: 66+ messages
2004-12-02 13:04 Time sliced CFQ io scheduler Jens Axboe
2004-12-02 13:48 ` Jens Axboe
2004-12-02 19:48 ` Andrew Morton
2004-12-02 19:52 ` Jens Axboe
2004-12-02 20:19 ` Andrew Morton
2004-12-02 20:19 ` Jens Axboe
2004-12-02 20:34 ` Andrew Morton
2004-12-02 20:37 ` Jens Axboe
2004-12-07 23:11 ` Nick Piggin
2004-12-02 22:18 ` Prakash K. Cheemplavam
2004-12-03 7:01 ` Jens Axboe
2004-12-03 9:12 ` Prakash K. Cheemplavam
2004-12-03 9:18 ` Jens Axboe
2004-12-03 9:35 ` Prakash K. Cheemplavam
2004-12-03 9:43 ` Jens Axboe
2004-12-03 9:26 ` Andrew Morton
2004-12-03 9:34 ` Prakash K. Cheemplavam
2004-12-03 9:39 ` Jens Axboe
2004-12-03 9:54 ` Prakash K. Cheemplavam
[not found] ` <41B03722.5090001@gmx.de>
2004-12-03 10:31 ` Jens Axboe
2004-12-03 10:38 ` Jens Axboe
2004-12-03 10:45 ` Prakash K. Cheemplavam
2004-12-03 10:48 ` Jens Axboe
2004-12-03 11:27 ` Prakash K. Cheemplavam
2004-12-03 11:29 ` Jens Axboe [this message]
2004-12-03 11:52 ` Prakash K. Cheemplavam
2004-12-08 0:37 ` Andrea Arcangeli
2004-12-08 0:54 ` Nick Piggin
2004-12-08 1:37 ` Andrea Arcangeli
2004-12-08 1:47 ` Nick Piggin
2004-12-08 2:09 ` Andrea Arcangeli
2004-12-08 2:11 ` Andrew Morton
2004-12-08 2:22 ` Andrea Arcangeli
2004-12-08 6:52 ` Jens Axboe
2004-12-08 2:00 ` Andrew Morton
2004-12-08 2:08 ` Andrew Morton
2004-12-08 6:55 ` Jens Axboe
2004-12-08 2:20 ` Andrea Arcangeli
2004-12-08 2:25 ` Andrew Morton
2004-12-08 2:33 ` Andrea Arcangeli
2004-12-08 2:33 ` Nick Piggin
2004-12-08 2:51 ` Andrea Arcangeli
2004-12-08 3:02 ` Nick Piggin
2004-12-08 6:58 ` Jens Axboe
2004-12-08 7:14 ` Nick Piggin
2004-12-08 7:20 ` Jens Axboe
2004-12-08 7:29 ` Nick Piggin
2004-12-08 7:32 ` Jens Axboe
2004-12-08 7:30 ` Andrew Morton
2004-12-08 7:36 ` Jens Axboe
2004-12-08 13:48 ` Jens Axboe
2004-12-08 6:55 ` Jens Axboe
2004-12-08 7:08 ` Nick Piggin
2004-12-08 7:11 ` Jens Axboe
2004-12-08 7:19 ` Nick Piggin
2004-12-08 7:26 ` Jens Axboe
2004-12-08 9:35 ` Jens Axboe
2004-12-08 10:08 ` Jens Axboe
2004-12-08 12:47 ` Jens Axboe
2004-12-08 10:52 ` Helge Hafting
2004-12-08 10:49 ` Jens Axboe
2004-12-08 6:49 ` Jens Axboe
2004-12-02 14:28 ` Giuliano Pochini
2004-12-02 14:41 ` Jens Axboe
2004-12-04 13:05 ` Giuliano Pochini
-- strict thread matches above, loose matches on Subject: below --
2004-12-03 20:52 Chuck Ebbert