From: "AVANTIKA R. MATHUR" <mathur@linux.vnet.ibm.com>
To: Jens Axboe <jens.axboe@oracle.com>
Cc: Avantika Mathur <mathur@us.ibm.com>, linux-kernel@vger.kernel.org
Subject: Re: cfq performance gap
Date: Tue, 12 Dec 2006 17:32:43 -0800 [thread overview]
Message-ID: <457F583B.9090109@linux.vnet.ibm.com> (raw)
In-Reply-To: <20061211140845.GL4576@kernel.dk>
Jens Axboe wrote:
> On Fri, Dec 08 2006, Avantika Mathur wrote:
>
>> On Fri, 2006-12-08 at 13:05 +0100, Jens Axboe wrote:
>>
>>> On Thu, Dec 07 2006, Avantika Mathur wrote:
>>>
>>>> Hi Jens,
>>>>
>>> (you probably noticed now, but the axboe@suse.de email is no longer
>>> valid)
>>>
>> I saw that, thanks!
>>
>>>> I've noticed a performance gap between the cfq scheduler and other io
>>>> schedulers when running the rawio benchmark.
>>>>
>>>> The benchmark workload is 16 processes running 4k random reads.
>>>>
>>>> Is this performance gap a known issue?
>>>>
>>> CFQ could be a little slower at this benchmark, but your results are
>>> much worse than I would expect. What is the queueing depth of sda? How
>>> are you invoking rawio?
>>>
>> I am running rawio with the following options:
>> rawread -p 16 -m 1 -d 1 -x -z -t 0 -s 4096
>>
>> The queue depth on sda is 4.
>>
>>> Your runtime is very low, how does it look if you allow the test to run
>>> for much longer? 30MiB/sec random read bandwidth seems very high, I'm
>>> wondering what exactly is being tested here.
>>>
>> rawio is actually performing sequential reads, but I don't believe it is
>> purely sequential with the multiple processes.
>> I am currently running the test with longer runtimes and will post
>> results once it is complete.
>> I've also attached the rawio source.
>>
>
> It's certainly the slice and idling hurting here. But at the same time,
> I don't really think your test case is very interesting. The test area
> is very small and you have 16 threads trying to read the same thing,
> optimizing for that would be silly as I don't think it has much real
> world relevance.
>
>
Could a database have similar workload to this test?
> That said, I might add some logic to detect when we can cheaply switch
> queues instead of waiting for a new request from the same queue.
> Averaging slice times over a period of time instead of 1:1 with that
> logic, should help cases like this while still being fair.
>
Thank you for looking at this issue.
I've found an IBM/SUSE bugzilla bug for the same performance gap on
rawio. There was a fix for this bug included in SLES10-RC1, do you know
why it was not added in mainline?
Thanks again,
Avantika Mathur
next prev parent reply other threads:[~2006-12-13 1:32 UTC|newest]
Thread overview: 10+ messages / expand[flat|nested] mbox.gz Atom feed top
2006-12-08 0:03 cfq performance gap Avantika Mathur
2006-12-08 12:05 ` Jens Axboe
2006-12-08 22:09 ` Avantika Mathur
2006-12-11 14:08 ` Jens Axboe
2006-12-13 1:32 ` AVANTIKA R. MATHUR [this message]
2006-12-13 5:23 ` Chen, Kenneth W
2006-12-13 9:56 ` Miquel van Smoorenburg
2006-12-13 16:20 ` Chen, Kenneth W
2006-12-13 16:41 ` Miquel van Smoorenburg
2006-12-13 6:52 ` Jens Axboe
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=457F583B.9090109@linux.vnet.ibm.com \
--to=mathur@linux.vnet.ibm.com \
--cc=jens.axboe@oracle.com \
--cc=linux-kernel@vger.kernel.org \
--cc=mathur@us.ibm.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox