From: Vladislav Bolkhovitin <vst@vlnb.net>
To: Ronald Moesbergen <intercommit@gmail.com>
Cc: linux-kernel@vger.kernel.org, Wu Fengguang <fengguang.wu@intel.com>
Subject: Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev
Date: Fri, 03 Jul 2009 16:46:03 +0400 [thread overview]
Message-ID: <4A4DFD8B.3030104@vlnb.net> (raw)
In-Reply-To: <a0272b440907030541w1448d378n6fcabbb555d45fcc@mail.gmail.com>
Ronald Moesbergen, on 07/03/2009 04:41 PM wrote:
> 2009/7/3 Vladislav Bolkhovitin <vst@vlnb.net>:
>> Ronald Moesbergen, on 07/03/2009 01:14 PM wrote:
>>>>> OK, now I tend to agree on decreasing max_sectors_kb and increasing
>>>>> read_ahead_kb. But before actually trying to push that idea I'd like to
>>>>> - do more benchmarks
>>>>> - figure out why context readahead didn't help SCST performance
>>>>>   (previous traces show that context readahead is submitting perfect
>>>>>   large io requests, so I wonder if it's some io scheduler bug)
>>>> Because, as we found out, without your http://lkml.org/lkml/2009/5/21/319
>>>> patch read-ahead was nearly disabled, hence there was no difference in
>>>> which algorithm was used?
>>>>
>>>> Ronald, can you run the following tests, please? This time with 2 hosts,
>>>> initiator (client) and target (server) connected using 1 Gbps iSCSI. It
>>>> would be best if vanilla 2.6.29 were run on the client, but any other
>>>> kernel is fine as well; just specify which. Blockdev-perftest should be
>>>> run as before in buffered mode, i.e. with the "-a" switch.
>>>>
>>>> 1. All defaults on the client, on the server vanilla 2.6.29 with
>>>> Fengguang's http://lkml.org/lkml/2009/5/21/319 patch with all default
>>>> settings.
>>>>
>>>> 2. All defaults on the client, on the server vanilla 2.6.29 with
>>>> Fengguang's http://lkml.org/lkml/2009/5/21/319 patch with default RA
>>>> size and 64KB max_sectors_kb.
>>>>
>>>> 3. All defaults on the client, on the server vanilla 2.6.29 with
>>>> Fengguang's http://lkml.org/lkml/2009/5/21/319 patch with 2MB RA size
>>>> and default max_sectors_kb.
>>>>
>>>> 4. All defaults on the client, on the server vanilla 2.6.29 with
>>>> Fengguang's http://lkml.org/lkml/2009/5/21/319 patch with 2MB RA size
>>>> and 64KB max_sectors_kb.
>>>>
>>>> 5. All defaults on the client, on the server vanilla 2.6.29 with
>>>> Fengguang's http://lkml.org/lkml/2009/5/21/319 patch and with the
>>>> context RA patch. RA size and max_sectors_kb are default. For your
>>>> convenience I committed the backported context RA patches into the SCST
>>>> SVN repository.
>>>>
>>>> 6. All defaults on the client, on the server vanilla 2.6.29 with
>>>> Fengguang's http://lkml.org/lkml/2009/5/21/319 and context RA patches
>>>> with default RA size and 64KB max_sectors_kb.
>>>>
>>>> 7. All defaults on the client, on the server vanilla 2.6.29 with
>>>> Fengguang's http://lkml.org/lkml/2009/5/21/319 and context RA patches
>>>> with 2MB RA size and default max_sectors_kb.
>>>>
>>>> 8. All defaults on the client, on the server vanilla 2.6.29 with
>>>> Fengguang's http://lkml.org/lkml/2009/5/21/319 and context RA patches
>>>> with 2MB RA size and 64KB max_sectors_kb.
>>>>
>>>> 9. On the client default RA size and 64KB max_sectors_kb. On the server
>>>> vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 and
>>>> context RA patches with 2MB RA size and 64KB max_sectors_kb.
>>>>
>>>> 10. On the client 2MB RA size and default max_sectors_kb. On the server
>>>> vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 and
>>>> context RA patches with 2MB RA size and 64KB max_sectors_kb.
>>>>
>>>> 11. On the client 2MB RA size and 64KB max_sectors_kb. On the server
>>>> vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 and
>>>> context RA patches with 2MB RA size and 64KB max_sectors_kb.
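(For anyone reproducing these runs: the RA size and max_sectors_kb settings
above are per-device sysfs tunables. A minimal sketch, assuming the disk
exported by the target is /dev/sdb; the device name is hypothetical:)

```shell
# Hypothetical device; substitute the disk actually exported by SCST.
DEV=sdb

# 2MB readahead (read_ahead_kb is in kilobytes).
echo 2048 > /sys/block/$DEV/queue/read_ahead_kb

# Cap the maximum request size at 64KB.
echo 64 > /sys/block/$DEV/queue/max_sectors_kb

# Verify both settings.
grep . /sys/block/$DEV/queue/read_ahead_kb /sys/block/$DEV/queue/max_sectors_kb
```

(Equivalently, "blockdev --setra 4096 /dev/$DEV" sets the same 2MB RA, since
blockdev counts in 512-byte sectors.)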
>>> Ok, done. Performance is pretty bad overall :(
>>>
>>> The kernels I used:
>>> client kernel: 2.6.26-15lenny3 (debian)
>>> server kernel: 2.6.29.5 with blk_dev_run patch
>>>
>>> And I adjusted the blockdev-perftest script to drop caches on both the
>>> server (via ssh) and the client.
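(The cache-drop step between runs would look roughly like this; the hostname
is illustrative, not Ronald's actual script:)

```shell
# Drop page/dentry/inode caches on the client before each run...
sync
echo 3 > /proc/sys/vm/drop_caches

# ...and on the server, over ssh (hostname is illustrative).
ssh root@server 'sync; echo 3 > /proc/sys/vm/drop_caches'
```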
>>>
>>> The results:
>>>
>
> ... results ...
>
>> Those are on the server without io_context-2.6.29 and readahead-2.6.29
>> patches applied and with CFQ scheduler, correct?
>
> No. It was done with the readahead patch
> (http://lkml.org/lkml/2009/5/21/319) and the context RA patch
> (starting at test 5) as you requested.
OK, just wanted to be sure.
>> Then we can see how reordering of requests, caused by many I/O threads
>> submitting I/O in separate I/O contexts, badly affects performance, and no
>> RA, especially with the default 128KB RA size, can compensate for it. A
>> smaller max_sectors_kb on the client => more requests it sends at once =>
>> more reordering on the server => worse throughput. Although, Fengguang, in
>> theory context RA with a 2MB RA size should help it considerably, no?
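(The suspected reordering could be checked directly with blktrace on the
server's backing device; a sketch, with a hypothetical device name, since no
such trace appears in this thread:)

```shell
# Trace the backing device for 30s while the client runs a sequential read.
# btrace is the blktrace/blkparse wrapper; /dev/sdb is hypothetical.
btrace -w 30 /dev/sdb > trace.txt

# In blkparse's default output, field 6 is the action and field 8 the
# starting sector. Non-monotonic sector numbers among 'D' (dispatch)
# events would show requests being reordered across the I/O contexts.
awk '$6 == "D" { print $8 }' trace.txt | head
```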
>
> Wouldn't setting scst_threads to 1 also help in this case?
Let's check that another time.
>> Ronald, can you perform those tests again with both io_context-2.6.29 and
>> readahead-2.6.29 patches applied on the server, please?
>
> Ok. I only have access to the test systems during the week, so results
> might not be ready before Monday. Are there tests that we can exclude
> to speed things up?
Unfortunately, no. But this isn't urgent at all, so next week is OK.
Thanks,
Vlad
2009-05-29 5:35 [RESEND] [PATCH] readahead:add blk_run_backing_dev Hisashi Hifumi
2009-06-01 0:36 ` Andrew Morton
2009-06-01 1:04 ` Hisashi Hifumi
2009-06-05 15:15 ` Alan D. Brunelle
2009-06-06 14:36 ` KOSAKI Motohiro
2009-06-06 22:45 ` Wu Fengguang
2009-06-18 19:04 ` Andrew Morton
2009-06-20 3:55 ` Wu Fengguang
2009-06-20 12:29 ` Vladislav Bolkhovitin
2009-06-29 9:34 ` Wu Fengguang
2009-06-29 10:26 ` Ronald Moesbergen
2009-06-29 10:55 ` Vladislav Bolkhovitin
2009-06-29 12:54 ` Wu Fengguang
2009-06-29 12:58 ` Bart Van Assche
2009-06-29 13:01 ` Wu Fengguang
2009-06-29 13:04 ` Vladislav Bolkhovitin
2009-06-29 13:13 ` Wu Fengguang
2009-06-29 13:28 ` Wu Fengguang
2009-06-29 14:43 ` Ronald Moesbergen
2009-06-29 14:51 ` Wu Fengguang
2009-06-29 14:56 ` Ronald Moesbergen
2009-06-29 15:37 ` Vladislav Bolkhovitin
2009-06-29 14:00 ` Ronald Moesbergen
2009-06-29 14:21 ` Wu Fengguang
2009-06-29 15:01 ` Wu Fengguang
2009-06-29 15:37 ` Vladislav Bolkhovitin
[not found] ` <20090630010414.GB31418@localhost>
2009-06-30 10:54 ` Vladislav Bolkhovitin
2009-07-01 13:07 ` Ronald Moesbergen
2009-07-01 18:12 ` Vladislav Bolkhovitin
2009-07-03 9:14 ` Ronald Moesbergen
2009-07-03 10:56 ` Vladislav Bolkhovitin
2009-07-03 12:41 ` Ronald Moesbergen
2009-07-03 12:46 ` Vladislav Bolkhovitin [this message]
2009-07-04 15:19 ` Ronald Moesbergen
2009-07-06 11:12 ` Vladislav Bolkhovitin
2009-07-06 14:37 ` Ronald Moesbergen
2009-07-06 17:48 ` Vladislav Bolkhovitin
2009-07-07 6:49 ` Ronald Moesbergen
[not found] ` <4A5395FD.2040507@vlnb.net>
[not found] ` <a0272b440907080149j3eeeb9bat13f942520db059a8@mail.gmail.com>
2009-07-08 12:40 ` Vladislav Bolkhovitin
2009-07-10 6:32 ` Ronald Moesbergen
2009-07-10 8:43 ` Vladislav Bolkhovitin
2009-07-10 9:27 ` Vladislav Bolkhovitin
2009-07-13 12:12 ` Ronald Moesbergen
2009-07-13 12:36 ` Wu Fengguang
2009-07-13 12:47 ` Ronald Moesbergen
2009-07-13 12:52 ` Wu Fengguang
2009-07-14 18:52 ` Vladislav Bolkhovitin
2009-07-15 7:06 ` Wu Fengguang
2009-07-14 18:52 ` Vladislav Bolkhovitin
2009-07-15 6:30 ` Vladislav Bolkhovitin
2009-07-16 7:32 ` Ronald Moesbergen
2009-07-16 10:36 ` Vladislav Bolkhovitin
2009-07-16 14:54 ` Ronald Moesbergen
2009-07-16 16:03 ` Vladislav Bolkhovitin
2009-07-17 14:15 ` Ronald Moesbergen
2009-07-17 18:23 ` Vladislav Bolkhovitin
2009-07-20 7:20 ` Vladislav Bolkhovitin
2009-07-22 8:44 ` Ronald Moesbergen
2009-07-27 13:11 ` Vladislav Bolkhovitin
2009-07-28 9:51 ` Ronald Moesbergen
2009-07-28 19:07 ` Vladislav Bolkhovitin
2009-07-29 12:48 ` Ronald Moesbergen
2009-07-31 18:32 ` Vladislav Bolkhovitin
2009-08-03 9:15 ` Ronald Moesbergen
2009-08-03 9:20 ` Vladislav Bolkhovitin
2009-08-03 11:44 ` Ronald Moesbergen
2009-07-15 20:52 ` Kurt Garloff
2009-07-16 10:38 ` Vladislav Bolkhovitin
2009-06-30 10:22 ` Vladislav Bolkhovitin
2009-06-29 10:55 ` Vladislav Bolkhovitin
2009-06-29 13:00 ` Wu Fengguang
2009-09-22 20:58 ` Andrew Morton
-- strict thread matches above, loose matches on Subject: below --
2009-05-22 0:09 [RESEND][PATCH] " Hisashi Hifumi