From: Yu Kuai <yukuai1@huaweicloud.com>
To: Ali Gholami Rudi <aligrudi@gmail.com>, Yu Kuai <yukuai1@huaweicloud.com>
Cc: Xiao Ni <xni@redhat.com>,
linux-raid@vger.kernel.org, song@kernel.org,
"yukuai (C)" <yukuai3@huawei.com>
Subject: Re: Unacceptably Poor RAID1 Performance with Many CPU Cores
Date: Fri, 16 Jun 2023 17:17:45 +0800 [thread overview]
Message-ID: <5bc8f9bd-2a56-3e80-80de-01f7af24c085@huaweicloud.com> (raw)
In-Reply-To: <20231606122233@laper.mirepesht>
Hi,
On 2023/06/16 16:52, Ali Gholami Rudi wrote:
>
> Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
>>> index 4fcfcb350d2b..52f0c24128ff 100644
>>> --- a/drivers/md/raid10.c
>>> +++ b/drivers/md/raid10.c
>>> @@ -905,7 +905,7 @@ static void flush_pending_writes(struct r10conf *conf)
>>> /* flush any pending bitmap writes to disk
>>> * before proceeding w/ I/O */
>>> md_bitmap_unplug(conf->mddev->bitmap);
>>> - wake_up(&conf->wait_barrier);
>>> + wake_up_barrier(conf);
>>>
>>> while (bio) { /* submit pending writes */
>>> struct bio *next = bio->bi_next;
>>
>> Thanks for the testing, sorry that I missed one place... Can you try to
>> change wake_up() to wake_up_barrier() in raid10_unplug() and test
>> again?
>
> OK. I replaced only the second occurrence of wake_up() in raid10_unplug().
I think it's better to change them together.
>
>>> Without the patch:
>>> READ: IOPS=2033k BW=8329MB/s
>>> WRITE: IOPS= 871k BW=3569MB/s
>>>
>>> With the patch:
>>> READ: IOPS=2027K BW=7920MiB/s
>>> WRITE: IOPS= 869K BW=3394MiB/s
>
> With the second patch:
> READ: IOPS=3642K BW=13900MiB/s
> WRITE: IOPS=1561K BW= 6097MiB/s
>
> That is impressive. Great job.
Good, thanks for testing. Can you please share the perf results as well?
I'd like to check whether there are other obvious bottlenecks.
By the way, I think raid1 can definitely benefit from the same
optimizations; I'll look into raid1.
Thanks,
Kuai
>
> I shall test it more.
>
> Thanks,
> Ali
>
> .
>