From: Ali Gholami Rudi <aligrudi@gmail.com>
To: Yu Kuai <yukuai1@huaweicloud.com>
Cc: Xiao Ni <xni@redhat.com>,
linux-raid@vger.kernel.org, song@kernel.org,
"yukuai (C)" <yukuai3@huawei.com>
Subject: Re: Unacceptably Poor RAID1 Performance with Many CPU Cores
Date: Fri, 16 Jun 2023 11:01:34 +0330 [thread overview]
Message-ID: <20231606110134@laper.mirepesht> (raw)
In-Reply-To: <20231606091224@laper.mirepesht>
Ali Gholami Rudi <aligrudi@gmail.com> wrote:
> Xiao Ni <xni@redhat.com> wrote:
> > On Thu, Jun 15, 2023 at 10:06 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> > > This looks familiar... Perhaps can you try to test with raid10 with
> > > latest mainline kernel? I used to optimize spin_lock for raid10, and I
> > > don't do this for raid1 yet... I can try to do the same thing for raid1
> > > if it's valuable.
>
> I do get improvements with raid10:
>
> Without RAID (on /dev/ram0)
> READ: IOPS=15.8M BW=60.3GiB/s
> WRITE: IOPS= 6.8M BW=27.7GiB/s
>
> RAID1 (on /dev/md/test)
> READ: IOPS=518K BW=2028MiB/s
> WRITE: IOPS=222K BW= 912MiB/s
>
> RAID10 (on /dev/md/test)
> READ: IOPS=2033k BW=8329MB/s
> WRITE: IOPS= 871k BW=3569MB/s
>
> raid10 is about four times faster than raid1.
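
For reference, the workload was an fio libaio direct-I/O job along these lines (the iodepth/numjobs/runtime values below are placeholders, not necessarily the exact ones used):

```ini
; illustrative fio job - depths, job counts and runtime are placeholders
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=64          ; placeholder queue depth
numjobs=32          ; placeholder: roughly one job per CPU core
runtime=30
time_based=1
group_reporting=1

[randread]
rw=randread
filename=/dev/md/test

[randwrite]
rw=randwrite
filename=/dev/md/test
stonewall
```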
And this is perf's output for raid10:
+ 97.33% 0.04% fio [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
+ 96.96% 0.02% fio [kernel.kallsyms] [k] do_syscall_64
+ 94.43% 0.03% fio [kernel.kallsyms] [k] __x64_sys_io_submit
- 93.71% 0.04% fio [kernel.kallsyms] [k] io_submit_one
- 93.67% io_submit_one
- 76.03% aio_write
- 75.53% blkdev_write_iter
- 68.95% blk_finish_plug
- flush_plug_callbacks
- 68.93% raid10_unplug
- 64.31% __wake_up_common_lock
- 64.17% _raw_spin_lock_irqsave
native_queued_spin_lock_slowpath
- 4.43% submit_bio_noacct_nocheck
- 4.42% __submit_bio
- 2.28% raid10_end_write_request
- 0.82% raid_end_bio_io
0.82% allow_barrier
2.09% brd_submit_bio
- 6.41% __generic_file_write_iter
- 6.08% generic_file_direct_write
- 5.64% __blkdev_direct_IO_async
- 4.72% submit_bio_noacct_nocheck
- 4.69% __submit_bio
- 4.67% md_handle_request
- 4.66% raid10_make_request
2.59% raid10_write_one_disk
- 16.14% aio_read
- 15.07% blkdev_read_iter
- 14.16% __blkdev_direct_IO_async
- 11.36% submit_bio_noacct_nocheck
- 11.17% __submit_bio
- 5.89% md_handle_request
- 5.84% raid10_make_request
+ 4.18% raid10_read_request
- 3.74% raid10_end_read_request
- 2.04% raid_end_bio_io
1.46% allow_barrier
0.55% mempool_free
1.39% brd_submit_bio
- 1.33% bio_iov_iter_get_pages
- 1.00% iov_iter_get_pages
- __iov_iter_get_pages_alloc
- 0.85% get_user_pages_fast
0.75% internal_get_user_pages_fast
0.93% bio_alloc_bioset
0.65% filemap_write_and_wait_range
+ 88.31% 0.86% fio fio [.] thread_main
+ 87.69% 0.00% fio [unknown] [k] 0xffffffffffffffff
+ 87.60% 0.00% fio fio [.] run_threads
+ 87.31% 0.00% fio fio [.] do_io (inlined)
+ 86.60% 0.32% fio libc-2.31.so [.] syscall
+ 85.87% 0.52% fio fio [.] td_io_queue
+ 85.49% 0.18% fio fio [.] fio_libaio_commit
+ 85.45% 0.14% fio fio [.] td_io_commit
+ 85.37% 0.11% fio libaio.so.1.0.1 [.] io_submit
+ 85.35% 0.00% fio fio [.] io_u_submit (inlined)
+ 76.04% 0.01% fio [kernel.kallsyms] [k] aio_write
+ 75.54% 0.01% fio [kernel.kallsyms] [k] blkdev_write_iter
+ 68.96% 0.00% fio [kernel.kallsyms] [k] blk_finish_plug
+ 68.95% 0.00% fio [kernel.kallsyms] [k] flush_plug_callbacks
+ 68.94% 0.13% fio [kernel.kallsyms] [k] raid10_unplug
+ 64.41% 0.03% fio [kernel.kallsyms] [k] _raw_spin_lock_irqsave
+ 64.32% 0.01% fio [kernel.kallsyms] [k] __wake_up_common_lock
+ 64.05% 63.85% fio [kernel.kallsyms] [k] native_queued_spin_lock_slowpath
+ 21.05% 1.51% fio [kernel.kallsyms] [k] submit_bio_noacct_nocheck
+ 20.97% 1.18% fio [kernel.kallsyms] [k] __blkdev_direct_IO_async
+ 20.29% 0.03% fio [kernel.kallsyms] [k] __submit_bio
+ 16.15% 0.02% fio [kernel.kallsyms] [k] aio_read
..
Thanks,
Ali