From: Xiao Ni <xni@redhat.com>
To: Ali Gholami Rudi <aligrudi@gmail.com>
Cc: linux-raid@vger.kernel.org, song@kernel.org
Subject: Re: Unacceptably Poor RAID1 Performance with Many CPU Cores
Date: Fri, 16 Jun 2023 09:53:41 +0800 [thread overview]
Message-ID: <CALTww2-HamETu5UppBiz079PZUP+rDRtQkaRA+03=s3wSQGRKA@mail.gmail.com> (raw)
In-Reply-To: <20231506210600@laper.mirepesht>
On Fri, Jun 16, 2023 at 1:38 AM Ali Gholami Rudi <aligrudi@gmail.com> wrote:
>
>
> Ali Gholami Rudi <aligrudi@gmail.com> wrote:
> > Xiao Ni <xni@redhat.com> wrote:
> > > Because it can be reproduced easily in your environment. Can you try
> > > with the latest upstream kernel? If the problem doesn't exist with
> > > the latest upstream kernel, you can use git bisect to find which
> > > patch fixed it.
> >
> > I just tried the upstream. I get almost the same result with 1G ramdisks.
> >
> > Without RAID (writing to /dev/ram0)
> > READ: IOPS=15.8M BW=60.3GiB/s
> > WRITE: IOPS= 6.8M BW=27.7GiB/s
> >
> > RAID1 (writing to /dev/md/test)
> > READ: IOPS=518K BW=2028MiB/s
> > WRITE: IOPS=222K BW= 912MiB/s
Hi Ali

I can reproduce this with the upstream kernel too.

RAID1:
READ: bw=3699MiB/s (3879MB/s)
WRITE: bw=1586MiB/s (1663MB/s)

ram disk:
READ: bw=5720MiB/s (5997MB/s)
WRITE: bw=2451MiB/s (2570MB/s)

So there is a performance problem here too, but not as large as in your
results: yours show a huge gap. I'm not sure of the reason. Any thoughts?
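For anyone wanting to reproduce these numbers, a minimal setup along these lines should work (a sketch only: the ramdisk size, device names, and fio job parameters are assumptions, not the exact ones used in this thread):

```shell
# Load the ramdisk driver with two 1G ramdisks (size is an assumption).
modprobe brd rd_nr=2 rd_size=1048576

# Build a two-leg RAID1 array named "test" from the ramdisks.
mdadm --create /dev/md/test --level=1 --raid-devices=2 \
      --assume-clean /dev/ram0 /dev/ram1

# Mixed random read/write load with many jobs, to expose the
# multi-core contention (job count and iodepth are guesses).
fio --name=raid1-test --filename=/dev/md/test --direct=1 \
    --ioengine=libaio --rw=randrw --bs=4k --iodepth=32 \
    --numjobs=32 --runtime=30 --time_based --group_reporting
```

Running the same fio job against /dev/ram0 directly gives the no-RAID baseline quoted above.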
>
> And this is perf's output:
I'm not familiar with perf. What command did you use, so that I can see
the same output?

Regards
Xiao
>
> + 98.73% 0.01% fio [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
> + 98.63% 0.01% fio [kernel.kallsyms] [k] do_syscall_64
> + 97.28% 0.01% fio [kernel.kallsyms] [k] __x64_sys_io_submit
> - 97.09% 0.01% fio [kernel.kallsyms] [k] io_submit_one
> - 97.08% io_submit_one
> - 53.58% aio_write
> - 53.42% blkdev_write_iter
> - 35.28% blk_finish_plug
> - flush_plug_callbacks
> - 35.27% raid1_unplug
> - flush_bio_list
> - 17.88% submit_bio_noacct_nocheck
> - 17.88% __submit_bio
> - 17.61% raid1_end_write_request
> - 17.47% raid_end_bio_io
> - 17.41% __wake_up_common_lock
> - 17.38% _raw_spin_lock_irqsave
> native_queued_spin_lock_slowpath
> - 17.35% __wake_up_common_lock
> - 17.31% _raw_spin_lock_irqsave
> native_queued_spin_lock_slowpath
> + 18.07% __generic_file_write_iter
> - 43.00% aio_read
> - 42.64% blkdev_read_iter
> - 42.37% __blkdev_direct_IO_async
> - 41.40% submit_bio_noacct_nocheck
> - 41.34% __submit_bio
> - 40.68% raid1_end_read_request
> - 40.55% raid_end_bio_io
> - 40.35% __wake_up_common_lock
> - 40.28% _raw_spin_lock_irqsave
> native_queued_spin_lock_slowpath
> + 95.19% 0.32% fio fio [.] thread_main
> + 95.08% 0.00% fio [unknown] [.] 0xffffffffffffffff
> + 95.03% 0.00% fio fio [.] run_threads
> + 94.77% 0.00% fio fio [.] do_io (inlined)
> + 94.65% 0.16% fio fio [.] td_io_queue
> + 94.65% 0.11% fio libc-2.31.so [.] syscall
> + 94.54% 0.07% fio fio [.] fio_libaio_commit
> + 94.53% 0.05% fio fio [.] td_io_commit
> + 94.50% 0.00% fio fio [.] io_u_submit (inlined)
> + 94.47% 0.04% fio libaio.so.1.0.1 [.] io_submit
> + 92.48% 0.02% fio [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> + 92.48% 0.00% fio [kernel.kallsyms] [k] __wake_up_common_lock
> + 92.46% 92.32% fio [kernel.kallsyms] [k] native_queued_spin_lock_slowpath
> + 76.85% 0.03% fio [kernel.kallsyms] [k] submit_bio_noacct_nocheck
> + 76.76% 0.00% fio [kernel.kallsyms] [k] __submit_bio
> + 60.25% 0.06% fio [kernel.kallsyms] [k] __blkdev_direct_IO_async
> + 58.12% 0.11% fio [kernel.kallsyms] [k] raid_end_bio_io
> ..
>
> Thanks,
> Ali
>
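The call graph quoted above is the kind of output perf's sampling profiler produces when call-graph recording is enabled. A plausible way to capture something similar (a sketch; the exact options Ali used are not stated in the thread) is:

```shell
# Sample all CPUs with call-graph (stack) recording while the
# fio workload runs; 30s is an arbitrary window.
perf record -a -g -- sleep 30

# Interactive report; the "+"/"-" entries expand into call
# chains like the ones quoted above.
perf report -g
```

The heavy native_queued_spin_lock_slowpath samples under __wake_up_common_lock in both the read and write paths point at contention on a single wait-queue lock in the raid1 completion path.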
Thread overview: 30+ messages
2023-06-15 7:54 Unacceptably Poor RAID1 Performance with Many CPU Cores Ali Gholami Rudi
2023-06-15 9:16 ` Xiao Ni
2023-06-15 17:08 ` Ali Gholami Rudi
2023-06-15 17:36 ` Ali Gholami Rudi
2023-06-16 1:53 ` Xiao Ni [this message]
2023-06-16 5:20 ` Ali Gholami Rudi
2023-06-15 14:02 ` Yu Kuai
2023-06-16 2:14 ` Xiao Ni
2023-06-16 2:34 ` Yu Kuai
2023-06-16 5:52 ` Ali Gholami Rudi
[not found] ` <20231606091224@laper.mirepesht>
2023-06-16 7:31 ` Ali Gholami Rudi
2023-06-16 7:42 ` Yu Kuai
2023-06-16 8:21 ` Ali Gholami Rudi
2023-06-16 8:34 ` Yu Kuai
2023-06-16 8:52 ` Ali Gholami Rudi
2023-06-16 9:17 ` Yu Kuai
2023-06-16 11:51 ` Ali Gholami Rudi
2023-06-16 12:27 ` Yu Kuai
2023-06-18 20:30 ` Ali Gholami Rudi
2023-06-19 1:22 ` Yu Kuai
2023-06-19 5:19 ` Ali Gholami Rudi
2023-06-19 6:53 ` Yu Kuai
2023-06-21 8:05 ` Xiao Ni
2023-06-21 8:26 ` Yu Kuai
2023-06-21 8:55 ` Xiao Ni
2023-07-01 11:17 ` Ali Gholami Rudi
2023-07-03 12:39 ` Yu Kuai
2023-07-05 7:59 ` Ali Gholami Rudi
2023-06-21 19:34 ` Wols Lists
2023-06-23 0:52 ` Xiao Ni