From: Ali Gholami Rudi <aligrudi@gmail.com>
To: Xiao Ni <xni@redhat.com>
Cc: linux-raid@vger.kernel.org, song@kernel.org
Subject: Re: Unacceptably Poor RAID1 Performance with Many CPU Cores
Date: Thu, 15 Jun 2023 20:38:32 +0330
Message-ID: <20231506203832@laper.mirepesht>
In-Reply-To: <CALTww29UZ+WewVrvFDSpONqTHY=TR-Q7tobdRrhsTtXKtXvOBg@mail.gmail.com>
Xiao Ni <xni@redhat.com> wrote:
> Since it can be reproduced easily in your environment, can you try
> the latest upstream kernel? If the problem doesn't exist with the
> latest upstream kernel, you can use git bisect to find which patch
> fixed it.
I just tried the latest upstream kernel. I get almost the same results
as before with 1G ramdisks:
Without RAID (writing to /dev/ram0):
  READ:  IOPS=15.8M  BW=60.3GiB/s
  WRITE: IOPS= 6.8M  BW=27.7GiB/s

RAID1 (writing to /dev/md/test):
  READ:  IOPS=518K   BW=2028MiB/s
  WRITE: IOPS=222K   BW= 912MiB/s
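
For reference, a minimal sketch of how this kind of ramdisk test can be
set up; the fio parameters below are illustrative assumptions, not the
exact job used for the numbers above:

  # Two 1G ramdisks (brd's rd_size is in KiB), mirrored with mdadm.
  modprobe brd rd_nr=2 rd_size=1048576
  mdadm --create /dev/md/test --level=1 --raid-devices=2 \
        /dev/ram0 /dev/ram1

  # Small random I/O with many parallel jobs, to expose per-array
  # contention on hosts with many CPU cores.
  fio --name=test --filename=/dev/md/test --direct=1 --ioengine=libaio \
      --rw=randrw --bs=4k --iodepth=64 --numjobs=40 \
      --time_based --runtime=30 --group_reporting

Running the same job against /dev/ram0 directly gives the baseline for
comparison.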
> > We are actually executing hundreds of VMs on our hosts. The problem
> > is that when we use RAID1 for our enterprise NVMe disks, the
> > performance degrades very much compared to using them directly; it
> > seems we have the same bottleneck as the test described above.
>
> So those hundreds of VMs run on the raid1, and the raid1 is created
> from NVMe disks. What does /proc/mdstat show?
At the moment we do not use raid1 because of this performance issue.
Since the machines are in production, I cannot change their disk
layout. If I get the opportunity, I will set up raid1 on real
disks and report the contents of /proc/mdstat.
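
If it helps, the intended setup and report would be roughly the
following (device names are placeholders, not our actual disks):

  mdadm --create /dev/md/test --level=1 --raid-devices=2 \
        /dev/nvme0n1 /dev/nvme1n1
  cat /proc/mdstat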
Thanks,
Ali
Thread overview: 30+ messages
2023-06-15 7:54 Unacceptably Poor RAID1 Performance with Many CPU Cores Ali Gholami Rudi
2023-06-15 9:16 ` Xiao Ni
2023-06-15 17:08 ` Ali Gholami Rudi [this message]
2023-06-15 17:36 ` Ali Gholami Rudi
2023-06-16 1:53 ` Xiao Ni
2023-06-16 5:20 ` Ali Gholami Rudi
2023-06-15 14:02 ` Yu Kuai
2023-06-16 2:14 ` Xiao Ni
2023-06-16 2:34 ` Yu Kuai
2023-06-16 5:52 ` Ali Gholami Rudi
[not found] ` <20231606091224@laper.mirepesht>
2023-06-16 7:31 ` Ali Gholami Rudi
2023-06-16 7:42 ` Yu Kuai
2023-06-16 8:21 ` Ali Gholami Rudi
2023-06-16 8:34 ` Yu Kuai
2023-06-16 8:52 ` Ali Gholami Rudi
2023-06-16 9:17 ` Yu Kuai
2023-06-16 11:51 ` Ali Gholami Rudi
2023-06-16 12:27 ` Yu Kuai
2023-06-18 20:30 ` Ali Gholami Rudi
2023-06-19 1:22 ` Yu Kuai
2023-06-19 5:19 ` Ali Gholami Rudi
2023-06-19 6:53 ` Yu Kuai
2023-06-21 8:05 ` Xiao Ni
2023-06-21 8:26 ` Yu Kuai
2023-06-21 8:55 ` Xiao Ni
2023-07-01 11:17 ` Ali Gholami Rudi
2023-07-03 12:39 ` Yu Kuai
2023-07-05 7:59 ` Ali Gholami Rudi
2023-06-21 19:34 ` Wols Lists
2023-06-23 0:52 ` Xiao Ni