linux-raid.vger.kernel.org archive mirror
From: Yu Kuai <yukuai1@huaweicloud.com>
To: Ali Gholami Rudi <aligrudi@gmail.com>, linux-raid@vger.kernel.org
Cc: song@kernel.org, "yukuai (C)" <yukuai3@huawei.com>
Subject: Re: Unacceptably Poor RAID1 Performance with Many CPU Cores
Date: Thu, 15 Jun 2023 22:02:59 +0800	[thread overview]
Message-ID: <82d2e7c4-1029-ec7b-a8c5-5a6deebfae31@huaweicloud.com> (raw)
In-Reply-To: <20231506112411@laper.mirepesht>

Hi,

On 2023/06/15 15:54, Ali Gholami Rudi wrote:
> Perf output:
> 
> Samples: 1M of event 'cycles', Event count (approx.): 1158425235997
>    Children      Self  Command  Shared Object           Symbol
> +   97.98%     0.01%  fio      fio                     [.] fio_libaio_commit
> +   97.95%     0.01%  fio      libaio.so.1.0.1         [.] io_submit
> +   97.85%     0.01%  fio      [kernel.kallsyms]       [k] __x64_sys_io_submit
> -   97.82%     0.01%  fio      [kernel.kallsyms]       [k] io_submit_one
>     - 97.81% io_submit_one
>        - 54.62% aio_write
>           - 54.60% blkdev_write_iter
>              - 36.30% blk_finish_plug
>                 - flush_plug_callbacks
>                    - 36.29% raid1_unplug
>                       - flush_bio_list
>                          - 18.44% submit_bio_noacct
>                             - 18.40% brd_submit_bio
>                                - 18.13% raid1_end_write_request
>                                   - 17.94% raid_end_bio_io
>                                      - 17.82% __wake_up_common_lock
>                                         + 17.79% _raw_spin_lock_irqsave
>                          - 17.79% __wake_up_common_lock
>                             + 17.76% _raw_spin_lock_irqsave
>              + 18.29% __generic_file_write_iter
>        - 43.12% aio_read
>           - 43.07% blkdev_read_iter
>              - generic_file_read_iter
>                 - 43.04% blkdev_direct_IO
>                    - 42.95% submit_bio_noacct
>                       - 42.23% brd_submit_bio
>                          - 41.91% raid1_end_read_request
>                             - 41.70% raid_end_bio_io
>                                - 41.43% __wake_up_common_lock
>                                   + 41.36% _raw_spin_lock_irqsave
>                       - 0.68% md_submit_bio
>                            0.61% md_handle_request
> +   94.90%     0.00%  fio      [kernel.kallsyms]       [k] __wake_up_common_lock
> +   94.86%     0.22%  fio      [kernel.kallsyms]       [k] _raw_spin_lock_irqsave
> +   94.64%    94.64%  fio      [kernel.kallsyms]       [k] native_queued_spin_lock_slowpath
> +   79.63%     0.02%  fio      [kernel.kallsyms]       [k] submit_bio_noacct

This looks familiar... Perhaps you can try testing with raid10 on the
latest mainline kernel? I previously optimized the spin_lock usage for
raid10, but I haven't done the same for raid1 yet... I can try to do
the same thing for raid1 if it's valuable.
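
For reference, the raid10 change avoids taking the wait-queue lock in the
I/O fast path when nobody is actually sleeping on the barrier. Below is a
minimal sketch of that pattern as it might apply to raid1; the helper name
wake_up_barrier() is illustrative, not a claim about the exact mainline
patch. wq_has_sleeper() and wake_up() are the stock helpers from
<linux/wait.h>, and wait_barrier is the wait queue in struct r1conf.

#include <linux/wait.h>
#include "raid1.h"

/*
 * Sketch only: skip __wake_up_common_lock() (and its spin_lock_irqsave)
 * when no thread is waiting on the barrier, which is the common case in
 * the completion fast path.
 */
static inline void wake_up_barrier(struct r1conf *conf)
{
	/*
	 * wq_has_sleeper() contains the memory barrier that pairs with
	 * prepare_to_wait(), so the lockless check cannot miss a waiter.
	 */
	if (wq_has_sleeper(&conf->wait_barrier))
		wake_up(&conf->wait_barrier);
}

Completion paths such as raid_end_bio_io() would then call
wake_up_barrier() instead of a bare wake_up(), so the
_raw_spin_lock_irqsave seen in the profile above is only paid when there
is a real waiter.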

> 
> 
> FIO configuration file:
> 
> [global]
> name=random reads and writes
> ioengine=libaio
> direct=1
> readwrite=randrw
> rwmixread=70
> iodepth=64
> buffered=0
> #filename=/dev/ram0
> filename=/dev/dm/test
> size=1G
> runtime=30
> time_based
> randrepeat=0
> norandommap
> refill_buffers
> ramp_time=10
> bs=4k
> numjobs=400

400 jobs is too aggressive. I think the spin_lock taken in the fast path
is probably causing the problem, the same issue I ran into before with
raid10...
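
To illustrate why numjobs=400 hits this so hard: every wake_up() on a wait
queue takes the queue head's spinlock with interrupts disabled, even when
the queue is empty, so with hundreds of jobs completing 4k I/Os the
completions on all CPUs serialize on that single lock. Roughly what the
generic path does (a simplified sketch based on kernel/sched/wait.c, not
the exact source):

/*
 * Simplified sketch of the generic wake-up path: the wait-queue lock is
 * taken unconditionally, which is what shows up in the profile as
 * __wake_up_common_lock -> _raw_spin_lock_irqsave.
 */
void __wake_up(struct wait_queue_head *wq_head, unsigned int mode,
	       int nr_exclusive, void *key)
{
	unsigned long flags;

	spin_lock_irqsave(&wq_head->lock, flags);
	__wake_up_common(wq_head, mode, nr_exclusive, 0, key);
	spin_unlock_irqrestore(&wq_head->lock, flags);
}

With raid1 hitting this once per read and once per write completion, the
lock dominates the profile long before the backing devices (here, brd ram
disks) become the limit.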

Thanks,
Kuai

> group_reporting=1
> [job1]
> 
> .
> 


Thread overview: 30+ messages
2023-06-15  7:54 Unacceptably Poor RAID1 Performance with Many CPU Cores Ali Gholami Rudi
2023-06-15  9:16 ` Xiao Ni
2023-06-15 17:08   ` Ali Gholami Rudi
2023-06-15 17:36     ` Ali Gholami Rudi
2023-06-16  1:53       ` Xiao Ni
2023-06-16  5:20         ` Ali Gholami Rudi
2023-06-15 14:02 ` Yu Kuai [this message]
2023-06-16  2:14   ` Xiao Ni
2023-06-16  2:34     ` Yu Kuai
2023-06-16  5:52     ` Ali Gholami Rudi
     [not found]     ` <20231606091224@laper.mirepesht>
2023-06-16  7:31       ` Ali Gholami Rudi
2023-06-16  7:42         ` Yu Kuai
2023-06-16  8:21           ` Ali Gholami Rudi
2023-06-16  8:34             ` Yu Kuai
2023-06-16  8:52               ` Ali Gholami Rudi
2023-06-16  9:17                 ` Yu Kuai
2023-06-16 11:51                 ` Ali Gholami Rudi
2023-06-16 12:27                   ` Yu Kuai
2023-06-18 20:30                     ` Ali Gholami Rudi
2023-06-19  1:22                       ` Yu Kuai
2023-06-19  5:19                       ` Ali Gholami Rudi
2023-06-19  6:53                         ` Yu Kuai
2023-06-21  8:05                     ` Xiao Ni
2023-06-21  8:26                       ` Yu Kuai
2023-06-21  8:55                         ` Xiao Ni
2023-07-01 11:17                         ` Ali Gholami Rudi
2023-07-03 12:39                           ` Yu Kuai
2023-07-05  7:59                             ` Ali Gholami Rudi
2023-06-21 19:34                       ` Wols Lists
2023-06-23  0:52                         ` Xiao Ni
