From: Paul E Luse <paul.e.luse@linux.intel.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: Yiming Xu <teddyxym@outlook.com>,
song@kernel.org, linux-raid@vger.kernel.org,
paul.e.luse@intel.com, firnyee@gmail.com
Subject: Re: [RFC] md/raid5: optimize RAID5 performance.
Date: Tue, 16 Jan 2024 01:31:36 -0700 [thread overview]
Message-ID: <20240116013136.06d3d173@peluse-desk5> (raw)
In-Reply-To: <ZWQ63SpjIE4bc+pi@infradead.org>
On Sun, 26 Nov 2023 22:44:45 -0800
Christoph Hellwig <hch@infradead.org> wrote:
> Hi Shushu,
>
> the work certainly looks interesting!
>
> However:
>
> > Optimized by using fine-grained locks, customized data structures,
> > and scattered address space. Achieves significant improvements in
> > both throughput and latency.
>
> this is a lot of work for a single Linux patch, we usually do that
> work piece by piece instead of a complete rewrite, and for such
> significant changes the commit logs also tend to be a bit extensive.
>
> I'm also not quite sure what scattered address spaces are - I bet
> reading the paper (I plan to get to that) would explain it, but it
> also helps to explain the idea in the commit message.
>
> That's my high level nitpicking for now, I'll try to read the paper
> and the patch in detail and come back later.
>
>
Hi Everyone,
I went ahead and ran a series of performance tests on this patch to
help the community understand its value. Here's a summary of what I have
completed, and I'm happy to run more to keep the patch moving.
I have not yet reviewed the code, as I wanted to make sure it provided
good benefit first, and it certainly does. I will be reviewing shortly.
Here is a summary of my tests:
* Kioxia CM7 drives
https://americas.kioxia.com/content/dam/kioxia/shared/business/ssd/enterprise-ssd/asset/productbrief/eSSD-CM7-V-product-brief.pdf
* Dual Socket Xeon 8368 2.4GHz 256G RAM
* Results are the average of just two 60-second runs per data point; if
interest continues I can re-run to eliminate any potential anomalies
* I used 8 fio jobs per disk and 2 group_thread_cnt per disk, so when
reading the graph, for example, 8DR5_patch_64j16gtc means an 8-disk
RAID5 run against the patch with 64 fio jobs and group_thread_cnt set
to 16. 'base' in the name is the md-next branch as of yesterday.
* Sample fio command: fio --filename=/dev/md0 --direct=1
--output=/root/remote/8DR5_patch_64j16gtc_1/randrw_131072_1.json
--rw=randrw --bs=131072 --ioengine=libaio --ramp_time=3 --runtime=60
--iodepth=1 --numjobs=64 --time_based --group_reporting
--name=131072_1_randrw --output-format=json --numa_cpu_nodes=0
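For readers decoding the graph labels, the naming convention above boils
down to simple per-disk arithmetic. The snippet below is only an
illustrative helper (not part of the test scripts) that derives a label
like 8DR5_patch_64j16gtc from the disk count:

```shell
# Derive a run label from the convention described above:
# total fio jobs  = disks * 8  (8 fio jobs per disk)
# group_thread_cnt = disks * 2 (2 per disk)
disks=8
variant=patch          # or 'base' for the md-next baseline
jobs=$((disks * 8))
gtc=$((disks * 2))
echo "${disks}DR5_${variant}_${jobs}j${gtc}gtc"   # → 8DR5_patch_64j16gtc
```

So a 12DR5 run under the same convention would carry 96 jobs and a
group_thread_cnt of 24.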
Results: https://photos.app.goo.gl/Cip1rU3spbD8nvG28
-Paul
2023-11-26 8:09 [RFC] md/raid5: optimize RAID5 performance Yiming Xu
2023-11-27 6:44 ` Christoph Hellwig
2023-11-27 12:21 ` Paul E Luse
2023-11-27 23:21 ` Song Liu
2024-01-16 8:31 ` Paul E Luse [this message]
2024-01-17 16:34 ` Paul E Luse