From: Christoph Hellwig <hch@lst.de>
To: Ard Biesheuvel <ardb@kernel.org>
Cc: Demian Shulhan <demyansh@gmail.com>,
Mark Rutland <mark.rutland@arm.com>,
Christoph Hellwig <hch@lst.de>, Song Liu <song@kernel.org>,
Yu Kuai <yukuai@fnnas.com>, Will Deacon <will@kernel.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Mark Brown <broonie@kernel.org>,
linux-arm-kernel@lists.infradead.org, robin.murphy@arm.com,
Li Nan <linan122@huawei.com>,
linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] raid6: arm64: add SVE optimized implementation for syndrome generation
Date: Tue, 31 Mar 2026 08:36:59 +0200 [thread overview]
Message-ID: <20260331063659.GA2061@lst.de> (raw)
In-Reply-To: <9a12e043-8200-4650-bfe2-cbece57a4f87@app.fastmail.com>
On Mon, Mar 30, 2026 at 06:39:49PM +0200, Ard Biesheuvel wrote:
> I think the results are impressive, but I'd like to better understand
> its implications on a real-world scenario. Is this code only a
> bottleneck when rebuilding an array?
The syndrome generation is run every time you write data to a RAID6
array, and if you do partial stripe writes it (or rather the XOR
variant) is run twice. So this is the most performance critical
path for writing to RAID6.
Rebuild usually runs totally different code, but can end up here as well
when both parity disks are lost.
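(For readers outside the RAID code: the syndrome in question is the pair P = D0 ^ D1 ^ ... ^ Dn-1 and Q = sum over i of g^i * Di in GF(2^8), which the kernel's generic int implementation evaluates Horner-style from the highest-numbered data disk down. A minimal portable C sketch of that per-byte recurrence — the NEON/SVE variants vectorize exactly this; names here are illustrative, not the kernel's:)

```c
#include <stddef.h>
#include <stdint.h>

/* Multiply a GF(2^8) element by the generator 2; reduction
 * polynomial is 0x11d, matching the kernel's RAID6 field. */
static uint8_t gf_mul2(uint8_t a)
{
	return (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1d : 0));
}

/* Compute the P (plain XOR) and Q (GF(2^8)-weighted) syndromes over
 * ndisks data buffers of len bytes, Horner-style from the last data
 * disk down, as in the generic lib/raid6 implementation. */
static void gen_syndrome(int ndisks, size_t len, uint8_t **data,
			 uint8_t *p, uint8_t *q)
{
	for (size_t i = 0; i < len; i++) {
		uint8_t wp = data[ndisks - 1][i];
		uint8_t wq = wp;

		for (int d = ndisks - 2; d >= 0; d--) {
			wp ^= data[d][i];		/* accumulate P */
			wq = gf_mul2(wq) ^ data[d][i];	/* Horner step for Q */
		}
		p[i] = wp;
		q[i] = wq;
	}
}
```

The inner loop is a strict per-byte dependency chain on wq, which is why the SIMD implementations win by processing many byte lanes (and several cache lines per iteration, at higher unroll factors) in parallel.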
> > Furthermore, as Christoph suggested, I tested scalability on wider
> > arrays since the default kernel benchmark is hardcoded to 8 disks,
> > which doesn't give the unrolled SVE loop enough data to shine. On a
> > 16-disk array, svex4 hits 15.1 GB/s compared to 8.0 GB/s for neonx4.
> > On a 24-disk array, while neonx4 chokes and drops to 7.8 GB/s, svex4
> > maintains a stable 15.0 GB/s — effectively doubling the throughput.
>
> Does this mean the kernel benchmark is no longer fit for purpose? If
> it cannot distinguish between implementations that differ in performance
> by a factor of 2, I don't think we can rely on it to pick the optimal one.
It is not good, and we should either fix it or run more than one
benchmark. The current setup is not really representative of a
real-life array. It also leads to wrong selections on x86, but so far
only in which unroll level gets picked, and only by minor margins. I
plan to add this to the next version of the raid6 lib patches.