From: David Brown <david@westcontrol.com>
Cc: NeilBrown <neilb@suse.de>, Chris Pearson <kermit4@gmail.com>,
linux-raid@vger.kernel.org
Subject: Re: with raid-6 any writes access all disks
Date: Thu, 27 Oct 2011 15:05:02 +0200 [thread overview]
Message-ID: <4EA956FE.3070504@westcontrol.com> (raw)
In-Reply-To: <4EA94D1F.8080507@zytor.com>
On 27/10/2011 14:22, H. Peter Anvin wrote:
> On 10/27/2011 11:29 AM, David Brown wrote:
>>
>> Q_new can be simplified to:
>>
>> Q_new = Q_old + 2^(i-1) . (Di_old + Di_new)
>>
>> "Multiplying" by 2 is relatively speaking quite time-consuming in
>> GF(2^8). "Multiplying" by 2^(i-1) can be done by either pre-calculating
>> a multiply table, or using a loop to repeatedly multiply by 2.
>>
>
> Multiplying by 2 is cheap. Multiplying by an arbitrary number is more
> expensive, in the absence of tricks that can be played on specific
> hardware implementations (e.g. SSSE3) as mentioned in my paper.
Of course, it all depends on the comparison: multiplying by 2 is
fairly cheap, but it is still more work than the simple "add" (XOR) used
in RAID-5. But I agree that looping to multiply by arbitrary powers of
2 is much more costly.
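For concreteness, here is what the two operations look like over the
RAID-6 field polynomial 0x11d. This is only a per-byte sketch, not the
kernel's actual block-wise implementation, but it shows why the
multiply-by-2 costs more than the plain XOR "add":

```c
#include <stdint.h>

/* GF(2^8) as used by RAID-6: polynomial x^8 + x^4 + x^3 + x^2 + 1,
 * i.e. 0x11d.  "Add" is a single XOR, exactly the RAID-5 operation. */
static uint8_t gf_add(uint8_t a, uint8_t b)
{
	return a ^ b;
}

/* Multiply by 2: shift left, then conditionally reduce by the field
 * polynomial when the top bit would overflow.  One shift, one test,
 * one XOR -- cheap, but still more than the bare XOR above. */
static uint8_t gf_mul2(uint8_t v)
{
	return (uint8_t)((v << 1) ^ ((v & 0x80) ? 0x1d : 0));
}
```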
Perhaps it makes sense to have dedicated functions for multiplying by
particular powers of two (over a full block). For small powers the loop
overhead would dominate, so those could be split off into individual
implementations; for larger powers a loop would be used; and for still
larger powers a lookup table would be faster. I don't know where the
boundaries between these lie.
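The loop and table approaches could be sketched like this (illustrative
names, not kernel symbols; where the crossover between them lies would
have to be measured):

```c
#include <stdint.h>

/* Multiply by 2 in GF(2^8) with the RAID-6 polynomial 0x11d. */
static uint8_t gf_mul2(uint8_t v)
{
	return (uint8_t)((v << 1) ^ ((v & 0x80) ? 0x1d : 0));
}

/* Small powers: multiply by 2^k by doubling k times. */
static uint8_t gf_mul_pow2_loop(uint8_t v, unsigned int k)
{
	while (k--)
		v = gf_mul2(v);
	return v;
}

/* Larger powers: build a 256-entry table for 2^k once, then it is a
 * single lookup per byte of the block. */
static void gf_build_pow2_table(uint8_t tbl[256], unsigned int k)
{
	unsigned int v;

	for (v = 0; v < 256; v++)
		tbl[v] = gf_mul_pow2_loop((uint8_t)v, k);
}
```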
>
>>
>> I don't know what compiler versions are typically used to compile the
>> kernel, but from gcc 4.4 onwards there is a "target" function attribute
>> that can be used to change the target cpu for a function. What this
>> means is that the C code can be written once, and multiple versions of
>> it can be compiled with features such as "sse", "sse4", "altivec",
>> "neon", etc. And newer versions of the compiler are getting better at
>> using these cpu features automatically. It should therefore be
>> practical to get high-speed code suited to the particular cpu you are
>> running on, without needing hand-written SSE/Altivec assembly code. That
>> would save a lot of time and effort on writing, testing and maintenance.
>>
>
> Nice in theory; doesn't work in practice in my experience.
>
Where does it go wrong? Is it that gcc's automatic vectorisation with
SSE, etc., is still too limited? I have done very little work with
x86/amd64 assembly (most of my experience is with microcontrollers
rather than "big" processors), so I haven't tried comparing gcc's SSE
output against hand-optimised code.
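The idea in question would look roughly like this: the same C loop
compiled twice from one source, with the second variant restricted to
x86 and built with SSE2 enabled so the vectoriser may use it. The
function names are illustrative, not real kernel symbols, and a real
driver would still need runtime CPU detection to pick the variant:

```c
#include <stddef.h>
#include <stdint.h>

/* Baseline version, compiled for the default target. */
static void xor_block_generic(uint8_t *dst, const uint8_t *src, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		dst[i] ^= src[i];
}

#if defined(__x86_64__) || defined(__i386__)
/* Same source, but gcc >= 4.4 compiles this one function with SSE2
 * available, so the loop is a candidate for auto-vectorisation. */
__attribute__((target("sse2")))
static void xor_block_sse2(uint8_t *dst, const uint8_t *src, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		dst[i] ^= src[i];
}
#endif
```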
mvh.,
David
Thread overview: 8+ messages
2011-10-26 21:01 with raid-6 any writes access all disks Chris Pearson
2011-10-26 21:23 ` Peter W. Morreale
2011-10-26 21:23 ` NeilBrown
2011-10-26 22:30 ` H. Peter Anvin
2011-10-27 9:29 ` David Brown
2011-10-27 12:22 ` H. Peter Anvin
2011-10-27 13:05 ` David Brown [this message]
2011-11-01 22:22 ` H. Peter Anvin