From: Asdo <asdo@shiftmail.org>
To: Holger Kiehl <Holger.Kiehl@dwd.de>
Cc: linux-raid <linux-raid@vger.kernel.org>
Subject: Re: [PATCH 0/3] md fixes for 2.6.32-rc
Date: Sun, 11 Oct 2009 14:16:57 +0200
Message-ID: <4AD1CCB9.3000301@shiftmail.org>
In-Reply-To: <Pine.LNX.4.64.0910080840420.22913@diagnostix.dwd.de>
Holger Kiehl wrote:
> It's a dual-CPU mainboard with two Xeon X5460s and 32 GiB RAM.
>
Nice machine...
Earlier Intels though, with rather slow RAM access (who knows if that has
something to do with it...)
A few more observations:
>> also: stripe_cache_size current setting for your RAID
>> Raid level, number of disks, chunk size, filesystem...
> Raid level is Raid6 over 8 disks (actually 16, which are made up of 8
> HW Raid1 disks) and the chunk size is 2048k. Here is the output from /proc/mdstat
OK, the filesystem appears to be aligned then (I am assuming you are not
using LVM; otherwise please say so, because there can be other tweaks).
I don't really know this multicore implementation; however, here is one
thing you could try:
The default stripe_cache_size might be too small for a multicore
implementation, considering that there might be many more in-flight
operations than with single-core (and even with single core it is
beneficial to raise it).
What is the current value? (cat /sys/block/md4/md/stripe_cache_size)
You can try:
echo 32768 > /sys/block/md4/md/stripe_cache_size
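(Rough memory math, assuming the usual accounting of one 4 KiB page per
member device per cached stripe:

32768 stripes * 8 devices * 4 KiB = 1 GiB of RAM

so with your 32 GiB that setting should be comfortable; just keep the
formula in mind if you go higher.)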
You can also try raising /sys/block/md4/md/nr_requests by echoing a higher
value into it, but that operation hangs the kernel immediately on our
machine with the Ubuntu 2.6.31 kernel. This is actually a bug: Neil / Dan,
are you aware of it? Is it fixed in 2.6.32?
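(If you want to look at the current value before writing anything,
something like this should work; depending on the kernel the attribute may
live under queue/ rather than md/:

cat /sys/block/md4/queue/nr_requests

Given the hang above, be careful about writing to it though.)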
Secondly, it's surprising that it is slower than single-core. How busy are
your 8 cores while the MD threads are running (try htop)? If you see all 8
cores maxed out, it's one kind of implementation problem; if you see just
1-2 cores maxed out, it's another kind...
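As a quick alternative to htop, per-thread CPU usage can be listed with
something along these lines (the grep pattern is just a guess at the md
thread names):

ps -eLo pid,tid,pcpu,comm | grep -i raid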
Maybe stack traces can be used to see where the bottleneck is... I am not
a profiling expert, actually, but you might try this:

cat /proc/12345/stack

replacing 12345 with the PIDs of the md kernel threads; you can probably
see the PIDs with "ps auxH | less" or with htop (with the K option enabled
so kernel threads are shown).
Catting the stack many times should show where they are stuck for most of
the time.
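For example, a crude sampling loop along these lines (PID 12345 is just a
placeholder, and 50 samples is arbitrary):

for i in $(seq 1 50); do
    cat /proc/12345/stack
    sleep 0.2
done | sort | uniq -c | sort -rn | head -20

The functions that show up most often are a good hint at where the thread
spends its time.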
Thread overview: 13+ messages
2009-10-02 1:18 [PATCH 0/3] md fixes for 2.6.32-rc Dan Williams
2009-10-02 1:18 ` [PATCH 1/3] md/raid5: initialize conf->device_lock earlier Dan Williams
2009-10-02 1:18 ` [PATCH 2/3] Revert "md/raid456: distribute raid processing over multiple cores" Dan Williams
2009-10-02 1:18 ` [PATCH 3/3] Allow sysfs_notify_dirent to be called from interrupt context Dan Williams
2009-10-02 4:13 ` [PATCH 0/3] md fixes for 2.6.32-rc Neil Brown
2009-10-03 15:54 ` Dan Williams
2009-10-07 0:36 ` Dan Williams
2009-10-07 4:34 ` Neil Brown
2009-10-07 12:05 ` Holger Kiehl
2009-10-07 18:33 ` Asdo
2009-10-08 8:50 ` Holger Kiehl
2009-10-11 12:16 ` Asdo [this message]
2009-10-11 13:17 ` Asdo