From: Jody McIntyre <scjody@sun.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: linux-raid@vger.kernel.org, neilb@suse.de
Subject: Re: [PATCH] md: Track raid5/6 statistics
Date: Wed, 06 May 2009 16:05:03 -0400 [thread overview]
Message-ID: <20090506200502.GK25233@clouds> (raw)
In-Reply-To: <e9c3a7c20903141007h5fea439co70e4ea9ea4a10ec1@mail.gmail.com>

Hi Dan,

On Sat, Mar 14, 2009 at 10:07:49AM -0700, Dan Williams wrote:
> I am curious, can you say a bit more about the performance problems
> you solved with this data? Is there a corresponding userspace tool
> that interprets these numbers?

With the original patch there was no need for a tool: the statistics
appeared in /proc/mdstat and were fairly easy to understand. The patch I
recently submitted would need a small tool, but none has been written.
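
For illustration, the "small tool" would mostly be a /proc/mdstat parser. Here is a minimal sketch that handles only the stock array lines; the extra per-array counters added by the statistics patch would need additional patterns, since their exact format is patch-specific and not shown here:

```python
import re

def parse_mdstat(text):
    """Parse standard /proc/mdstat array lines into a dict per array.

    Only the stock fields are handled; the counters added by the
    statistics patch would need extra patterns.
    """
    arrays = {}
    for m in re.finditer(r"^(md\d+) : (\w+) (\w+) (.+)$", text,
                         re.MULTILINE):
        name, state, level, members = m.groups()
        arrays[name] = {
            "state": state,    # e.g. "active"
            "level": level,    # e.g. "raid5"
            # member entries look like "sdb1[0]"
            "devices": re.findall(r"(\w+)\[\d+\]", members),
        }
    return arrays

sample = """\
Personalities : [raid5]
md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
      1953519616 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
"""
info = parse_mdstat(sample)
```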

I've looked into how we've used this data in the past, and while our
support team often requests /proc/mdstat from customers experiencing
RAID performance problems, they rarely receive it. The original
statistics patch (which has been shipping with Lustre for about 3 years)
seems to have been useful for two things:

1. Analyzing RAID IO patterns when developing our RAID performance
improvements (which seem to be completely obsolete now thanks to the
more extensive improvements you and Neil have done, so I won't be
submitting them). Of course, this is no longer a good reason to merge
the patch: if anyone (including us) wants to do similar studies, they
can develop their own internal patch.

2. The out_of_stripes tracking is useful - we've found several cases
where stripe_cache_size was set too low and performance suffered as a
result. Monitoring stripe_cache_active during IO is difficult, so it's
far better to have a counter like this.
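
For context on why raising stripe_cache_size isn't free: the raid5/6
stripe cache holds one page per member device per cached stripe, so its
memory footprint grows with both the setting and the array width. A
rough back-of-the-envelope sketch (assuming a 4 KiB page size):

```python
PAGE_SIZE = 4096  # typical x86 page size; assumption

def stripe_cache_bytes(stripe_cache_size, nr_disks, page_size=PAGE_SIZE):
    """Approximate memory used by the raid5/6 stripe cache:
    one page per member device per cached stripe."""
    return stripe_cache_size * nr_disks * page_size

# e.g. raising stripe_cache_size from the default 256 to 4096
# on a 10-disk array costs about 160 MiB of kernel memory:
cost = stripe_cache_bytes(4096, 10)
```

This is why an out_of_stripes counter is valuable: it tells you when the
extra memory would actually buy performance, instead of tuning blind.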

So if we can solve the second problem somehow - maybe just introduce a
read-only counter under /sys/block/md*/md/out_of_stripes - the need for
the rest of the patch goes away IMO.
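
If such a counter existed (the sysfs path below is the hypothetical one
proposed above, not an existing interface), monitoring it would only
require sampling periodically and reporting the event rate:

```python
def out_of_stripes_rate(sample1, sample2, interval_s):
    """Out-of-stripes events per second between two samples of a
    monotonically increasing counter, taken interval_s seconds apart."""
    return (sample2 - sample1) / interval_s

def read_counter(path="/sys/block/md0/md/out_of_stripes"):
    """Read the proposed (hypothetical, not yet merged) sysfs counter."""
    with open(path) as f:
        return int(f.read())

# Usage, assuming the counter exists:
#   a = read_counter(); time.sleep(10); b = read_counter()
#   rate = out_of_stripes_rate(a, b, 10)
# A sustained nonzero rate suggests stripe_cache_size is too low.
```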
> [...]
> So, my original suggestion/question should have been why not extend
> blktrace to understand these incremental MD events?
Regarding blktrace specifically, it's really geared towards developers.
I played with it a bit and it looks like it might be useful to me at
some point, but I wouldn't expect a customer to use it. It would need a
much better frontend tool and a more supported kernel interface than
debugfs. But as I said, our customers aren't using our existing
/proc/mdstat information very much anyway so I don't think this problem
needs to be solved.

Cheers,
Jody
> Regards,
> Dan

Thread overview: 8+ messages
2009-03-12 20:57 [PATCH] md: Track raid5/6 statistics Jody McIntyre
2009-03-14 17:07 ` Dan Williams
2009-05-06 20:05 ` Jody McIntyre [this message]
2009-05-07 16:30 ` Dan Williams
2009-05-11 13:36 ` Jody McIntyre
2009-05-13 13:10 ` Bill Davidsen
2009-10-02 17:01 ` Jody McIntyre
2009-10-02 17:51 ` Bill Davidsen