From: Bill Davidsen <davidsen@tmr.com>
To: Hubert Verstraete <hubskml@free.fr>
Cc: linux-raid@vger.kernel.org
Subject: Re: MDP major registration
Date: Wed, 26 Mar 2008 15:18:04 -0400
Message-ID: <47EAA16C.5080407@tmr.com>
In-Reply-To: <47EA8CF4.7080201@free.fr>
Hubert Verstraete wrote:
> Bill Davidsen wrote:
>> Luca Berra wrote:
>>> On Tue, Mar 25, 2008 at 05:57:06PM +0100, Hubert Verstraete wrote:
>>>> Neil Brown wrote:
>>>>> On Thursday March 13, hubskml@free.fr wrote:
>>>>>> Neil,
>>>>>>
>>>>>> What is the status of the major for the partitionable arrays?
>>>>>
>>>>> automatically determined at runtime.
>>>>>
>>>>>> I see that it is 254, which is in the experimental section,
>>>>>> according to the official Linux device list
>>>>>> (http://www.lanana.org/docs/device-list/).
>>>>>> Will there be an official registration?
>>>>>
>>>>> No. Is there any need?
>>>>
>>>> This question came to mind when I saw that the mkfs.xfs source code
>>>> refers to the MD major to tune its parameters on an MD device,
>>>> while it ignores MDP devices.
>>>> If there were reasons to register MD, wouldn't they apply to MDP too?
>>>
>>> I don't think so:
>>> bluca@percy ~ $ grep mdp /proc/devices
>>> 253 mdp
>>
>> Why is it important to have XFS tune its parameters for md and not
>> for mdp? I don't understand your conclusion here: is tuning not
>> needed for mdp, or so meaningless that it doesn't matter, or does the
>> XFS code read /proc/devices, or something else? I note that
>> device-mapper also has a dynamic major; what does XFS make of that?
>
> It reads from /proc/devices.
>
>> I don't know how much difference tuning makes, but if it's worth
>> doing at all, it should be done for mdp as well, I would think.
>
> Same thought. I wrote the patch for mkfs.xfs but did not publish it
> for two reasons:
> 1) MD is registered but MDP is not. Now I understand it's not a problem;
> we just need to read /proc/devices, as device-mapper does.
> 2) Tuning XFS for MDP can be achieved through the mkfs.xfs options.
> With a few lines in shell, my XFS on MDP now has the same performance
> as XFS on MD.
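Something along those lines, I assume; the device name and geometry
below are only an example, not your actual script:

  # sketch only: derive su/sw for a RAID5 mdp array from sysfs
  # (example device md_d0, one parity disk; chunk_size is in bytes)
  chunk=$(cat /sys/block/md_d0/md/chunk_size)
  ndisks=$(cat /sys/block/md_d0/md/raid_disks)
  mkfs.xfs -d su=${chunk},sw=$((ndisks - 1)) /dev/md_d0p1
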
Hopefully this patch will be picked up by vendors. On the linux-kernel
list there have recently been two changes in similar areas: the ability
of loop to support partitionable devices, and the ability of nbd to
handle partitionable devices. Are XFS users lucky enough that this
change would also improve things for those media?
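
For reference, the lookup being discussed would amount to something like
the following; this is only an illustration of the idea, not code taken
from the patch:

  # find the dynamically assigned mdp major, as device-mapper's tools do
  mdp_major=$(awk '$2 == "mdp" { print $1 }' /proc/devices)
  echo "mdp major: ${mdp_major:-not registered}"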
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck
Thread overview: 8 messages
2008-03-13 10:46 MDP major registration Hubert Verstraete
2008-03-25 5:37 ` Neil Brown
2008-03-25 16:57 ` Hubert Verstraete
2008-03-26 6:52 ` Luca Berra
2008-03-26 15:54 ` Bill Davidsen
2008-03-26 17:50 ` Hubert Verstraete
2008-03-26 18:45 ` [PATCH] XFS tuning on software RAID5 partitionable array; was: " Hubert Verstraete
2008-03-26 19:18 ` Bill Davidsen [this message]