public inbox for linux-xfs@vger.kernel.org
From: Eric Sandeen <sandeen@sandeen.net>
To: Paul Anderson <pha@umich.edu>
Cc: linux-xfs@oss.sgi.com, Marcus Pereira <marcus@task.com.br>,
	Stan Hoeppner <stan@hardwarefreak.com>
Subject: Re: mkfs.xfs error creating large agcount an raid
Date: Mon, 27 Jun 2011 10:10:06 -0500	[thread overview]
Message-ID: <4E089D4E.1060503@sandeen.net> (raw)
In-Reply-To: <BANLkTimJm5Fe1LvD1AQYZC5QCDs0gXJpFA@mail.gmail.com>

On 6/27/11 8:04 AM, Paul Anderson wrote:
> One thing this thread indicates is the need for a warning in mkfs.xfs
> - according to several developers, allocation time increases roughly
> linearly with the number of allocation groups.
> 
> It would be helpful to the end user if mkfs.xfs simply issued a warning
> when the AG count seems high, with a brief explanation of why it seems
> high.  I would allow it, but print the warning.  Even a simple check
> like agcount > 500 should suffice for "a while".
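Paul's proposed check might be sketched roughly as follows. This is a hypothetical wrapper, not an existing mkfs.xfs option, and the 500 cutoff is just the threshold he suggests:

```shell
# Hypothetical pre-flight check (not an actual mkfs.xfs feature):
# warn, but do not abort, when the requested AG count looks excessive.
warn_high_agcount() {
    agcount="$1"
    if [ "$agcount" -gt 500 ]; then
        echo "warning: agcount=$agcount is unusually high;" \
             "allocation time grows roughly linearly with AG count" >&2
    fi
}

warn_high_agcount 6375   # prints the warning to stderr
warn_high_agcount 32     # prints nothing
```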

I disagree.

There are all sorts of ways a user can shoot themselves in the foot with
unix commands.  Detecting and warning about all of them is a fool's errand.

======================================
= Warning!  mkfs.xfs detected insane =
=   option specification.  Cancel?   =
=                                    =
=      [   OK   ]     [ Cancel ]     =
======================================

-Eric

> Paul
> 
> On Mon, Jun 27, 2011 at 4:55 AM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>> On 6/26/2011 11:14 PM, Marcus Pereira wrote:
>>> Em 27-06-2011 00:33, Stan Hoeppner escreveu:
>>>>
>>>> I recommend 3 changes, one of which I previously mentioned:
>>>>
>>>> 1.  Use 8 mirror pairs instead of 4
>>>> 2.  Don't use striping.  Make an mdraid --linear device of the 8 mirrors
>>>> 3.  Format with '-d agcount=32' which will give you 4 AGs per spindle
>>>>
>>>> Test this configuration and post your results.
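The three steps quoted above would look roughly like this. This is only a sketch: all device names are placeholders, and it assumes 16 drives (sda..sdp) paired into 8 mirrors:

```shell
# Sketch of the three steps above; device names are placeholder
# assumptions, not taken from the original poster's system.

# 1. Build 8 RAID1 mirror pairs instead of 4.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# ...repeat for md1..md7 with the remaining seven pairs...

# 2. No striping: concatenate the 8 mirrors into one linear device.
mdadm --create /dev/md8 --level=linear --raid-devices=8 /dev/md[0-7]

# 3. Format with 32 allocation groups, i.e. 4 AGs per mirror pair.
mkfs.xfs -d agcount=32 /dev/md8
```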
>>>
>>> Thanks for all the advice. I will run the tests and post the results;
>>> it may take some time.
>>>
>>> About all the other messages: my system may not be a Ferrari, but it's
>>> not a Volkswagen either. I certainly don't have that many fibre channel
>>> HDs, but the server is a dual-socket Xeon with 6 cores each plus HT, so
>>> Linux sees a total of 24 cores; total RAM is 24GB. The HDs are all
>>> 15Krpm SAS and the system runs on SSD. They are dedicated to handling
>>> the maildir files, and I have several of these servers running nicely.
>>> But I don't want to drag the thread further into my system's details.
>>
>> So you do or don't have the excessive head seek problem you previously
>> mentioned?  If not then use the mkfs.xfs defaults.
>>
>>> Yes, I don't know much about XFS and allocation groups; thanks to you
>>> all for helping me a bit.
>>
>> You're welcome.  Google should turn up a decent amount of information
>> about XFS allocation groups if you're interested in further reading.
>>
>>> In the end, the reason I opened the thread is the error itself, and
>>> the developers should take some care about that.
>>
>>> OK, there may be no reason to use that large an agcount, but getting
>>> "mkfs.xfs: pwrite64 failed: No space left on device" still seems like
>>> a bug to me.
>>
>> The definition of a software bug stipulates incorrect or unexpected
>> program behavior.  Error messages aren't bugs unless the wrong error
>> message is returned for a given fault condition, or no error is returned
>> when one should be.
>>
>> Are you stipulating that the above isn't the correct error message for
>> the fault condition?  Or do you simply not understand the error message?
>>  If the latter, maybe you should simply ask what that error means before
>> saying the error message is a bug. :)
>>
>> --
>> Stan
>>
>> _______________________________________________
>> xfs mailing list
>> xfs@oss.sgi.com
>> http://oss.sgi.com/mailman/listinfo/xfs
>>
> 

Thread overview: 15+ messages
2011-06-25 19:49 mkfs.xfs error creating large agcount an raid Marcus Pereira
2011-06-26  2:09 ` Stan Hoeppner
2011-06-26  5:53   ` Marcus Pereira
2011-06-26 21:26     ` Stan Hoeppner
2011-06-26 23:29       ` Stan Hoeppner
2011-06-26 23:59     ` Dave Chinner
2011-06-27  3:33       ` Stan Hoeppner
2011-06-27  4:14         ` Marcus Pereira
2011-06-27  8:55           ` Stan Hoeppner
2011-06-27 13:04             ` Paul Anderson
2011-06-27 15:10               ` Eric Sandeen [this message]
2011-06-27 15:27                 ` Paul Anderson
2011-06-27 15:37                   ` Eric Sandeen
2011-06-27 20:55                   ` Stan Hoeppner
2011-06-28  1:22                   ` Dave Chinner
