public inbox for linux-xfs@vger.kernel.org
From: Marcus Pereira <marcus@task.com.br>
To: linux-xfs@oss.sgi.com
Subject: Re: mkfs.xfs error creating large agcount on raid
Date: Sun, 26 Jun 2011 02:53:43 -0300	[thread overview]
Message-ID: <4E06C967.2060107@task.com.br> (raw)
In-Reply-To: <4E0694CC.8050003@hardwarefreak.com>

On 25-06-2011 23:09, Stan Hoeppner wrote:
> On 6/25/2011 2:49 PM, Marcus Pereira wrote:
>> I have an issue when creating xfs volume using large agcounts on raid
>> volumes.
> Yes, you do have an issue, but not the one you think.
OK, but it seems like something that should be corrected, shouldn't it?

>> /dev/md0 is a 4 disks raid 0 array:
>>
>> ----------------------------------------
>> # mkfs.xfs -V
>> mkfs.xfs version 3.1.4
>>
>> # mkfs.xfs -d agcount=1872 -b size=4096 /dev/md0 -f
> mkfs.xfs queries mdraid for its parameters and creates close to the
> optimal number of AGs, sets the stripe width, etc, all automatically.
> The default number of AGs for striped mdraid devices is 16 IIRC, and
> even that is probably a tad too high for a 4 spindle stripe.  Four or
> eight AGs would probably be better here, depending on your workload,
> which you did not state.  Please state your target workload.
The system is a heavily loaded email server.
> At 1872 you have 117 times the number of default AGs.  The two main
> downsides to doing this are:
The default agcount was 32 on this system.
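One way to see what mkfs.xfs would choose on its own is a dry run with -N, which prints the geometry without writing anything (a sketch; /dev/md0 is the device from this thread, and the /var/mail mount point below is purely hypothetical):

```shell
# Dry run: print the geometry mkfs.xfs would use (agcount, agsize,
# stripe unit/width detected from mdraid) without formatting the device.
mkfs.xfs -N /dev/md0

# For a filesystem that already exists, xfs_info shows the live
# geometry, including agcount and agsize.
xfs_info /var/mail
```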
> 1. Abysmal performance due to excessive head seeking on an epic scale
> 2. Premature drive failure due to head actuator failure
There is already insane head seeking on this server, with hundreds of
simultaneous users reading their mailboxes. In fact, I was trying to
reduce the head seeking by using a larger agcount.

> Now, the above assumes your "4 disks" are mechanical drives.  If these
> are actually SSDs then the hardware won't suffer failures, but
> performance will likely be far less than optimal.
The 4 disks are mechanical. In fact, each of them is a hardware RAID 1
pair of SCSI drives, but the OS sees each pair as a single device.
So it's a RAID 10: hardware RAID 1 underneath software RAID 0.

> Why are you attempting to create an insane number of allocation groups?
>   What benefit do you expect to gain from doing so?
>
> Regardless of your answer, the correct answer is that such high AG
> counts only have downsides, and zero upside.
It is still a test to find an optimal agcount; there are several of
these servers, and each of them would run with a different agcount. I
was trying to go even larger, something like 20000 to 30000 AGs. :-)
The goal is to keep few (or even just one) mailboxes per AG, so that
each mailbox access reads more sequentially and there is less random
seeking across the volume. I don't know if it would work the way I was
thinking.
I got this idea at this post and was giving it a try: 
http://www.techforce.com.br/news/linux_blog/lvm_raid_xfs_ext3_tuning_for_small_files_parallel_i_o_on_debian
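For a rough sense of scale, the per-AG size at various agcounts works out as below. The array size is never stated in this thread, so the 4 TB figure is purely an assumption for illustration:

```shell
# Hypothetical: assume ~4 TB total across the 4 spindles (not stated
# in the thread; substitute your real device size).
dev_bytes=$((4 * 1000 * 1000 * 1000 * 1000))

for agcount in 32 1872 30000; do
    ag_mib=$(( dev_bytes / agcount / 1024 / 1024 ))
    echo "agcount=$agcount -> ~${ag_mib} MiB per AG"
done
```

On such an array, 32 AGs gives roughly 116 GiB each, 1872 gives about 2 GiB each, and 30000 only ~127 MiB each, with every AG carrying its own metadata structures and seek target.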

-- 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 15+ messages
2011-06-25 19:49 mkfs.xfs error creating large agcount on raid Marcus Pereira
2011-06-26  2:09 ` Stan Hoeppner
2011-06-26  5:53   ` Marcus Pereira [this message]
2011-06-26 21:26     ` Stan Hoeppner
2011-06-26 23:29       ` Stan Hoeppner
2011-06-26 23:59     ` Dave Chinner
2011-06-27  3:33       ` Stan Hoeppner
2011-06-27  4:14         ` Marcus Pereira
2011-06-27  8:55           ` Stan Hoeppner
2011-06-27 13:04             ` Paul Anderson
2011-06-27 15:10               ` Eric Sandeen
2011-06-27 15:27                 ` Paul Anderson
2011-06-27 15:37                   ` Eric Sandeen
2011-06-27 20:55                   ` Stan Hoeppner
2011-06-28  1:22                   ` Dave Chinner
