From: Stan Hoeppner <stan@hardwarefreak.com>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs@oss.sgi.com
Subject: Re: inode64 directory placement determinism
Date: Mon, 18 Aug 2014 11:16:12 -0500 [thread overview]
Message-ID: <bc34a576d2b3e8c431633574deaa37cc@localhost> (raw)
In-Reply-To: <20140818070153.GL20518@dastard>
On Mon, 18 Aug 2014 17:01:53 +1000, Dave Chinner <david@fromorbit.com>
wrote:
> On Sun, Aug 17, 2014 at 10:29:21PM -0500, Stan Hoeppner wrote:
>> Say I have a single 4TB disk in an md linear device. The md device has
>> a filesystem on it formatted with defaults. It has 4 AGs, 0-3. I have
>> created 4 directories. Each should reside in a different AG, the first
>> in AG0. Now I expand the linear device with an identical 4TB disk and
>> execute xfs_growfs. I now have 4 more AGs, 4-7. I create 4 more
>> directories.
>>
>> Will these 4 new dirs be created sequentially in AGs 4-7, or in the
>> first 4 AGs? Is this deterministic, or is there any chance involved? On the
>
> Deterministic, assuming single threaded *file-system-wide* directory
> creation. Completely unpredictable under concurrent directory
> creations. See xfs_ialloc_ag_select/xfs_ialloc_next_ag.
>
> Note that the rotor used to select the next AG is set to
> zero at mount.
>
> i.e. single threaded behaviour at agcount = 4:
>
> dir number    rotor value    destination AG
>      1             0               0
>      2             1               1
>      3             2               2
>      4             3               3
>      5             0               0
>      6             1               1
> ....
>
> So, if you do what you suggest, and grow *after* the first 4 dirs
> are created, the above is what you'll get, because the rotor wraps
> back to zero after the fourth directory create. Now, growing from
> 4 to 8 AGs after the first 4:
>
> dir number    rotor value    new inode location (AG)
>      1             0               0
>      2             1               1
>      3             2               2
>      4             3               3
> <grow to 8 AGs>
>      5             0               0
>      6             1               1
>      7             2               2
>      8             3               3
>      9             4               4
>     10             5               5
>     11             6               6
>     12             7               7
>     13             0               0
>
>> real system these 4TB drives are actually 48TB LUNs. I'm after
>> deterministic parallel bandwidth to subsequently added RAIDs after each
>> grow operation by simply writing to the proper directory.
>
> Just create new directories and use the inode number to
> determine their location. If the directory is not in the correct AG,
> remove it and create a new one, until you have directories located
> in the AGs you want.
>
> Cheers,
>
> Dave.
Thanks for the info, Dave. I was hoping it would be more straightforward.
Modifying the app for this is out of the question. They've spent 3+ years
developing against EXT4 and decided to try XFS at the last minute. The
product ships in October, so the optimizations I can suggest are limited.
--
Stan
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 8+ messages
2014-08-18 3:29 inode64 directory placement determinism Stan Hoeppner
2014-08-18 7:01 ` Dave Chinner
2014-08-18 16:16 ` Stan Hoeppner [this message]
2014-08-18 22:48 ` Dave Chinner
2014-08-19 0:02 ` Stan Hoeppner
2014-08-24 20:14 ` stan hoeppner
2014-08-25 2:15 ` Stan Hoeppner
2014-08-25 2:19 ` Dave Chinner