public inbox for linux-xfs@vger.kernel.org
From: Stan Hoeppner <stan@hardwarefreak.com>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs@oss.sgi.com
Subject: Re: inode64 directory placement determinism
Date: Mon, 18 Aug 2014 19:02:04 -0500	[thread overview]
Message-ID: <ed7598d015f5a1d3576e6b03f3ef3116@localhost> (raw)
In-Reply-To: <20140818224853.GD26465@dastard>

On Tue, 19 Aug 2014 08:48:53 +1000, Dave Chinner <david@fromorbit.com>
wrote:
> On Mon, Aug 18, 2014 at 11:16:12AM -0500, Stan Hoeppner wrote:
>> On Mon, 18 Aug 2014 17:01:53 +1000, Dave Chinner <david@fromorbit.com>
>> wrote:
>> > On Sun, Aug 17, 2014 at 10:29:21PM -0500, Stan Hoeppner wrote:
>> >> Say I have a single 4TB disk in an md linear device.  The md device
>> >> has a filesystem on it formatted with defaults.  It has 4 AGs, 0-3.
>> >> I have created 4 directories.  Each should reside in a different AG,
>> >> the first in AG0.  Now I expand the linear device with an identical
>> >> 4TB disk and execute xfs_growfs.  I now have 4 more AGs, 4-7.  I
>> >> create 4 more directories.
>> >> 
>> >> Will these 4 new dirs be created sequentially in AGs 4-7, or in the
>> >> first 4 AGs?  Is this deterministic, or is there any chance involved?
>> >> On the
>> > 
>> > Deterministic, assuming single threaded *file-system-wide* directory
>> > creation. Completely unpredictable under concurrent directory
>> > creations.  See xfs_ialloc_ag_select/xfs_ialloc_next_ag.
>> > 
>> > Note that the rotor used to select the next AG is set to
>> > zero at mount.
>> > 
>> > i.e. single threaded behaviour at agcount = 4:
>> > 
>> > dir number	rotor value	  destination AG
>> >  1		  0			0
>> >  2		  1			1
>> >  3		  2			2
>> >  4		  3			3
>> >  5		  0			0
>> >  6		  1			1
>> > ....
>> > 
>> > So, if you do what you suggest, and grow *after* the first 4 dirs
>> > are created, the above is what you'll get because the rotor goes
>> > back to zero on the fourth directory create. Now, with changing from
>> > 4 to 8 AGs after the first 4:
>> > 
>> > dir number	rotor value	  new inode location (AG)
>> >  1		  0			0
>> >  2		  1			1
>> >  3		  2			2
>> >  4		  3			3
>> > <grow to 8 AGs>
>> >  5		  0			0
>> >  6		  1			1
>> >  7		  2			2
>> >  8		  3			3
>> >  9		  4			4
>> >  10		  5			5
>> >  11		  6			6
>> >  12		  7			7
>> >  13		  0			0
>> > 
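The tables above fall out of a trivial round-robin; a toy model of it (illustrative only, the real logic lives in xfs_ialloc_ag_select()/xfs_ialloc_next_ag()):

```shell
#!/bin/sh
# Toy model of the inode64 directory rotor: a per-mount counter that
# round-robins over agcount and starts at zero on every mount.
rotor=0

next_ag() {
    # $1 = current agcount; prints the AG the new directory lands in
    echo "$rotor"
    rotor=$(( (rotor + 1) % $1 ))
}

# Four directory creates at agcount=4 ...
for i in 1 2 3 4; do next_ag 4; done             # 0 1 2 3
# ... then grow to 8 AGs: the rotor walks 0-7 before wrapping.
for i in 1 2 3 4 5 6 7 8 9; do next_ag 8; done   # 0 1 2 3 4 5 6 7 0
```

This also makes the caveat above concrete: the counter is filesystem-wide, so concurrent directory creates interleave on it unpredictably.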
>> >> real system these 4TB drives are actually 48TB LUNs.  I'm after
>> >> deterministic parallel bandwidth to subsequently added RAIDs after
>> >> each grow operation by simply writing to the proper directory.
>> > 
>> > Just create new directories and use the inode number to
>> > determine their location. If the directory is not in the correct AG,
>> > remove it and create a new one, until you have directories located
>> > in the AGs you want.
>> > 
>> > Cheers,
>> > 
>> > Dave.
>> 
>> 
>> Thanks for the info Dave.  Was hoping it would be more straightforward.
>> 
>> Modifying the app for this is out of the question.  They've spent 3+
>> years developing with EXT4 and decided to try XFS at the last minute.
>> Product is to ship in October, so optimizations I can suggest are
>> limited.
> 
> Perhaps you could actually tell us what the requirement for
> layout/separation is, and how they are achieving it with ext4. We
> really need a more "directed" allocation ability, but it's not clear
> exactly what requirements need to drive that.
> 
> Cheers,
> 
> Dave.

The test harness app writes to thousands of preallocated files in hundreds
of directories.  The target is ~250MB/s at the application per array, more
if achievable, writing a combination of fast and slow streams from up to
~1000 threads, to different files, circularly.  The mix of stream rates and
the files they write will depend on the end customers' needs.  Currently
they have 1 FS per array with 3 top level dirs each w/3 subdirs, 2 of these
with ~100 subdirs each, and hundreds of files in each of those.  Simply doing
a concat, growing and just running with it might work fine.  The concern is
ending up with too many fast stream writers hitting AGs on a single array
which won't be able to keep up.  Currently they simply duplicate the layout
on each new filesystem they mount.  The application duplicates the same
layout on each filesystem and does its own load balancing among the group
of them.

Ideally they'd obviously like to simply add files to existing directories
after growing, but that won't achieve scalable bandwidth.
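For the approach Dave suggests above (create a directory, check which AG its inode landed in, remove and retry if it's wrong), the AG can be derived from the inode number: with inode64 the number is (agno << (agblklog + inopblog)) | agino, so a right shift recovers the AG.  agblklog and inopblog are shown by `xfs_db -r -c sb -c print <dev>`; the 22/4 below are made-up example values and /mnt/big is a hypothetical mount point:

```shell
#!/bin/sh
# Derive the AG an inode lives in from its inode number.
# $1 = inode number, $2 = sb agblklog, $3 = sb inopblog
ino_to_ag() {
    echo $(( $1 >> ($2 + $3) ))
}

# Sanity check with a synthetic inode number for AG 5
# (example geometry: agblklog=22, inopblog=4):
ino_to_ag $(( (5 << 26) | 1 )) 22 4          # prints 5

# Usage on a mounted filesystem (hypothetical paths):
#   mkdir /mnt/big/newdir
#   ino_to_ag "$(stat -c %i /mnt/big/newdir)" 22 4
```

If the printed AG isn't the one wanted, rmdir and mkdir again until it is.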

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 8+ messages
2014-08-18  3:29 inode64 directory placement determinism Stan Hoeppner
2014-08-18  7:01 ` Dave Chinner
2014-08-18 16:16   ` Stan Hoeppner
2014-08-18 22:48     ` Dave Chinner
2014-08-19  0:02       ` Stan Hoeppner [this message]
2014-08-24 20:14         ` stan hoeppner
2014-08-25  2:15           ` Stan Hoeppner
2014-08-25  2:19           ` Dave Chinner
