From: Ming Zhang <mingz@ele.uri.edu>
To: Chris Wedgwood <cw@f00f.org>
Cc: Peter Grandi <pg_xfs@xfs.for.sabi.co.UK>,
Linux XFS <linux-xfs@oss.sgi.com>
Subject: Re: stable xfs
Date: Thu, 20 Jul 2006 12:38:01 -0400
Message-ID: <1153413481.2768.65.camel@localhost.localdomain>
In-Reply-To: <20060720161707.GB26748@tuatara.stupidest.org>
On Thu, 2006-07-20 at 09:17 -0700, Chris Wedgwood wrote:
> On Thu, Jul 20, 2006 at 10:08:22AM -0400, Ming Zhang wrote:
>
> > we mainly handle large media files, around 20-50GB each. so the
> > number of files is not too large, but the file sizes are.
>
> xfs_repair usually deals with that fairly well in reality (much better
> than lots of small files anyhow)
Sounds cool. Yes, a large number of small files is always painful.
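As a quick sanity check on our layout, I guess I can look at how many
extents one big file uses and at the filesystem-wide fragmentation
factor. Something like this, untested, with a made-up path and device:

    # list the extents backing one large media file
    xfs_bmap -v /data/media/clip01.mxf

    # read-only fragmentation report for the whole filesystem
    xfs_db -r -c frag /dev/md0

Few big files should mean few inodes and extents to walk, which I
guess is why repair copes well.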
>
> > hope i never need to run repair, but i do need to defrag from time
> > to time.
>
> if you preallocate you can avoid that (this is what i do, i
> preallocate in the replication daemon)
I cannot control my application, so I still need to defrag from time
to time.
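If I ever get to patch the app, I guess preallocation would look
something like this with xfs_io (path and size are made up), and
xfs_fsr covers the online defrag in the meantime:

    # reserve 50GB up front so the capture lands in few extents
    # (resvsp reserves blocks without writing or zeroing them)
    xfs_io -f -c "resvsp 0 50g" /data/media/capture01.mxf

    # online defragmentation of the mounted filesystem
    xfs_fsr -v /data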
>
> > hope this does not hold true for a 15x750GB SATA raid5. ;)
>
> that's ~10TB or so, my guess is that a repair there would take some
> GBs of RAM
>
> it would be interesting to test it if you had the time
Yes, I should find out. How would I force a repair? Unplug my power cord? ;)
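Actually, xfs_repair seems to have a no-modify mode, so maybe a dry
run on the unmounted volume is enough to see the memory footprint;
the paths and device name below are just examples:

    # unmount, then walk the metadata without changing anything
    umount /data
    xfs_repair -n /dev/md0

    # in another terminal, watch its memory use
    top -p $(pidof xfs_repair)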
>
> there is a 'formula' for working out roughly how much ram is needed
> (steve lord posted it a long time ago, hopefully someone can find
> that and repost it)
>
> > say XFS can make use of parallel storage by using multiple
> > allocation groups, but XFS needs to be built over one block
> > device. so if i have 4 smaller raids, i have to use LVM to glue
> > them together before i create XFS over it, right? but then you
> > said XFS over LVM or N MD devices is not good?
>
> with recent kernels it shouldn't be a problem, the recursive nature of
> the block layer changed so you no longer blow up as badly as people
> did in the past (also, XFS tends to use less stack these days)
Sounds good.
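So concretely, I guess the glue would look something like this;
device names and the agcount value are made up for illustration:

    # concatenate four RAID sets into one logical volume with LVM2
    pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3
    vgcreate mediavg /dev/md0 /dev/md1 /dev/md2 /dev/md3
    lvcreate -l 100%FREE -n media mediavg

    # spread the allocation groups across the legs
    mkfs.xfs -d agcount=32 /dev/mediavg/media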
Thread overview: 33+ messages
2006-07-17 15:30 stable xfs Ming Zhang
2006-07-17 16:20 ` Peter Grandi
2006-07-18 22:36 ` Ming Zhang
2006-07-18 23:14 ` Peter Grandi
2006-07-19 1:20 ` Ming Zhang
2006-07-19 5:56 ` Chris Wedgwood
2006-07-19 10:53 ` Peter Grandi
2006-07-19 14:45 ` Ming Zhang
2006-07-22 17:13 ` Peter Grandi
2006-07-20 6:12 ` Chris Wedgwood
2006-07-22 17:31 ` Peter Grandi
2006-07-19 14:10 ` Ming Zhang
2006-07-19 10:24 ` Peter Grandi
2006-07-19 13:11 ` Ming Zhang
2006-07-20 6:15 ` Chris Wedgwood
2006-07-20 14:08 ` Ming Zhang
2006-07-20 16:17 ` Chris Wedgwood
2006-07-20 16:38 ` Ming Zhang [this message]
2006-07-20 19:04 ` Chris Wedgwood
2006-07-21 0:19 ` Ming Zhang
2006-07-21 3:26 ` Chris Wedgwood
2006-07-21 13:10 ` Ming Zhang
2006-07-21 16:07 ` Chris Wedgwood
2006-07-21 17:00 ` Ming Zhang
2006-07-21 18:07 ` Chris Wedgwood
2006-07-24 1:14 ` Ming Zhang
2006-07-22 18:09 ` Peter Grandi
2006-07-22 17:47 ` Peter Grandi
2006-07-22 15:37 ` Peter Grandi
2006-07-18 23:54 ` Nathan Scott
2006-07-19 1:15 ` Ming Zhang
2006-07-19 7:40 ` Martin Steigerwald
2006-07-19 14:11 ` Ming Zhang