From: Michael Monnerie <michael.monnerie@is.it-management.at>
To: xfs@oss.sgi.com, stan@hardwarefreak.com,
Peter Grandi <pg_xf2@xf2.for.sabi.co.uk>
Subject: Re: howto keep xfs directory searches fast for a long time
Date: Mon, 13 Aug 2012 18:44:34 +0200 [thread overview]
Message-ID: <2561870.uQFC4XLYQm@saturn> (raw)
In-Reply-To: <5028057F.3090007@hardwarefreak.com>
First, thanks to both of you.
On Sunday, 12 August 2012, 14:35:27, Stan Hoeppner wrote:
> So the problem here is max vmdk size? Just use an RDM.
That would have been an option before someone created the VMDK space
over the full RAID ;-)
> Peter Grandi:
> Ah the usual goal of a single large storage pool for cheap.
I don't need O_PONIES or 5,000 IOPS. I've just been trying to figure out
if there's anything I can do to "optimize" a given VM and storage space
via xfs formatting. This I guess is what 95% of admins worldwide have to
do these days: Generic, virtualized environments with a given storage,
and customer wants X. Where X is sometimes a DB, sometimes a file store,
sometimes archive store. And customer expects endless IOPS, sub-zero
delay, and endless disk space. I tend to destroy their ponies quickly,
but that doesn't mean you can't try to keep systems quick.
That particular VM is not important, but I want to keep user
satisfaction at a decent level. About 10 times a week someone connects
to that machine, searches for a file and downloads it over the Internet.
So download/read speed hardly matters, but access/find times do.
I guess the best I can do is run du/find every morning to pre-fill the
inode caches on that VM, so when someone connects the search runs fast.
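A minimal sketch of such a cache-warming job (the path and the cron schedule below are just examples for this setup). Walking the whole tree stats every file, which pulls the dentries and inodes into the VFS caches, so a later interactive search hits RAM instead of disk:

```shell
# Warm the dentry/inode caches by stat()ing everything once.
# -xdev keeps find on this one filesystem; -printf "" discards output.
find /disks/big1 -xdev -printf "" 2>/dev/null

# Example cron entry to run it every morning at 06:00:
# 0 6 * * * root find /disks/big1 -xdev -printf "" 2>/dev/null
```

Whether the caches survive until a user connects then depends on memory pressure on the VM.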
The current VM shows this:
# df -i /disks/big1/
Filesystem               Inodes   IUsed      IFree IUse% Mounted on
/dev/mapper/sp1--sha 1717934464 1255882 1716678582    1% /disks/big1
# df /disks/big1/
Filesystem            1K-blocks       Used  Available Use% Mounted on
/dev/mapper/sp1--sha 8587585536 6004421384 2583164152  70% /disks/big1
So ~6 TB of data in 1.3 million inodes. The VM caches that easily; it
seems that's the only real thing to optimize for.
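A back-of-envelope check that the metadata really fits in RAM (the per-inode figure here is a rough assumption, not a measured XFS number):

```shell
# IUsed from the df -i output above; assume ~1 KiB of RAM per
# cached in-memory inode (rough guess, includes dentry overhead).
inodes=1255882
echo "$(( inodes * 1024 / 1024 / 1024 )) MiB"
```

That prints "1226 MiB", so a VM with a few GiB of RAM can keep all of it cached.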
http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E
CFQ seems to be a bad fit, but the FAQ documents no way around it. I've
edited that page and added a short description of vm.vfs_cache_pressure.
Could someone please recheck it?
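For reference, a sketch of the vm.vfs_cache_pressure tuning I described there; the value 50 is only an illustration, not a recommendation. Lower values make the kernel hold on to dentry/inode caches longer (the default is 100; 0 means these caches are never reclaimed, which can exhaust memory):

```shell
sysctl vm.vfs_cache_pressure         # show the current value
sysctl -w vm.vfs_cache_pressure=50   # keep metadata cached longer (needs root)
# To persist across reboots, add "vm.vfs_cache_pressure = 50"
# to /etc/sysctl.conf (or a file under /etc/sysctl.d/).
```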
--
Kind regards,
Michael Monnerie, Ing. BSc
it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531
Thread overview: 9+ messages
2012-08-12 9:14 howto keep xfs directory searches fast for a long time Michael Monnerie
2012-08-12 19:05 ` Peter Grandi
2012-08-12 19:35 ` Stan Hoeppner
2012-08-13 16:44 ` Michael Monnerie [this message]
2012-08-13 21:20 ` Stan Hoeppner
2012-08-13 23:56 ` Dave Chinner
2012-08-14 9:16 ` Michael Monnerie
2012-08-14 16:59 ` Stan Hoeppner
2012-08-15 8:59 ` Michael Monnerie