From: Michael Monnerie <michael.monnerie@is.it-management.at>
To: xfs@oss.sgi.com
Subject: deleting 2TB lots of files with delaylog: sync helps?
Date: Wed, 1 Sep 2010 01:30:41 +0200 [thread overview]
Message-ID: <201009010130.41500@zmi.at> (raw)
I'm just trying the delaylog mount option on a filesystem (LVM over
2x 2TB 4K sector drives), and I see this while running 8 processes
of "rm -r * & 2>/dev/null":
Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
sdc        2.80   33.40 125.00  64.60  720.00  939.30    17.50     0.55   2.91   1.71  32.40
sdd        0.00   25.60 122.80  63.40  662.40  874.40    16.51     0.52   2.77   1.96  36.54
dm-0       0.00    0.00 250.60 123.00 1382.40 1941.70    17.79     1.64   4.39   1.74  65.08
Then I issue "sync", and utilisation increases:
Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
sdc        0.00    0.20  15.80 175.40   84.00 2093.30    22.78     0.62   3.26   2.93  55.94
sdd        0.00    1.00  13.40 177.60   79.20 2114.10    22.97     0.69   3.63   3.34  63.80
dm-0       0.00    0.00  29.20 101.20  163.20 4207.40    67.03     1.11   8.51   7.56  98.60
This is reproducible. It could be that the sync simply causes more writes and stalls reads,
so overall it's slower, but I'm wondering why none of the devices reports 100% utilisation,
which I'd expect during deletes. Or is this again the known quirk of the utilisation
calculation, where queued writes don't really show up?
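(For reference: iostat's %util is derived from the io_ticks counter in /proc/diskstats, i.e. the fraction of wall time the device had at least one request in flight. It says nothing about queue depth, so a device absorbing deep queued writes can sit well below 100% while still being the bottleneck. A rough sketch of the calculation; the device name is only an example:)

```shell
#!/bin/sh
# Rough sketch of iostat's %util: field 13 of a /proc/diskstats line
# (the 10th stat after the device name) is the milliseconds the device
# spent with I/O in flight (io_ticks).
# %util = delta(io_ticks) / elapsed_ms * 100; over a 1000 ms window,
# dividing the delta by 10 gives the percentage directly.
dev=${1:-sdc}                       # example device name
t1=$(awk -v d="$dev" '$3 == d { print $13 }' /proc/diskstats)
sleep 1
t2=$(awk -v d="$dev" '$3 == d { print $13 }' /proc/diskstats)
echo "$dev util: $(( (t2 - t1) / 10 ))%"
```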
I know I should have benchmarked and tested first; I just wanted to draw attention to this,
as there might be something to optimise here.
Another strange thing: after the 8 "rm -r" processes finished, some subdirectories that
hadn't been deleted were left over; running a single "rm -r" then cleaned them out. Could
that be a problem with "delaylog", or can it happen when several rm processes compete in
the same directories?
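(One explanation that doesn't involve delaylog: when several rm processes traverse the same directories, one can unlink entries out from under another; the loser gets ENOENT mid-traversal, reports an error, and may skip removing that directory, so an empty directory survives until a later pass. A toy, timing-dependent reproduction:)

```shell
#!/bin/sh
# Toy race: two overlapping "rm -r" runs on the same tree. When one
# rm unlinks a file the other is about to unlink, the second gets
# ENOENT mid-traversal and may skip the rmdir of the parent, leaving
# an empty directory behind. Whether that happens depends on timing.
tmp=$(mktemp -d)
for i in 1 2 3 4; do
    mkdir "$tmp/d$i"
    touch "$tmp/d$i/a" "$tmp/d$i/b"
done
rm -r "$tmp"/* 2>/dev/null &
rm -r "$tmp"/* 2>/dev/null &
wait
ls "$tmp"       # may still list some dN directories, or nothing
rm -rf "$tmp"   # a final single pass always cleans up
```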
This is kernel 2.6.35.4
--
With kind regards,
Michael Monnerie, Ing. BSc
it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31
****** Aktuelles Radiointerview! ******
http://www.it-podcast.at/aktuelle-sendung.html
// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs