From: Eric Sandeen <sandeen@sandeen.net>
To: Joe Landman <joe.landman@gmail.com>
Cc: linux-raid <linux-raid@vger.kernel.org>, xfs@oss.sgi.com
Subject: Re: very slow file deletion on an SSD
Date: Sat, 26 May 2012 21:43:22 -0500
Message-ID: <4FC194CA.9070100@sandeen.net>
In-Reply-To: <4FC19408.5020502@sandeen.net>
On 5/26/12 9:40 PM, Eric Sandeen wrote:
> On 5/26/12 8:49 PM, Joe Landman wrote:
>> On 05/26/2012 08:07 PM, Dave Chinner wrote:
>>> On Sat, May 26, 2012 at 07:25:55PM -0400, Joe Landman wrote:
>>>> [root@siFlash test]# ls -alF | wc -l
>>>> 59
>>>> [root@siFlash test]# /usr/bin/time rm -f *
>>>> ^C0.00user 8.46system 0:09.55elapsed 88%CPU (0avgtext+0avgdata
>>>> 2384maxresident)k
>>>> 25352inputs+0outputs (0major+179minor)pagefaults 0swaps
>>>
>>> It's burning an awful lot of CPU time during this remove.
>>>
>>>> [root@siFlash test]# ls -alF | wc -l
>>>> 48
>>>
>>> So, 11 files were removed, it was basically CPU bound, and one took
>>> 2.6 seconds.
>>>
>>> So, how big are the files, and does the one that took 2.6s have tens
>>> of thousands of extents ('xfs_bmap -vp *' will dump the extent maps
>>> for all the files)?
>>
>> Getting some sort of out-of-memory error with xfs_bmap
>>
>> [root@siFlash test]# ls -alF
>> total 50466476
>> drwxr-xr-x 2 root root 4096 May 26 21:40 ./
>> drwxr-xr-x 3 root root 17 May 26 19:32 ../
>> -rw------- 1 root root 1073741824 May 26 19:36 2.r.49.0
> ...
>
> <snip>
>
>> [root@siFlash test]# ls -alF > x
>>
>> [root@siFlash test]# xfs_bmap -vp x
>> x:
>> EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
>> 0: [0..7]: 212681896..212681903 2 (7555752..7555759) 8 01111
>>
>> [root@siFlash test]# xfs_bmap -vp 2.r.96.0
>> xfs_bmap: xfsctl(XFS_IOC_GETBMAPX) iflags=0x4 ["2.r.96.0"]: Cannot allocate memory
>
> Try filefrag -v maybe, if your e2fsprogs is new enough.
>
> Trying to remember, ENOMEM in bmap rings a bell... but this is possibly indicative of an extremely fragmented file.
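As an aside on the filefrag suggestion above: a new-enough filefrag gets its
extent counts from the generic FS_IOC_FIEMAP ioctl instead of the XFS-specific
GETBMAPX call that failed here.  For illustration only, a minimal standalone
sketch of counting extents that way (a hypothetical tool, not filefrag's
actual code):

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>
	#include <linux/fiemap.h>

	int main(int argc, char **argv)
	{
		struct fiemap fm;
		int fd;

		if (argc < 2)
			return 1;
		fd = open(argv[1], O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/*
		 * With fm_extent_count == 0 the kernel fills in only
		 * fm_mapped_extents -- the number of extents -- without
		 * copying any extent records back, so no large buffer is
		 * needed just to see how fragmented a file is.
		 */
		memset(&fm, 0, sizeof(fm));
		fm.fm_start = 0;
		fm.fm_length = ~0ULL;		/* map the whole file */
		fm.fm_extent_count = 0;

		if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
			perror("FS_IOC_FIEMAP");
			return 1;
		}
		printf("%s: %u extents\n", argv[1], fm.fm_mapped_extents);
		close(fd);
		return 0;
	}

A per-file count in the tens of thousands of extents would back up the
fragmentation theory.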
Ah.

f074211f ("xfs: fallback to vmalloc for large buffers in xfs_getbmap")

fixed it in 3.4.
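The commit subject describes the usual large-buffer allocation pattern: try a
physically contiguous kmalloc first, and fall back to vmalloc, which only
needs virtually contiguous pages, when the request is too big.  A minimal
sketch of that pattern -- illustrative only, not the actual diff, and the
helper names are made up:

	#include <linux/slab.h>
	#include <linux/vmalloc.h>
	#include <linux/mm.h>

	/*
	 * Try kmalloc first; a huge extent map can make that fail, so
	 * fall back to vmalloc instead of returning ENOMEM to userspace.
	 */
	static void *getbmap_alloc(size_t size)
	{
		void *p;

		p = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
		if (!p)
			p = vmalloc(size);
		return p;
	}

	/* Free with whichever routine matches the allocation. */
	static void getbmap_free(void *p)
	{
		if (is_vmalloc_addr(p))
			vfree(p);
		else
			kfree(p);
	}

With that in place, a badly fragmented file's extent list no longer makes the
XFS_IOC_GETBMAPX ioctl fail with "Cannot allocate memory".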
-Eric
Thread overview: 24+ messages
2012-05-25 10:37 very slow file deletion on an SSD Joe Landman
2012-05-25 10:45 ` Bernd Schubert
2012-05-25 10:49 ` Joe Landman
2012-05-25 14:48 ` Roberto Spadim
2012-05-25 16:57 ` Ben Myers
2012-05-25 16:54 ` Joe Landman
2012-05-25 16:59 ` Christoph Hellwig
2012-05-26 16:00 ` David Brown
2012-05-26 19:56 ` Stan Hoeppner
2012-05-26 23:18 ` Dave Chinner
2012-05-26 23:25 ` Joe Landman
2012-05-27 0:07 ` Dave Chinner
2012-05-27 0:10 ` joe.landman
2012-05-27 1:49 ` Joe Landman
2012-05-27 2:40 ` Eric Sandeen
2012-05-27 2:43 ` Eric Sandeen [this message]
2012-05-27 7:34 ` Stefan Ring
2012-05-27 13:15 ` Krzysztof Adamski
2012-05-27 14:59 ` joe.landman
2012-05-27 16:07 ` Eric Sandeen
2012-05-27 17:14 ` Joe Landman
2012-05-27 17:17 ` Joe Landman
2012-05-26 23:55 ` Joe Landman
2012-05-27 0:07 ` Jon Nelson