* very slow file deletion on an SSD
@ 2012-05-25 10:37 Joe Landman
2012-05-25 10:45 ` Bernd Schubert
` (3 more replies)
0 siblings, 4 replies; 24+ messages in thread
From: Joe Landman @ 2012-05-25 10:37 UTC (permalink / raw)
To: xfs, linux-raid
Hi folks:
Just ran into this (see posted output at bottom). 3.2.14 kernel, MD
RAID 5, xfs file system. Not sure (precisely) where the problem is,
hence posting to both lists.
[root@siFlash ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md22 : active raid5 sdl[0] sds[7] sdx[6] sdu[5] sdk[4] sdz[3] sdw[2] sdr[1]
1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2 [8/8]
[UUUUUUUU]
md20 : active raid5 sdh[0] sdf[7] sdm[6] sdd[5] sdc[4] sde[3] sdi[2] sdg[1]
1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2 [8/8]
[UUUUUUUU]
md21 : active raid5 sdy[0] sdq[7] sdp[6] sdo[5] sdn[4] sdj[3] sdv[2] sdt[1]
1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2 [8/8]
[UUUUUUUU]
md0 : active raid1 sdb1[1] sda1[0]
93775800 blocks super 1.0 [2/2] [UU]
bitmap: 1/1 pages [4KB], 65536KB chunk
md2* are SSD RAID5 arrays we are experimenting with. Xfs file systems
atop them:
[root@siFlash ~]# mount | grep md2
/dev/md20 on /data/1 type xfs (rw)
/dev/md21 on /data/2 type xfs (rw)
/dev/md22 on /data/3 type xfs (rw)
vanilla mount options (following Dave Chinner's long standing advice)
meta-data=/dev/md20 isize=2048 agcount=32,
agsize=12820392 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=410252304, imaxpct=5
= sunit=8 swidth=56 blks
naming =version 2 bsize=65536 ascii-ci=0
log =internal bsize=4096 blocks=30720, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@siFlash ~]# mdadm --detail /dev/md20
/dev/md20:
Version : 1.2
Creation Time : Sun Apr 1 19:36:39 2012
Raid Level : raid5
Array Size : 1641009216 (1564.99 GiB 1680.39 GB)
Used Dev Size : 234429888 (223.57 GiB 240.06 GB)
Raid Devices : 8
Total Devices : 8
Persistence : Superblock is persistent
Update Time : Fri May 25 06:26:23 2012
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 32K
Name : siFlash.sicluster:20
UUID : 2f023323:6ec29eb9:a943de06:f6e0c25d
Events : 296
Number Major Minor RaidDevice State
0 8 112 0 active sync /dev/sdh
1 8 96 1 active sync /dev/sdg
2 8 128 2 active sync /dev/sdi
3 8 64 3 active sync /dev/sde
4 8 32 4 active sync /dev/sdc
5 8 48 5 active sync /dev/sdd
6 8 192 6 active sync /dev/sdm
7 8 80 7 active sync /dev/sdf
All the SSDs are on deadline scheduler
[root@siFlash ~]# cat /sys/block/sd*/queue/scheduler | uniq
noop [deadline] cfq
All this said, deletes from this unit are taking 1-2 seconds per file ...
[root@siFlash ~]# strace -ttt -T rm -f /data/2/test/*
1337941514.040788 execve("/bin/rm", ["rm", "-f",
"/data/2/test/2.8t-r.97.0", "/data/2/test/2.8t-r.98.0",
"/data/2/test/2.8t-r.99.0", "/data/2/test/2.9.0"], [/* 40 vars */]) = 0
<0.000552>
1337941514.041713 brk(0) = 0x60d000 <0.000031>
1337941514.041927 mmap(NULL, 4096, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7bc2779000 <0.000032>
1337941514.042113 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No
such file or directory) <0.000109>
1337941514.042395 open("/etc/ld.so.cache", O_RDONLY) = 3 <0.000050>
1337941514.042614 fstat(3, {st_mode=S_IFREG|0644, st_size=81118, ...}) =
0 <0.000102>
1337941514.042928 mmap(NULL, 81118, PROT_READ, MAP_PRIVATE, 3, 0) =
0x7f7bc2765000 <0.000042>
1337941514.043078 close(3) = 0 <0.000019>
1337941514.043235 open("/lib64/libc.so.6", O_RDONLY) = 3 <0.000115>
1337941514.043477 read(3,
"\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\360\355\301W4\0\0\0"...,
832) = 832 <0.000039>
1337941514.043647 fstat(3, {st_mode=S_IFREG|0755, st_size=1908792, ...})
= 0 <0.000020>
1337941514.043860 mmap(0x3457c00000, 3733672, PROT_READ|PROT_EXEC,
MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3457c00000 <0.000085>
1337941514.044065 mprotect(0x3457d86000, 2097152, PROT_NONE) = 0 <0.000034>
1337941514.044191 mmap(0x3457f86000, 20480, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x186000) = 0x3457f86000 <0.000034>
1337941514.044388 mmap(0x3457f8b000, 18600, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3457f8b000 <0.000085>
1337941514.044592 close(3) = 0 <0.000058>
1337941514.044763 mmap(NULL, 4096, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7bc2764000 <0.000039>
1337941514.044893 mmap(NULL, 4096, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7bc2763000 <0.000020>
1337941514.044981 mmap(NULL, 4096, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7bc2762000 <0.000018>
1337941514.045076 arch_prctl(ARCH_SET_FS, 0x7f7bc2763700) = 0 <0.000018>
1337941514.045183 mprotect(0x3457f86000, 16384, PROT_READ) = 0 <0.000023>
1337941514.045270 mprotect(0x345761f000, 4096, PROT_READ) = 0 <0.000019>
1337941514.045350 munmap(0x7f7bc2765000, 81118) = 0 <0.000028>
1337941514.045619 brk(0) = 0x60d000 <0.000017>
1337941514.045698 brk(0x62e000) = 0x62e000 <0.000018>
1337941514.045803 open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
<0.000028>
1337941514.045904 fstat(3, {st_mode=S_IFREG|0644, st_size=99158704,
...}) = 0 <0.000017>
1337941514.046012 mmap(NULL, 99158704, PROT_READ, MAP_PRIVATE, 3, 0) =
0x7f7bbc8d1000 <0.000020>
1337941514.046099 close(3) = 0 <0.000017>
1337941514.046235 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost
isig icanon echo ...}) = 0 <0.000020>
1337941514.046373 newfstatat(AT_FDCWD, "/data/2/test/2.8t-r.97.0",
{st_mode=S_IFREG|0600, st_size=1073741824, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000024>
1337941514.046504 unlinkat(AT_FDCWD, "/data/2/test/2.8t-r.97.0", 0) = 0
<1.357571>
1337941515.404257 newfstatat(AT_FDCWD, "/data/2/test/2.8t-r.98.0",
{st_mode=S_IFREG|0600, st_size=1073741824, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000072>
1337941515.404485 unlinkat(AT_FDCWD, "/data/2/test/2.8t-r.98.0", 0) = 0
<1.608016>
1337941517.012706 newfstatat(AT_FDCWD, "/data/2/test/2.8t-r.99.0",
{st_mode=S_IFREG|0600, st_size=1073741824, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000082>
1337941517.012957 unlinkat(AT_FDCWD, "/data/2/test/2.8t-r.99.0", 0) = 0
<1.133890>
1337941518.146983 newfstatat(AT_FDCWD, "/data/2/test/2.9.0",
{st_mode=S_IFREG|0600, st_size=8589934592, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000023>
1337941518.147145 unlinkat(AT_FDCWD, "/data/2/test/2.9.0", 0) = 0 <0.938754>
1337941519.086125 close(0) = 0 <0.000102>
1337941519.086357 close(1) = 0 <0.000061>
1337941519.086540 close(2) = 0 <0.000021>
1337941519.086694 exit_group(0) = ?
Anything obvious that we are doing wrong?
The machine may be occupied for a bit; it might be a few days before we
can get results back.
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-25 10:37 very slow file deletion on an SSD Joe Landman
@ 2012-05-25 10:45 ` Bernd Schubert
2012-05-25 10:49 ` Joe Landman
2012-05-25 16:57 ` Ben Myers
` (2 subsequent siblings)
3 siblings, 1 reply; 24+ messages in thread
From: Bernd Schubert @ 2012-05-25 10:45 UTC (permalink / raw)
To: Joe Landman; +Cc: xfs, linux-raid
Hello Joe,
On 05/25/2012 12:37 PM, Joe Landman wrote:
> Hi folks:
>
> Just ran into this (see posted output at bottom). 3.2.14 kernel, MD RAID
> 5, xfs file system. Not sure (precisely) where the problem is, hence
> posting to both lists.
Do you have anything enabled that would cause your files to have xattrs?
http://oss.sgi.com/archives/xfs/2011-08/msg00233.html
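For reference, a minimal sketch of one way to check for xattrs on the
test files (getfattr from the attr package is assumed to be installed;
SELinux, for example, attaches a security.selinux xattr to every file it
labels):
# dump all extended attributes, in every namespace, for one test file
getfattr -d -m - /data/2/test/2.9.0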
Cheers,
Bernd
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-25 10:45 ` Bernd Schubert
@ 2012-05-25 10:49 ` Joe Landman
2012-05-25 14:48 ` Roberto Spadim
0 siblings, 1 reply; 24+ messages in thread
From: Joe Landman @ 2012-05-25 10:49 UTC (permalink / raw)
To: Bernd Schubert; +Cc: xfs, linux-raid
On 05/25/2012 06:45 AM, Bernd Schubert wrote:
> Hello Joe,
>
> On 05/25/2012 12:37 PM, Joe Landman wrote:
>> Hi folks:
>>
>> Just ran into this (see posted output at bottom). 3.2.14 kernel, MD RAID
>> 5, xfs file system. Not sure (precisely) where the problem is, hence
>> posting to both lists.
>
> Do you have anything enabled that would cause your files to have xattrs?
Not to my knowledge. I'll go back and re-create some files and see.
> http://oss.sgi.com/archives/xfs/2011-08/msg00233.html
Hmmm
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-25 10:49 ` Joe Landman
@ 2012-05-25 14:48 ` Roberto Spadim
0 siblings, 0 replies; 24+ messages in thread
From: Roberto Spadim @ 2012-05-25 14:48 UTC (permalink / raw)
To: Joe Landman; +Cc: Bernd Schubert, xfs, linux-raid
Could the noop scheduler be better for an SSD?
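For reference, a sketch of a quick way to test that on the fly, assuming
the sd[c-z] glob covers exactly the SSD array members shown in
/proc/mdstat above:
# switch every member SSD to the noop elevator, then recheck
for q in /sys/block/sd[c-z]/queue/scheduler; do echo noop > "$q"; done
cat /sys/block/sd*/queue/scheduler | uniq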
2012/5/25 Joe Landman <joe.landman@gmail.com>:
> On 05/25/2012 06:45 AM, Bernd Schubert wrote:
>>
>> Hello Joe,
>>
>> On 05/25/2012 12:37 PM, Joe Landman wrote:
>>>
>>> Hi folks:
>>>
>>> Just ran into this (see posted output at bottom). 3.2.14 kernel, MD RAID
>>> 5, xfs file system. Not sure (precisely) where the problem is, hence
>>> posting to both lists.
>>
>>
>> Do you have anything enabled that would cause your files to have xattrs?
>
>
> Not to my knowledge. I'll go back and re-create some files and see.
>
>> http://oss.sgi.com/archives/xfs/2011-08/msg00233.html
>
>
> Hmmm
>
>
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: landman@scalableinformatics.com
> web : http://scalableinformatics.com
> http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax : +1 866 888 3112
> cell : +1 734 612 4615
>
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-25 16:57 ` Ben Myers
@ 2012-05-25 16:54 ` Joe Landman
2012-05-25 16:59 ` Christoph Hellwig
1 sibling, 0 replies; 24+ messages in thread
From: Joe Landman @ 2012-05-25 16:54 UTC (permalink / raw)
To: Ben Myers; +Cc: xfs, linux-raid
On 05/25/2012 12:57 PM, Ben Myers wrote:
> Hey Joe,
[...]
> There are a couple recent fixes related to discard that are probably
> appropriate for 3.2-stable.
[...]
> I'm not saying that they'll fix your problem, but you should consider giving
> them a whirl.
Ok, will do a respin over the weekend and try from there. Thanks!
Joe
>
> Regards,
> Ben
>
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-25 10:37 very slow file deletion on an SSD Joe Landman
2012-05-25 10:45 ` Bernd Schubert
@ 2012-05-25 16:57 ` Ben Myers
2012-05-25 16:54 ` Joe Landman
2012-05-25 16:59 ` Christoph Hellwig
2012-05-26 19:56 ` Stan Hoeppner
2012-05-26 23:18 ` Dave Chinner
3 siblings, 2 replies; 24+ messages in thread
From: Ben Myers @ 2012-05-25 16:57 UTC (permalink / raw)
To: Joe Landman; +Cc: xfs, linux-raid
Hey Joe,
On Fri, May 25, 2012 at 06:37:05AM -0400, Joe Landman wrote:
> Just ran into this (see posted output at bottom). 3.2.14 kernel,
> MD RAID 5, xfs file system. Not sure (precisely) where the problem
> is, hence posting to both lists.
>
> [root@siFlash ~]# cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md22 : active raid5 sdl[0] sds[7] sdx[6] sdu[5] sdk[4] sdz[3] sdw[2] sdr[1]
> 1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2
> [8/8] [UUUUUUUU]
>
> md20 : active raid5 sdh[0] sdf[7] sdm[6] sdd[5] sdc[4] sde[3] sdi[2] sdg[1]
> 1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2
> [8/8] [UUUUUUUU]
>
> md21 : active raid5 sdy[0] sdq[7] sdp[6] sdo[5] sdn[4] sdj[3] sdv[2] sdt[1]
> 1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2
> [8/8] [UUUUUUUU]
>
> md0 : active raid1 sdb1[1] sda1[0]
> 93775800 blocks super 1.0 [2/2] [UU]
> bitmap: 1/1 pages [4KB], 65536KB chunk
>
>
> md2* are SSD RAID5 arrays we are experimenting with. Xfs file
> systems atop them:
>
> [root@siFlash ~]# mount | grep md2
> /dev/md20 on /data/1 type xfs (rw)
> /dev/md21 on /data/2 type xfs (rw)
> /dev/md22 on /data/3 type xfs (rw)
>
> vanilla mount options (following Dave Chinner's long standing advice)
>
> meta-data=/dev/md20 isize=2048 agcount=32,
> agsize=12820392 blks
> = sectsz=512 attr=2
> data = bsize=4096 blocks=410252304, imaxpct=5
> = sunit=8 swidth=56 blks
> naming =version 2 bsize=65536 ascii-ci=0
> log =internal bsize=4096 blocks=30720, version=2
> = sectsz=512 sunit=8 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
>
> [root@siFlash ~]# mdadm --detail /dev/md20
> /dev/md20:
> Version : 1.2
> Creation Time : Sun Apr 1 19:36:39 2012
> Raid Level : raid5
> Array Size : 1641009216 (1564.99 GiB 1680.39 GB)
> Used Dev Size : 234429888 (223.57 GiB 240.06 GB)
> Raid Devices : 8
> Total Devices : 8
> Persistence : Superblock is persistent
>
> Update Time : Fri May 25 06:26:23 2012
> State : clean
> Active Devices : 8
> Working Devices : 8
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 32K
>
> Name : siFlash.sicluster:20
> UUID : 2f023323:6ec29eb9:a943de06:f6e0c25d
> Events : 296
>
> Number Major Minor RaidDevice State
> 0 8 112 0 active sync /dev/sdh
> 1 8 96 1 active sync /dev/sdg
> 2 8 128 2 active sync /dev/sdi
> 3 8 64 3 active sync /dev/sde
> 4 8 32 4 active sync /dev/sdc
> 5 8 48 5 active sync /dev/sdd
> 6 8 192 6 active sync /dev/sdm
> 7 8 80 7 active sync /dev/sdf
>
> All the SSDs are on deadline scheduler
>
> [root@siFlash ~]# cat /sys/block/sd*/queue/scheduler | uniq
> noop [deadline] cfq
>
>
> All this said, deletes from this unit are taking 1-2 seconds per file ...
>
> [root@siFlash ~]# strace -ttt -T rm -f /data/2/test/*
> 1337941514.040788 execve("/bin/rm", ["rm", "-f",
> "/data/2/test/2.8t-r.97.0", "/data/2/test/2.8t-r.98.0",
> "/data/2/test/2.8t-r.99.0", "/data/2/test/2.9.0"], [/* 40 vars */])
> = 0 <0.000552>
> 1337941514.041713 brk(0) = 0x60d000 <0.000031>
> 1337941514.041927 mmap(NULL, 4096, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7bc2779000 <0.000032>
> 1337941514.042113 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No
> such file or directory) <0.000109>
> 1337941514.042395 open("/etc/ld.so.cache", O_RDONLY) = 3 <0.000050>
> 1337941514.042614 fstat(3, {st_mode=S_IFREG|0644, st_size=81118,
> ...}) = 0 <0.000102>
> 1337941514.042928 mmap(NULL, 81118, PROT_READ, MAP_PRIVATE, 3, 0) =
> 0x7f7bc2765000 <0.000042>
> 1337941514.043078 close(3) = 0 <0.000019>
> 1337941514.043235 open("/lib64/libc.so.6", O_RDONLY) = 3 <0.000115>
> 1337941514.043477 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\360\355\301W4\0\0\0"...,
> 832) = 832 <0.000039>
> 1337941514.043647 fstat(3, {st_mode=S_IFREG|0755, st_size=1908792,
> ...}) = 0 <0.000020>
> 1337941514.043860 mmap(0x3457c00000, 3733672, PROT_READ|PROT_EXEC,
> MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3457c00000 <0.000085>
> 1337941514.044065 mprotect(0x3457d86000, 2097152, PROT_NONE) = 0 <0.000034>
> 1337941514.044191 mmap(0x3457f86000, 20480, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x186000) = 0x3457f86000
> <0.000034>
> 1337941514.044388 mmap(0x3457f8b000, 18600, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3457f8b000
> <0.000085>
> 1337941514.044592 close(3) = 0 <0.000058>
> 1337941514.044763 mmap(NULL, 4096, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7bc2764000 <0.000039>
> 1337941514.044893 mmap(NULL, 4096, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7bc2763000 <0.000020>
> 1337941514.044981 mmap(NULL, 4096, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7bc2762000 <0.000018>
> 1337941514.045076 arch_prctl(ARCH_SET_FS, 0x7f7bc2763700) = 0 <0.000018>
> 1337941514.045183 mprotect(0x3457f86000, 16384, PROT_READ) = 0 <0.000023>
> 1337941514.045270 mprotect(0x345761f000, 4096, PROT_READ) = 0 <0.000019>
> 1337941514.045350 munmap(0x7f7bc2765000, 81118) = 0 <0.000028>
> 1337941514.045619 brk(0) = 0x60d000 <0.000017>
> 1337941514.045698 brk(0x62e000) = 0x62e000 <0.000018>
> 1337941514.045803 open("/usr/lib/locale/locale-archive", O_RDONLY) =
> 3 <0.000028>
> 1337941514.045904 fstat(3, {st_mode=S_IFREG|0644, st_size=99158704,
> ...}) = 0 <0.000017>
> 1337941514.046012 mmap(NULL, 99158704, PROT_READ, MAP_PRIVATE, 3, 0)
> = 0x7f7bbc8d1000 <0.000020>
> 1337941514.046099 close(3) = 0 <0.000017>
> 1337941514.046235 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400
> opost isig icanon echo ...}) = 0 <0.000020>
> 1337941514.046373 newfstatat(AT_FDCWD, "/data/2/test/2.8t-r.97.0",
> {st_mode=S_IFREG|0600, st_size=1073741824, ...},
> AT_SYMLINK_NOFOLLOW) = 0 <0.000024>
> 1337941514.046504 unlinkat(AT_FDCWD, "/data/2/test/2.8t-r.97.0", 0)
> = 0 <1.357571>
> 1337941515.404257 newfstatat(AT_FDCWD, "/data/2/test/2.8t-r.98.0",
> {st_mode=S_IFREG|0600, st_size=1073741824, ...},
> AT_SYMLINK_NOFOLLOW) = 0 <0.000072>
> 1337941515.404485 unlinkat(AT_FDCWD, "/data/2/test/2.8t-r.98.0", 0)
> = 0 <1.608016>
> 1337941517.012706 newfstatat(AT_FDCWD, "/data/2/test/2.8t-r.99.0",
> {st_mode=S_IFREG|0600, st_size=1073741824, ...},
> AT_SYMLINK_NOFOLLOW) = 0 <0.000082>
> 1337941517.012957 unlinkat(AT_FDCWD, "/data/2/test/2.8t-r.99.0", 0)
> = 0 <1.133890>
> 1337941518.146983 newfstatat(AT_FDCWD, "/data/2/test/2.9.0",
> {st_mode=S_IFREG|0600, st_size=8589934592, ...},
> AT_SYMLINK_NOFOLLOW) = 0 <0.000023>
> 1337941518.147145 unlinkat(AT_FDCWD, "/data/2/test/2.9.0", 0) = 0 <0.938754>
> 1337941519.086125 close(0) = 0 <0.000102>
> 1337941519.086357 close(1) = 0 <0.000061>
> 1337941519.086540 close(2) = 0 <0.000021>
> 1337941519.086694 exit_group(0) = ?
>
> Anything obvious that we are doing wrong?
There are a couple recent fixes related to discard that are probably
appropriate for 3.2-stable.
commit b1c770c273a4787069306fc82aab245e9ac72e9d
Author: Dave Chinner <dchinner@redhat.com>
Date: Wed Dec 21 00:07:42 2011 +0000
xfs: fix endian conversion issue in discard code
When finding the longest extent in an AG, we read the value directly
out of the AGF buffer without endian conversion. This will give an
incorrect length, resulting in FITRIM operations potentially not
trimming everything that it should.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
commit a66d636385d621e98a915233250356c394a437de
Author: Dave Chinner <dchinner@redhat.com>
Date: Thu Mar 22 05:15:12 2012 +0000
xfs: fix fstrim offset calculations
xfs_ioc_fstrim() doesn't treat the incoming offset and length
correctly. It treats them as a filesystem block address, rather than
a disk address. This is wrong because the range passed in is a
linear representation, while the filesystem block address notation
is a sparse representation. Hence we cannot convert the range direct
to filesystem block units and then use that for calculating the
range to trim.
While this sounds dangerous, the problem is limited to calculating
what AGs need to be trimmed. The code that calculates the actual
ranges to trim gets the right result (i.e. only ever discards free
space), even though it uses the wrong ranges to limit what is
trimmed. Hence this is not a bug that endangers user data.
Fix this by treating the range as a disk address range and use the
appropriate functions to convert the range into the desired formats
for calculations.
Further, fix the first free extent lookup (the longest) to actually
find the largest free extent. Currently this lookup uses a <=
lookup, which results in finding the extent to the left of the
largest because we can never get an exact match on the largest
extent. This is due to the fact that while we know its size, we
don't know its location and so the exact match fails and we move
one record to the left to get the next largest extent. Instead, use
a >= search so that the lookup returns the largest extent regardless
of the fact we don't get an exact match on it.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
I'm not saying that they'll fix your problem, but you should consider giving
them a whirl.
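For reference, a sketch of how those two fixes could be pulled into the
3.2 tree under test, using the commit IDs quoted above (this assumes a
checkout of the mainline kernel tree; conflicts may need hand-resolving):
git cherry-pick b1c770c273a4787069306fc82aab245e9ac72e9d
git cherry-pick a66d636385d621e98a915233250356c394a437de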
Regards,
Ben
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-25 16:57 ` Ben Myers
2012-05-25 16:54 ` Joe Landman
@ 2012-05-25 16:59 ` Christoph Hellwig
2012-05-26 16:00 ` David Brown
1 sibling, 1 reply; 24+ messages in thread
From: Christoph Hellwig @ 2012-05-25 16:59 UTC (permalink / raw)
To: Ben Myers; +Cc: Joe Landman, linux-raid, xfs
On Fri, May 25, 2012 at 11:57:19AM -0500, Ben Myers wrote:
> There are a couple recent fixes related to discard that are probably
> appropriate for 3.2-stable.
Discard is not enabled by default.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-25 16:59 ` Christoph Hellwig
@ 2012-05-26 16:00 ` David Brown
0 siblings, 0 replies; 24+ messages in thread
From: David Brown @ 2012-05-26 16:00 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Ben Myers, Joe Landman, linux-raid, xfs
On 25/05/12 18:59, Christoph Hellwig wrote:
> On Fri, May 25, 2012 at 11:57:19AM -0500, Ben Myers wrote:
>> There are a couple recent fixes related to discard that are probably
>> appropriate for 3.2-stable.
>
> Discard is not enabled by default.
>
You would not want to enable discard if slow deletes are an issue -
benchmarks (on ext4) show that enabling trim makes metadata operations,
and delete in particular, much slower.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-25 10:37 very slow file deletion on an SSD Joe Landman
2012-05-25 10:45 ` Bernd Schubert
2012-05-25 16:57 ` Ben Myers
@ 2012-05-26 19:56 ` Stan Hoeppner
2012-05-26 23:18 ` Dave Chinner
3 siblings, 0 replies; 24+ messages in thread
From: Stan Hoeppner @ 2012-05-26 19:56 UTC (permalink / raw)
To: Joe Landman; +Cc: xfs, linux-raid
On 5/25/2012 5:37 AM, Joe Landman wrote:
> Hi folks:
Hi Joe,
I may be all wet here, but this is the only thing that jumped out at me:
> meta-data=/dev/md20
> data = sunit=8 swidth=56 blks
> log =internal sunit=8 blks
8*56KB (or 7*64KB) = 448KB write out
> [root@siFlash ~]# mdadm --detail /dev/md20
> /dev/md20:
> Version : 1.2
> Raid Devices : 8
> Layout : left-symmetric
> Chunk Size : 32K
7*32KB = 224KB stripe size
> All this said, deletes from this unit are taking 1-2 seconds per file ...
> Anything obvious that we are doing wrong?
Other than your XFS writeout being apparently twice the size of your
RAID stripe, nothing else stands out that I can see. I'm not a block
layer expert. And since mkfs.xfs came up with 448KB, apparently this
should be fine. However, everything in the literature, and the experts
on this list, tell us to always precisely align XFS to the RAID stripe.
That doesn't seem to be the case here.
--
Stan
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-25 10:37 very slow file deletion on an SSD Joe Landman
` (2 preceding siblings ...)
2012-05-26 19:56 ` Stan Hoeppner
@ 2012-05-26 23:18 ` Dave Chinner
2012-05-26 23:25 ` Joe Landman
2012-05-26 23:55 ` Joe Landman
3 siblings, 2 replies; 24+ messages in thread
From: Dave Chinner @ 2012-05-26 23:18 UTC (permalink / raw)
To: Joe Landman; +Cc: linux-raid, xfs
On Fri, May 25, 2012 at 06:37:05AM -0400, Joe Landman wrote:
> Hi folks:
>
> Just ran into this (see posted output at bottom). 3.2.14 kernel,
> MD RAID 5, xfs file system. Not sure (precisely) where the problem
> is, hence posting to both lists.
>
> [root@siFlash ~]# cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md22 : active raid5 sdl[0] sds[7] sdx[6] sdu[5] sdk[4] sdz[3] sdw[2] sdr[1]
> 1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2
> [8/8] [UUUUUUUU]
>
> md20 : active raid5 sdh[0] sdf[7] sdm[6] sdd[5] sdc[4] sde[3] sdi[2] sdg[1]
> 1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2
> [8/8] [UUUUUUUU]
>
> md21 : active raid5 sdy[0] sdq[7] sdp[6] sdo[5] sdn[4] sdj[3] sdv[2] sdt[1]
> 1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2
> [8/8] [UUUUUUUU]
>
> md0 : active raid1 sdb1[1] sda1[0]
> 93775800 blocks super 1.0 [2/2] [UU]
> bitmap: 1/1 pages [4KB], 65536KB chunk
>
>
> md2* are SSD RAID5 arrays we are experimenting with. Xfs file
> systems atop them:
>
> [root@siFlash ~]# mount | grep md2
> /dev/md20 on /data/1 type xfs (rw)
> /dev/md21 on /data/2 type xfs (rw)
> /dev/md22 on /data/3 type xfs (rw)
>
> vanilla mount options (following Dave Chinner's long standing advice)
>
> meta-data=/dev/md20 isize=2048 agcount=32,
> agsize=12820392 blks
> = sectsz=512 attr=2
> data = bsize=4096 blocks=410252304, imaxpct=5
> = sunit=8 swidth=56 blks
> naming =version 2 bsize=65536 ascii-ci=0
> log =internal bsize=4096 blocks=30720, version=2
> = sectsz=512 sunit=8 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
But you haven't followed my advice when it comes to using default
mkfs options, have you? You're running 2k inodes and 64k directory
block size, which is not exactly a common config.
The question is, why do you have these options configured, and are
they responsible for things being slow?
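For reference, those two settings correspond roughly to the following
mkfs invocation (a sketch reconstructed from the xfs_info output quoted
above, not necessarily the exact command that was used):
# 2k inodes and 64k directory blocks, everything else left at defaults
mkfs.xfs -i size=2048 -n size=65536 /dev/md20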
> All this said, deletes from this unit are taking 1-2 seconds per file ...
Sounds like you might be hitting the synchronous xattr removal
problem that was recently fixed (as has been mentioned already), but
even so 2 IOs don't take 1-2s to do, unless the MD RAID5 barrier
implementation is really that bad. If you mount -o nobarrier, what
happens?
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-26 23:18 ` Dave Chinner
@ 2012-05-26 23:25 ` Joe Landman
2012-05-27 0:07 ` Dave Chinner
2012-05-26 23:55 ` Joe Landman
1 sibling, 1 reply; 24+ messages in thread
From: Joe Landman @ 2012-05-26 23:25 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs, linux-raid
On 05/26/2012 07:18 PM, Dave Chinner wrote:
> On Fri, May 25, 2012 at 06:37:05AM -0400, Joe Landman wrote:
>> Hi folks:
>>
>> Just ran into this (see posted output at bottom). 3.2.14 kernel,
>> MD RAID 5, xfs file system. Not sure (precisely) where the problem
>> is, hence posting to both lists.
>>
>> [root@siFlash ~]# cat /proc/mdstat
>> Personalities : [raid1] [raid6] [raid5] [raid4]
>> md22 : active raid5 sdl[0] sds[7] sdx[6] sdu[5] sdk[4] sdz[3] sdw[2] sdr[1]
>> 1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2
>> [8/8] [UUUUUUUU]
>>
>> md20 : active raid5 sdh[0] sdf[7] sdm[6] sdd[5] sdc[4] sde[3] sdi[2] sdg[1]
>> 1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2
>> [8/8] [UUUUUUUU]
>>
>> md21 : active raid5 sdy[0] sdq[7] sdp[6] sdo[5] sdn[4] sdj[3] sdv[2] sdt[1]
>> 1641009216 blocks super 1.2 level 5, 32k chunk, algorithm 2
>> [8/8] [UUUUUUUU]
>>
>> md0 : active raid1 sdb1[1] sda1[0]
>> 93775800 blocks super 1.0 [2/2] [UU]
>> bitmap: 1/1 pages [4KB], 65536KB chunk
>>
>>
>> md2* are SSD RAID5 arrays we are experimenting with. Xfs file
>> systems atop them:
>>
>> [root@siFlash ~]# mount | grep md2
>> /dev/md20 on /data/1 type xfs (rw)
>> /dev/md21 on /data/2 type xfs (rw)
>> /dev/md22 on /data/3 type xfs (rw)
>>
>> vanilla mount options (following Dave Chinner's long standing advice)
>>
>> meta-data=/dev/md20 isize=2048 agcount=32,
>> agsize=12820392 blks
>> = sectsz=512 attr=2
>> data = bsize=4096 blocks=410252304, imaxpct=5
>> = sunit=8 swidth=56 blks
>> naming =version 2 bsize=65536 ascii-ci=0
>> log =internal bsize=4096 blocks=30720, version=2
>> = sectsz=512 sunit=8 blks, lazy-count=1
>> realtime =none extsz=4096 blocks=0, rtextents=0
>
> But you haven't followed my advice when it comes to using default
> mkfs options, have you? You're running 2k inodes and 64k directory
> block size, which is not exactly a common config.
We were experimenting. Easy to set it back and demonstrate the problem
again.
>
> The question is, why do you have these options configured, and are
> they responsible for things being slow?
>
We saw it before we experimented with some mkfs options. Will rebuild
FS and demo it again.
>> All this said, deletes from this unit are taking 1-2 seconds per file ...
>
> Sounds like you might be hitting the synchronous xattr removal
> problem that was recently fixed (as has been mentioned already), but
> even so 2 IOs don't take 1-2s to do, unless the MD RAID5 barrier
> implementation is really that bad. If you mount -o nobarrier, what
> happens?
[root@siFlash test]# ls -alF | wc -l
59
[root@siFlash test]# /usr/bin/time rm -f *
^C0.00user 8.46system 0:09.55elapsed 88%CPU (0avgtext+0avgdata
2384maxresident)k
25352inputs+0outputs (0major+179minor)pagefaults 0swaps
[root@siFlash test]# ls -alF | wc -l
48
Nope, still an issue:
1338074901.531554 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost
isig icanon echo ...}) = 0 <0.000021>
1338074901.531701 newfstatat(AT_FDCWD, "1.r.12.0",
{st_mode=S_IFREG|0600, st_size=1073741824, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000022>
1338074901.531840 unlinkat(AT_FDCWD, "1.r.12.0", 0) = 0 <2.586999>
1338074904.119032 newfstatat(AT_FDCWD, "1.r.13.0",
{st_mode=S_IFREG|0600, st_size=1073741824, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000033>
2.6 seconds for an unlink.
Rebuilding absolutely vanilla file system now, and will rerun checks.
>
> Cheers,
>
> Dave.
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-26 23:18 ` Dave Chinner
2012-05-26 23:25 ` Joe Landman
@ 2012-05-26 23:55 ` Joe Landman
2012-05-27 0:07 ` Jon Nelson
1 sibling, 1 reply; 24+ messages in thread
From: Joe Landman @ 2012-05-26 23:55 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs, linux-raid
On 05/26/2012 07:18 PM, Dave Chinner wrote:
> Sounds like you might be hitting the synchronous xattr removal
> problem that was recently fixed (as has been mentioned already), but
> even so 2 IOs don't take 1-2s to do, unless the MD RAID5 barrier
> implementation is really that bad. If you mount -o nobarrier, what
> happens?
Pure vanilla mkfs
[root@siFlash ~]# mkfs.xfs -f /dev/md20
meta-data=/dev/md20 isize=256 agcount=32,
agsize=12820384 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=410252288, imaxpct=5
= sunit=4 swidth=28 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=200320, version=2
= sectsz=512 sunit=4 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@siFlash ~]# mkfs.xfs -f /dev/md21
meta-data=/dev/md21 isize=256 agcount=32,
agsize=12820384 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=410252288, imaxpct=5
= sunit=4 swidth=28 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=200320, version=2
= sectsz=512 sunit=4 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@siFlash ~]# mkfs.xfs -f /dev/md22
meta-data=/dev/md22 isize=256 agcount=32,
agsize=12820384 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=410252288, imaxpct=5
= sunit=4 swidth=28 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=200320, version=2
= sectsz=512 sunit=4 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
and mount
[root@siFlash ~]# mount /dev/md20 /data/1
[root@siFlash ~]# mount /dev/md21 /data/2
[root@siFlash ~]# mount /dev/md22 /data/3
Still an issue:
[root@siFlash test]# ls -l | wc -l
48
[root@siFlash test]# /usr/bin/time rm -f *
^C0.00user 5.02system 0:05.33elapsed 94%CPU (0avgtext+0avgdata
2368maxresident)k
24inputs+0outputs (0major+179minor)pagefaults 0swaps
[root@siFlash test]# ls -l | wc -l
46
[root@siFlash test]#
though now it's 3.5 seconds per file delete:
1338075592.450387 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost
isig icanon echo ...}) = 0 <0.000020>
1338075592.450541 newfstatat(AT_FDCWD, "1.r.12.0",
{st_mode=S_IFREG|0600, st_size=1073741824, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000020>
1338075592.450679 unlinkat(AT_FDCWD, "1.r.12.0", 0) = 0 <3.226394>
1338075595.677274 newfstatat(AT_FDCWD, "1.r.13.0",
{st_mode=S_IFREG|0600, st_size=1073741824, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000088>
1338075595.677515 unlinkat(AT_FDCWD, "1.r.13.0", 0) = 0 <3.564176>
Remounting with nobarrier
[root@siFlash test]# mount -o remount,nobarrier /data/1
[root@siFlash test]# mount -o remount,nobarrier /data/2
[root@siFlash test]# mount -o remount,nobarrier /data/3
[root@siFlash test]# mount | grep data
/dev/md20 on /data/1 type xfs (rw,nobarrier)
/dev/md21 on /data/2 type xfs (rw,nobarrier)
/dev/md22 on /data/3 type xfs (rw,nobarrier)
Doesn't look like this helped:
1338075724.110941 newfstatat(AT_FDCWD, "1.r.15.0",
{st_mode=S_IFREG|0600, st_size=1073741824, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000035>
1338075724.111108 unlinkat(AT_FDCWD, "1.r.15.0", 0) = 0 <3.727094>
1338075727.838380 newfstatat(AT_FDCWD, "1.r.16.0",
{st_mode=S_IFREG|0600, st_size=1073741824, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000061>
1338075727.838600 unlinkat(AT_FDCWD, "1.r.16.0", 0) = 0 <2.611156>
1338075730.449949 newfstatat(AT_FDCWD, "1.r.17.0",
{st_mode=S_IFREG|0600, st_size=1073741824, ...}, AT_SYMLINK_NOFOLLOW) =
0 <0.000104>
1338075730.450165 unlinkat(AT_FDCWD, "1.r.17.0", 0) = 0 <2.869917>
2.6-3.7 seconds per unlink.
FWIW: umount (which does flushes) seems to take a while (~15-20 seconds).
Raw (uncached) read/write speed to a single array is pretty good, so I
don't think the array is a problem.
Run status group 0 (all jobs):
READ: io=81424MB, aggrb=2606.7MB/s, minb=2606.7MB/s,
maxb=2606.7MB/s, mint=31244msec, maxt=31244msec
Run status group 0 (all jobs):
WRITE: io=55025MB, aggrb=939053KB/s, minb=939053KB/s,
maxb=939053KB/s, mint=60002msec, maxt=60002msec
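A sketch of the kind of fio invocation behind numbers like these; every
parameter here is an assumption, since the actual job file was not posted:
# streaming O_DIRECT read against one array for ~30 seconds
fio --name=rawread --filename=/dev/md21 --rw=read --bs=1m --direct=1 \
    --runtime=30 --time_based --group_reporting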
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-26 23:25 ` Joe Landman
@ 2012-05-27 0:07 ` Dave Chinner
2012-05-27 0:10 ` joe.landman
2012-05-27 1:49 ` Joe Landman
0 siblings, 2 replies; 24+ messages in thread
From: Dave Chinner @ 2012-05-27 0:07 UTC (permalink / raw)
To: Joe Landman; +Cc: xfs, linux-raid
On Sat, May 26, 2012 at 07:25:55PM -0400, Joe Landman wrote:
> [root@siFlash test]# ls -alF | wc -l
> 59
> [root@siFlash test]# /usr/bin/time rm -f *
> ^C0.00user 8.46system 0:09.55elapsed 88%CPU (0avgtext+0avgdata
> 2384maxresident)k
> 25352inputs+0outputs (0major+179minor)pagefaults 0swaps
It's burning an awful lot of CPU time during this remove.
> [root@siFlash test]# ls -alF | wc -l
> 48
So, 48 files were removed, it was basically CPU bound and one took
2.6 seconds.
So, how big are the files, and does the one that took 2.6s have tens
of thousands of extents ('xfs_bmap -vp *' will dump the extent maps
for all the files)?
If not, can you use perf to get an idea of the CPU usage profile
during the rm by doing:
# perf record rm -f *
.....
and capturing the profile via:
# perf report > profile.txt
And attaching the profile.txt file so we can see where all the CPU
time is being spent? You can find perf in your kernel source tree
under the tools subdir....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-26 23:55 ` Joe Landman
@ 2012-05-27 0:07 ` Jon Nelson
0 siblings, 0 replies; 24+ messages in thread
From: Jon Nelson @ 2012-05-27 0:07 UTC (permalink / raw)
To: Joe Landman; +Cc: linux-raid
On Sat, May 26, 2012 at 6:55 PM, Joe Landman <joe.landman@gmail.com> wrote:
> On 05/26/2012 07:18 PM, Dave Chinner wrote:
>
>> Sounds like you might be hitting the synchronous xattr removal
>> problem that was recently fixed (as has been mentioned already), but
>> even so 2 IOs don't take 1-2s to do, unless the MD RAID5 barrier
>> implementation is really that bad. If you mount -o nobarrier, what
>> happens?
Consider trying btrfs to eliminate the filesystem from the equation.
It should auto-detect that it's on an SSD.
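A minimal sketch of that comparison (this destroys the data on the
chosen array; device and mount point are taken from the earlier output,
and newer btrfs-progs may additionally need mkfs.btrfs -f to overwrite
the existing filesystem):
# btrfs reads the rotational flag and enables its ssd mount option
# automatically on non-rotational devices
umount /data/3
mkfs.btrfs /dev/md22
mount /dev/md22 /data/3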
--
Jon
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-27 0:07 ` Dave Chinner
@ 2012-05-27 0:10 ` joe.landman
2012-05-27 1:49 ` Joe Landman
1 sibling, 0 replies; 24+ messages in thread
From: joe.landman @ 2012-05-27 0:10 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs@oss.sgi.com, linux-raid
Ok. Will do in a few hours
Sent from my iPad
On May 26, 2012, at 8:07 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Sat, May 26, 2012 at 07:25:55PM -0400, Joe Landman wrote:
>> [root@siFlash test]# ls -alF | wc -l
>> 59
>> [root@siFlash test]# /usr/bin/time rm -f *
>> ^C0.00user 8.46system 0:09.55elapsed 88%CPU (0avgtext+0avgdata
>> 2384maxresident)k
>> 25352inputs+0outputs (0major+179minor)pagefaults 0swaps
>
> It's burning an awful lot of CPU time during this remove.
>
>> [root@siFlash test]# ls -alF | wc -l
>> 48
>
> So, 48 files were removed, it was basically CPU bound and one took
> 2.6 seconds.
>
> So, how big are the files, and does the one that took 2.6s have tens
> of thousands of extents ('xfs_bmap -vp *' will dump the extent maps
> for all the files)?
>
> If not, can you use perf to get an idea of the CPU usage profile
> during the rm by doing:
>
> # perf record rm -f *
> .....
>
> and capturing the profile via:
>
> # perf report > profile.txt
>
> And attaching the profile.txt file so we can see where all the CPU
> time is being spent? You can find perf in your kernel source tree
> under the tools subdir....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-27 0:07 ` Dave Chinner
2012-05-27 0:10 ` joe.landman
@ 2012-05-27 1:49 ` Joe Landman
2012-05-27 2:40 ` Eric Sandeen
1 sibling, 1 reply; 24+ messages in thread
From: Joe Landman @ 2012-05-27 1:49 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs, linux-raid
On 05/26/2012 08:07 PM, Dave Chinner wrote:
> On Sat, May 26, 2012 at 07:25:55PM -0400, Joe Landman wrote:
>> [root@siFlash test]# ls -alF | wc -l
>> 59
>> [root@siFlash test]# /usr/bin/time rm -f *
>> ^C0.00user 8.46system 0:09.55elapsed 88%CPU (0avgtext+0avgdata
>> 2384maxresident)k
>> 25352inputs+0outputs (0major+179minor)pagefaults 0swaps
>
> It's burning an awful lot of CPU time during this remove.
>
>> [root@siFlash test]# ls -alF | wc -l
>> 48
>
> So, 48 files were removed, it was basically CPU bound and one took
> 2.6 seconds.
>
> So, how big are the files, and does the one that took 2.6s have tens
> of thousands of extents ('xfs_bmap -vp *' will dump the extent maps
> for all the files)?
Getting some sort of out of memory error with bmap
[root@siFlash test]# ls -alF
total 50466476
drwxr-xr-x 2 root root 4096 May 26 21:40 ./
drwxr-xr-x 3 root root 17 May 26 19:32 ../
-rw------- 1 root root 1073741824 May 26 19:36 2.r.49.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.50.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.51.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.52.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.53.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.54.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.55.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.56.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.57.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.58.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.59.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.60.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.61.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.62.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.63.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.64.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.65.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.66.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.67.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.68.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.69.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.70.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.71.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.72.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.73.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.74.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.75.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.76.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.77.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.78.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.79.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.80.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.81.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.82.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.83.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.84.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.85.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.86.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.87.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.88.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.89.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.90.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.91.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.92.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.93.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.94.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.95.0
-rw------- 1 root root 1073741824 May 26 19:36 2.r.96.0
-rw-r--r-- 1 root root 0 May 26 21:40 x
[root@siFlash test]# ls -alF > x
[root@siFlash test]# xfs_bmap -vp x
x:
EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
0: [0..7]: 212681896..212681903 2 (7555752..7555759) 8 01111
[root@siFlash test]# xfs_bmap -vp 2.r.96.0
xfs_bmap: xfsctl(XFS_IOC_GETBMAPX) iflags=0x4 ["2.r.96.0"]: Cannot
allocate memory
This is xfsprogs 3.1.8 from git (we had the same error with an earlier version).
>
> If not, can you use perf to get an idea of the CPU usage profile
> during the rm by doing:
>
> # perf record rm -f *
> .....
>
> and capturing the profile via:
>
> # perf report > profile.txt
>
> And attaching the profile.txt file so we can see where all the CPU
> time is being spent? You can find perf in your kernel source tree
> under the tools subdir....
>
> Cheers,
>
> Dave.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-27 1:49 ` Joe Landman
@ 2012-05-27 2:40 ` Eric Sandeen
2012-05-27 2:43 ` Eric Sandeen
2012-05-27 7:34 ` Stefan Ring
0 siblings, 2 replies; 24+ messages in thread
From: Eric Sandeen @ 2012-05-27 2:40 UTC (permalink / raw)
To: Joe Landman; +Cc: linux-raid, xfs
On 5/26/12 8:49 PM, Joe Landman wrote:
> On 05/26/2012 08:07 PM, Dave Chinner wrote:
>> On Sat, May 26, 2012 at 07:25:55PM -0400, Joe Landman wrote:
>>> [root@siFlash test]# ls -alF | wc -l
>>> 59
>>> [root@siFlash test]# /usr/bin/time rm -f *
>>> ^C0.00user 8.46system 0:09.55elapsed 88%CPU (0avgtext+0avgdata
>>> 2384maxresident)k
>>> 25352inputs+0outputs (0major+179minor)pagefaults 0swaps
>>
>> It's burning an awful lot of CPU time during this remove.
>>
>>> [root@siFlash test]# ls -alF | wc -l
>>> 48
>>
>> So, 48 files were removed, it was basically CPU bound and one took
>> 2.6 seconds.
>>
>> So, how big are the files, and does the one that took 2.6s have tens
>> of thousands of extents ('xfs_bmap -vp *' will dump the extent maps
>> for all the files)?
>
> Getting some sort of out of memory error with bmap
>
> [root@siFlash test]# ls -alF
> total 50466476
> drwxr-xr-x 2 root root 4096 May 26 21:40 ./
> drwxr-xr-x 3 root root 17 May 26 19:32 ../
> -rw------- 1 root root 1073741824 May 26 19:36 2.r.49.0
...
<snip>
> [root@siFlash test]# ls -alF > x
>
> [root@siFlash test]# xfs_bmap -vp x
> x:
> EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
> 0: [0..7]: 212681896..212681903 2 (7555752..7555759) 8 01111
>
> [root@siFlash test]# xfs_bmap -vp 2.r.96.0
> xfs_bmap: xfsctl(XFS_IOC_GETBMAPX) iflags=0x4 ["2.r.96.0"]: Cannot allocate memory
Try filefrag -v maybe, if your e2fsprogs is new enough.
Trying to remember, ENOMEM in bmap rings a bell... but this is possibly indicative of an extremely fragmented file.
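A minimal sketch of that check (filefrag is part of e2fsprogs and uses
the FIEMAP ioctl rather than XFS_IOC_GETBMAPX, so it should not hit the
same allocation failure):
# extent count plus a per-extent listing for the file xfs_bmap choked on
filefrag -v 2.r.96.0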
-Eric
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-27 2:40 ` Eric Sandeen
@ 2012-05-27 2:43 ` Eric Sandeen
2012-05-27 7:34 ` Stefan Ring
1 sibling, 0 replies; 24+ messages in thread
From: Eric Sandeen @ 2012-05-27 2:43 UTC (permalink / raw)
To: Joe Landman; +Cc: linux-raid, xfs
On 5/26/12 9:40 PM, Eric Sandeen wrote:
> On 5/26/12 8:49 PM, Joe Landman wrote:
>> On 05/26/2012 08:07 PM, Dave Chinner wrote:
>>> On Sat, May 26, 2012 at 07:25:55PM -0400, Joe Landman wrote:
>>>> [root@siFlash test]# ls -alF | wc -l
>>>> 59
>>>> [root@siFlash test]# /usr/bin/time rm -f *
>>>> ^C0.00user 8.46system 0:09.55elapsed 88%CPU (0avgtext+0avgdata
>>>> 2384maxresident)k
>>>> 25352inputs+0outputs (0major+179minor)pagefaults 0swaps
>>>
>>> It's burning an awful lot of CPU time during this remove.
>>>
>>>> [root@siFlash test]# ls -alF | wc -l
>>>> 48
>>>
>>> So, 48 files were removed, it was basically CPU bound and one took
>>> 2.6 seconds.
>>>
>>> So, how big are the files, and does the one that took 2.6s have tens
>>> of thousands of extents ('xfs_bmap -vp *' will dump the extent maps
>>> for all the files)?
>>
>> Getting some sort of out of memory error with bmap
>>
>> [root@siFlash test]# ls -alF
>> total 50466476
>> drwxr-xr-x 2 root root 4096 May 26 21:40 ./
>> drwxr-xr-x 3 root root 17 May 26 19:32 ../
>> -rw------- 1 root root 1073741824 May 26 19:36 2.r.49.0
> ...
>
> <snip>
>
>> [root@siFlash test]# ls -alF > x
>>
>> [root@siFlash test]# xfs_bmap -vp x
>> x:
>> EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL FLAGS
>> 0: [0..7]: 212681896..212681903 2 (7555752..7555759) 8 01111
>>
>> [root@siFlash test]# xfs_bmap -vp 2.r.96.0
>> xfs_bmap: xfsctl(XFS_IOC_GETBMAPX) iflags=0x4 ["2.r.96.0"]: Cannot allocate memory
>
> Try filefrag -v maybe, if your e2fsprogs is new enough.
>
> Trying to remember, ENOMEM in bmap rings a bell... but this is possibly indicative of an extremely fragmented file.
Ah.
f074211f xfs: fallback to vmalloc for large buffers in xfs_getbmap
fixed it in 3.4
-Eric
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-27 2:40 ` Eric Sandeen
2012-05-27 2:43 ` Eric Sandeen
@ 2012-05-27 7:34 ` Stefan Ring
2012-05-27 13:15 ` Krzysztof Adamski
1 sibling, 1 reply; 24+ messages in thread
From: Stefan Ring @ 2012-05-27 7:34 UTC (permalink / raw)
To: Eric Sandeen; +Cc: linux-raid, xfs, Joe Landman
>> [root@siFlash test]# xfs_bmap -vp 2.r.96.0
>> xfs_bmap: xfsctl(XFS_IOC_GETBMAPX) iflags=0x4 ["2.r.96.0"]: Cannot allocate memory
>
> Try filefrag -v maybe, if your e2fsprogs is new enough.
>
> Trying to remember, ENOMEM in bmap rings a bell... but this is possibly indicative of an extremely fragmented file.
True, I've had it happen with extremely fragmented files.
I've started playing around with discard myself only recently, but so
far I like the approach of using fstrim a lot better than the discard
option: http://xfs.org/index.php/FITRIM/discard
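A minimal sketch of that approach (fstrim is part of util-linux; the
idea is to trim free space in batch, e.g. from cron, instead of paying a
per-unlink discard cost):
# trim all free space on one of the SSD arrays, verbosely
fstrim -v /data/2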
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-27 7:34 ` Stefan Ring
@ 2012-05-27 13:15 ` Krzysztof Adamski
2012-05-27 14:59 ` joe.landman
0 siblings, 1 reply; 24+ messages in thread
From: Krzysztof Adamski @ 2012-05-27 13:15 UTC (permalink / raw)
To: Stefan Ring; +Cc: Eric Sandeen, Joe Landman, linux-raid, xfs
On Sun, 2012-05-27 at 09:34 +0200, Stefan Ring wrote:
> >> [root@siFlash test]# xfs_bmap -vp 2.r.96.0
> >> xfs_bmap: xfsctl(XFS_IOC_GETBMAPX) iflags=0x4 ["2.r.96.0"]: Cannot allocate memory
> >
> > Try filefrag -v maybe, if your e2fsprogs is new enough.
> >
> > Trying to remember, ENOMEM in bmap rings a bell... but this is possibly indicative of an extremely fragmented file.
>
> True, I've had it happen with extremely fragmented files.
>
> I've started playing around with discard myself only recently, but so
> far I like the approach of using fstrim a lot better than the discard
> option: http://xfs.org/index.php/FITRIM/discard
Check what blockdev --getra <md device> is set to. I used to see
several-second deletes on large (4GB) fragmented files when ra was at
the default of 256. Once I changed it to 4096 (the best value will
depend on your setup), the deletes became instant.
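For reference, a sketch of that check against the arrays in this thread
(device name taken from the earlier mdstat output; 4096 is Krzysztof's
value, not a tuned one):
# current readahead in 512-byte sectors, then raise it
blockdev --getra /dev/md20
blockdev --setra 4096 /dev/md20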
K
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-27 13:15 ` Krzysztof Adamski
@ 2012-05-27 14:59 ` joe.landman
2012-05-27 16:07 ` Eric Sandeen
0 siblings, 1 reply; 24+ messages in thread
From: joe.landman @ 2012-05-27 14:59 UTC (permalink / raw)
To: Krzysztof Adamski; +Cc: Stefan Ring, Eric Sandeen, linux-raid, xfs@oss.sgi.com
This is going to be a very fragmented file. I am guessing that this is the reason for the long duration delete. I'll do some more measurements before going to 3.4.x as per Eric's note.
Sent from my iPad
On May 27, 2012, at 9:15 AM, Krzysztof Adamski <k@adamski.org> wrote:
> On Sun, 2012-05-27 at 09:34 +0200, Stefan Ring wrote:
>>>> [root@siFlash test]# xfs_bmap -vp 2.r.96.0
>>>> xfs_bmap: xfsctl(XFS_IOC_GETBMAPX) iflags=0x4 ["2.r.96.0"]: Cannot allocate memory
>>>
>>> Try filefrag -v maybe, if your e2fsprogs is new enough.
>>>
>>> Trying to remember, ENOMEM in bmap rings a bell... but this is possibly indicative of an extremely fragmented file.
>>
>> True, I've had it happen with extremely fragmented files.
>>
>> I've started playing around with discard myself only recently, but so
>> far I like the approach of using fstrim a lot better than the discard
>> option: http://xfs.org/index.php/FITRIM/discard
>
> Check what blockdev --getra <md device> is set to. I used to see
> several-second deletes on large (4GB) fragmented files when ra was at
> the default of 256. Once I changed it to 4096 (the best value will
> depend on your setup), the deletes became instant.
>
> K
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-27 14:59 ` joe.landman
@ 2012-05-27 16:07 ` Eric Sandeen
2012-05-27 17:14 ` Joe Landman
2012-05-27 17:17 ` Joe Landman
0 siblings, 2 replies; 24+ messages in thread
From: Eric Sandeen @ 2012-05-27 16:07 UTC (permalink / raw)
To: joe.landman; +Cc: Krzysztof Adamski, Stefan Ring, linux-raid, xfs@oss.sgi.com
On 5/27/12 9:59 AM, joe.landman@gmail.com wrote:
> This is going to be a very fragmented file. I am guessing that this
> is the reason for the long duration delete. I'll do some more
> measurements before going to 3.4.x as per Eric's note.
filefrag -v should also tell you how many fragments, and because it
uses fiemap it probably won't run into the same problems.
But it sounds like we can just assume very high fragmentation.
It's not addressing the exact issue, but why are the files so fragmented?
Are they very hole-y or is it just an issue with how they are written?
Perhaps preallocation would help you here?
-Eric
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-27 16:07 ` Eric Sandeen
@ 2012-05-27 17:14 ` Joe Landman
2012-05-27 17:17 ` Joe Landman
1 sibling, 0 replies; 24+ messages in thread
From: Joe Landman @ 2012-05-27 17:14 UTC (permalink / raw)
To: Eric Sandeen; +Cc: Krzysztof Adamski, Stefan Ring, linux-raid, xfs@oss.sgi.com
On 05/27/2012 12:07 PM, Eric Sandeen wrote:
> On 5/27/12 9:59 AM, joe.landman@gmail.com wrote:
>> This is going to be a very fragmented file. I am guessing that this
>> is the reason for the long duration delete. I'll do some more
>> measurements before going to 3.4.x as per Eric's note.
>
> filefrag -v should also tell you how many fragments, and because it
> uses fiemap it probably won't run into the same problems.
>
> But it sounds like we can just assume very high fragmentation.
>
[root@siFlash test]# filefrag 1.r.48.0
1.r.48.0: 1364 extents found
> It's not addressing the exact issue, but why are the files so fragmented?
> Are they very hole-y or is it just an issue with how they are written?
> Perhaps preallocation would help you here?
Possibly. We are testing the system using fio, and doing random reads
and writes. I'll see if we can do a preallocation scheme
(before/during) for the files.
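A minimal sketch of preallocating a test file before the random-write
phase, so it ends up in one large extent (the file name is just an
example; fio also has a fallocate= job option that can do the same
thing, though treating that as available here is an assumption):
# reserve the full 1 GiB up front; later random writes land inside it
xfs_io -f -c "falloc 0 1g" /data/2/test/newfile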
So to summarize, the delete performance will be (at least in part) a
function of the fragmentation? A directory full of massively fragmented
files will take longer to delete than a directory of files with large,
contiguous extents? And I did some experimentation using xfs_repair, and
it seems to be the case there as well ... the higher the level of
fragmentation, the longer the repair seems to take.
>
> -Eric
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: very slow file deletion on an SSD
2012-05-27 16:07 ` Eric Sandeen
2012-05-27 17:14 ` Joe Landman
@ 2012-05-27 17:17 ` Joe Landman
1 sibling, 0 replies; 24+ messages in thread
From: Joe Landman @ 2012-05-27 17:17 UTC (permalink / raw)
To: Eric Sandeen; +Cc: Krzysztof Adamski, Stefan Ring, linux-raid, xfs@oss.sgi.com
On 05/27/2012 12:07 PM, Eric Sandeen wrote:
> On 5/27/12 9:59 AM, joe.landman@gmail.com wrote:
>> This is going to be a very fragmented file. I am guessing that this
>> is the reason for the long duration delete. I'll do some more
>> measurements before going to 3.4.x as per Eric's note.
>
> filefrag -v should also tell you how many fragments, and because it
> uses fiemap it probably won't run into the same problems.
>
> But it sounds like we can just assume very high fragmentation.
>
> It's not addressing the exact issue, but why are the files so fragmented?
> Are they very hole-y or is it just an issue with how they are written?
> Perhaps preallocation would help you here?
... and one pass with xfs_fsr seems to have "fixed" the problem
[root@siFlash test]# xfs_fsr
xfs_fsr -m /proc/mounts -t 7200 -f /var/tmp/.fsrlast_xfs ...
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
Completed all 10 passes
[root@siFlash test]# filefrag 1.r.48.0
1.r.48.0: 1 extent found
[root@siFlash test]# rm -f 1.r.48.0
(very fast)
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615
^ permalink raw reply [flat|nested] 24+ messages in thread
End of thread (newest message: 2012-05-27 17:17 UTC).
Thread overview: 24+ messages
2012-05-25 10:37 very slow file deletion on an SSD Joe Landman
2012-05-25 10:45 ` Bernd Schubert
2012-05-25 10:49 ` Joe Landman
2012-05-25 14:48 ` Roberto Spadim
2012-05-25 16:57 ` Ben Myers
2012-05-25 16:54 ` Joe Landman
2012-05-25 16:59 ` Christoph Hellwig
2012-05-26 16:00 ` David Brown
2012-05-26 19:56 ` Stan Hoeppner
2012-05-26 23:18 ` Dave Chinner
2012-05-26 23:25 ` Joe Landman
2012-05-27 0:07 ` Dave Chinner
2012-05-27 0:10 ` joe.landman
2012-05-27 1:49 ` Joe Landman
2012-05-27 2:40 ` Eric Sandeen
2012-05-27 2:43 ` Eric Sandeen
2012-05-27 7:34 ` Stefan Ring
2012-05-27 13:15 ` Krzysztof Adamski
2012-05-27 14:59 ` joe.landman
2012-05-27 16:07 ` Eric Sandeen
2012-05-27 17:14 ` Joe Landman
2012-05-27 17:17 ` Joe Landman
2012-05-26 23:55 ` Joe Landman
2012-05-27 0:07 ` Jon Nelson