* howto keep xfs directory searches fast for a long time
@ 2012-08-12  9:14 Michael Monnerie
  2012-08-12 19:05 ` Peter Grandi
  2012-08-12 19:35 ` Stan Hoeppner
  0 siblings, 2 replies; 9+ messages in thread
From: Michael Monnerie @ 2012-08-12 9:14 UTC (permalink / raw)
To: xfs-oss

[-- Attachment #1.1: Type: text/plain, Size: 1653 bytes --]

I need a VMware VM that has 8TB storage. As I can create at most a 2TB disk, I need to add 4 disks and use LVM to concatenate them. All is on top of a RAID5 or RAID6 store.

The workload will be storage of mostly large media files (5TB mkv video + 1TB mp3), plus backup of normal documents (1TB of .odt, .doc, .pdf etc.). The server should be able to find files quickly; transfer speed is not important. There won't be many deletes of media files, mostly uploads and searching for files. Only when it grows full will old files be removed. But normal documents will be rsynced regularly (it is used as a backup destination).

I will set vm.vfs_cache_pressure = 10; this at least helps keep inodes cached once they have been read.

- What is the best setup to get high speed on directory searches? find, ls, du, etc. should be quick.
- Should I use inode64 or not?
- If that's an 8 disk RAID-6, should I mkfs.xfs with 6*4 AGs? Or what would be a good start, or wouldn't it matter at all? And as it'll be mostly big media files, should I use sunit/swidth set to 64KB/6*64KB? Does that make sense?

I'm asking because I had such a VM setup once, and while it was fairly quick in the beginning, over time it felt much slower at traversing directories, very seek-bound. That XFS was only 80% full, so it shouldn't have had a fragmentation problem. And I know of nothing to fix that apart from backup/restore, so maybe there's something to prevent it?

--
mit freundlichen Grüssen,
Michael Monnerie, Ing. BSc

it-management Internet Services: Protéger
http://proteger.at [gesprochen: Prot-e-schee]
Tel: +43 660 / 415 6531

[-- Attachment #1.2: This is a digitally signed message part.
--] [-- Type: application/pgp-signature, Size: 198 bytes --] [-- Attachment #2: Type: text/plain, Size: 121 bytes --] _______________________________________________ xfs mailing list xfs@oss.sgi.com http://oss.sgi.com/mailman/listinfo/xfs ^ permalink raw reply [flat|nested] 9+ messages in thread
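The sunit/swidth geometry Michael asks about can be sketched numerically. This is a minimal sketch assuming an 8-disk RAID6 (6 data-bearing spindles) and a 64KiB chunk; `su`/`sw` are real `mkfs.xfs -d` options, but the device path in the commented mkfs line is a placeholder:

```shell
# Per-disk chunk (su) and number of data disks (sw) for an 8-disk RAID6.
CHUNK_KB=64
DATA_DISKS=6   # 8 disks minus 2 disks' worth of parity
STRIPE_KB=$((CHUNK_KB * DATA_DISKS))
echo "su=${CHUNK_KB}k sw=${DATA_DISKS} -> full stripe ${STRIPE_KB}KiB"
# mkfs.xfs -d su=${CHUNK_KB}k,sw=${DATA_DISKS} /dev/mapper/bigvol   # needs a real device, not run here
```

So 64KB/6*64KB corresponds to a 384KiB full stripe, which is what later replies in the thread reason about.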
* Re: howto keep xfs directory searches fast for a long time
  2012-08-12  9:14 howto keep xfs directory searches fast for a long time Michael Monnerie
@ 2012-08-12 19:05 ` Peter Grandi
  2012-08-12 19:35 ` Stan Hoeppner
  1 sibling, 0 replies; 9+ messages in thread
From: Peter Grandi @ 2012-08-12 19:05 UTC (permalink / raw)
To: Linux fs XFS

> I need a VMware VM that has 8TB storage. As I can at max
> create a 2TB disk, I need to add 4 disks, and use lvm to
> concat these. All is on top of a RAID5 or RAID6 store.

Ah, the usual goal of a single large storage pool for cheap.

> The workload will be storage of mostly large media files (5TB
> mkv Video + 1TB mp3), plus backup of normal documents (1TB
> .odt,.doc,.pdf etc).

Probably 2-3MB per MP3, thus 300-500k MP3s, and 1-2MB per document. For videos it is hard to guess, but picking an arbitrary number of 100MB per video it would be around 50,000 files. Overall 1 million files, still within plausibility.

> The server should be able to find files quickly, transfer speed
> is not important. There won't be many deletes to media files,
> mostly uploads and searching for files. Only when it grows
> full, old files will be removed. But normal documents will be
> rsynced (used as backup destination) regularly.

> I will set vm.vfs_cache_pressure = 10, this helps at least
> keeping inodes cached when they were read once.

That may be a workaround (see below) in your specific case to the default answer to this question:

> - What is the best setup to get high speed on directory
> searches? Find, ls, du, etc. should be quick.

None. If you are thinking of inode-accessing searches, it just won't work fast on large filetrees over long stroking. Note: in principle 'find' and 'ls' won't access inodes, as they could be just about names, but most uses of 'find' and 'ls' do access inode fields. 'du' obviously does, and so does 'rsync', which you intend to use for backups.
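Peter's back-of-envelope file counts can be reproduced with a line of arithmetic (sizes in MB; the 100MB-per-video figure is his own arbitrary assumption):

```shell
# File-count estimate: 1TB of MP3s at 2-3MB each, 5TB of video at an
# assumed 100MB each. 1TB is taken as 1e6 MB.
awk 'BEGIN {
    mp3_lo = 1e6 / 3;  mp3_hi = 1e6 / 2   # 1TB of 2-3MB MP3s
    video  = 5e6 / 100                    # 5TB of 100MB videos
    printf "mp3: %d-%d files, video: %d files\n", mp3_lo, mp3_hi, video
}'
```

That yields roughly 333k-500k MP3s and 50k videos, consistent with the "overall 1 million files" estimate above once documents are added.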
Especially as most filesystems, including XFS (in at least some versions and configurations), aim to keep metadata (directories, inodes) close to file data rather than to each other, because that is what typical workloads are supposed to require. Perhaps you could change the intended storage layer to favour clustering of metadata, however difficult it is to get filesystems to go against the grain of their usual design.

> - Should I use inode64 or not?

That's a very difficult question, as 'inode64' has two different effects:

* Allows inodes to be stored beyond the first 1TiB (with 512B sectors) of the filetree space.
* Distributes directories across AGs, and attempts to put *data* in the same AG as the directory they are linked from.

http://www.spinics.net/lists/xfs/msg11429.html
http://www.spinics.net/lists/xfs/msg11455.html

In your case perhaps it is best not to distribute directories across AGs, and to keep all inodes in the first 1TiB. But it is a very difficult tradeoff, as you may run out of space for inodes in the first 1TiB even if you don't have that many inodes.

> - If that's an 8 disk RAID-6, should I mkfs.xfs with 6*4 AGs?
> Or what would be a good start, or wouldn't it matter at all?

Difficult to say ahead of time. RAID6 can be a very bad choice for metadata intensive accesses, but only for updating the metadata, and it seems that there won't be a lot of that in your case.

> And as it'll be mostly big media files, should I use
> sunit/swidth set to 64KB/6*64KB, does that make sense?

Whatever the size of files, 'su' should be the size of contiguous data on each member blockdevice, and 'sw' the number of data-bearing members, so that su*sw matches the RMW block of the blockdevice containing the filesystem. What is a difficult question is the best '--chunksize' for the RAID set, and that depends a lot on how multithreaded and random the workload is.
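The inode64 tradeoff described above comes down to a mount-time choice. A sketch, with placeholder device and mountpoint; `inode64` is a real XFS mount option, and omitting it (the default at the time of this thread) keeps all inodes in the low blocks:

```shell
# Opt in to 64-bit inode numbers: inodes may live anywhere on the
# device, and directories are spread across allocation groups.
#   mount -o inode64 /dev/mapper/bigvol /disks/big1
#
# Default (no option): inode numbers stay 32-bit, which confines all
# inodes to roughly the first 1TiB of the device.
#   mount /dev/mapper/bigvol /disks/big1
```

Note the option only affects newly allocated inodes; existing inodes keep their numbers, which is one reason the tradeoff is hard to reverse later.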
> I'm asking because I had such a VM setup once, and while it
> was fairly quick in the beginning, over time it felt much
> slower on traversing directories, very seek bound.

The definition of a database is something like "a set of data whose working set cannot be cached in memory". If you want to store a database, consider using a DBMS. But perhaps your (meta)data set can be cached in memory, see below.

It may be worthwhile to consider a few large directories, as in XFS they are implemented as fairly decent trees for random access; but large directories don't work so well for linear scans (inode enumeration issues), especially with apps that are not careful.

Also, depending on the filesystem used and its parameters, things get slower with time the more space is used in a partition, because most filesystems tend to allocate in clumps, starting with the low-address blocks on the outer tracks, thus implicitly short stroking the block device at the beginning.

> That xfs was only 80% filled, so shouldn't have had a
> fragmentation problem.

Perhaps 80% is not enough for fragmentation of file contents, but it can be a big issue for keeping metadata together.

> And I know nothing to fix that apart from backup/restore, so
> maybe there's something to prevent that?

No. Even backup/restore may not be good enough once the filetree block device has filled up and accesses often need long strokes. Filesystems are designed for "average" performance on "average" workloads more than peak performance on custom workloads, no matter the commitment to denial of so many posters to this list. In your case you are trying to bend a filesystem aimed at high parallel throughput over large sequential streams into doing low-latency access to widely scattered small metadata...

Given your requirements it might be better for you to have a filesystem that clusters all metadata together and far away from the data it describes, as your 1M inodes might take all together around 1GiB of space.
Or you could implement a pre-service phase where all inodes are scanned at system startup (I think it would be best with 'du'), and then ensure that they rarely get written back to storage (which by default XFS rarely does, as in effect it defaults to 'relatime').

For example, on my laptop I have two filetrees with around 700,000 inodes, and with 4GiB of RAM, when I 'rsync' either of them for backups, further passes cause almost no disk IO, because that many inodes do get cached. These are some lines from 'slabtop' after such an 'rsync':

    OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
  665193  665193 100%    0.94K  39129       17    626064K xfs_inode
  601377  601377 100%    0.19K  28637       21    114548K dentry

This is cheating, because it uses the in-memory inode and dentry caches as a DBMS, but in your case you might get away with cheating. Setting 'vm/vfs_cache_pressure=0' might even be a sensible option, as the number of inodes in your situation has an upper bound which is likely to be below the maximum RAM you can give to your server.

Finally, I am rather perplexed when a VM and SAN are used in a situation where performance matters, and in particular where low-latency disk and network access is important. VMs perform well for CPU-bound loads, not so well for network loads, even less for IO loads, and less still when latency matters more than throughput.
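The slabtop figures above can be turned into a back-of-envelope RAM budget: 665193 xfs_inode objects at 0.94KiB each plus 601377 dentries at 0.19KiB each.

```shell
# Approximate RAM held by the cached metadata in Peter's slabtop output.
awk 'BEGIN {
    inode_kib  = 665193 * 0.94   # xfs_inode slab, KiB
    dentry_kib = 601377 * 0.19   # dentry slab, KiB
    printf "cached metadata ~= %.0f MiB\n", (inode_kib + dentry_kib) / 1024
}'
```

About 0.7GiB for ~700k inodes, which is why the "1M inodes might take around 1GiB" estimate above is plausible and why pinning the caches is feasible on a server with a few GiB of RAM.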
* Re: howto keep xfs directory searches fast for a long time
  2012-08-12  9:14 howto keep xfs directory searches fast for a long time Michael Monnerie
  2012-08-12 19:05 ` Peter Grandi
@ 2012-08-12 19:35 ` Stan Hoeppner
  2012-08-13 16:44 ` Michael Monnerie
  1 sibling, 1 reply; 9+ messages in thread
From: Stan Hoeppner @ 2012-08-12 19:35 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs-oss

On 8/12/2012 4:14 AM, Michael Monnerie wrote:
> I need a VMware VM that has 8TB storage. As I can at max create a 2TB
> disk, I need to add 4 disks, and use lvm to concat these. All is on top
> of a RAID5 or RAID6 store.

So the problem here is max vmdk size? Just use an RDM. IIRC there's no size restriction on RDMs. And using an RDM will avoid any alignment issues that you may well get when sticking XFS atop LVM atop a thin disk file atop VMFS atop parity RAID. With RDM you get XFS directly atop the storage LUN. This can be achieved with either an FC/iSCSI SAN LUN or with a LUN exported/exposed from a hardware RAID controller.

> The workload will be storage of mostly large media files (5TB mkv Video
> + 1TB mp3), plus backup of normal documents (1TB .odt,.doc,.pdf etc).
> The server should be able to find files quickly, transfer speed is not
> important. There won't be many deletes to media files, mostly uploads
> and searching for files. Only when it grows full, old files will be
> removed. But normal documents will be rsynced (used as backup
> destination) regularly.
> I will set vm.vfs_cache_pressure = 10, this helps at least keeping
> inodes cached when they were read once.
>
> - What is the best setup to get high speed on directory searches? Find,
> ls, du, etc. should be quick.

How many directory entries are we talking about? Directory searching is seek latency sensitive, so the spindle speed of the disks and the read-ahead cache of the controller may well play as large a role as, or larger than, XFS parameters.

> - Should I use inode64 or not?

Given your mixed large media and normal "office" file rsync workloads, it's difficult to predict. I would think inode64 would slow down searching a bit due to extra seek latency accessing directory trees. This is a VM environment, thus this guest and its XFS filesystem will be competing for seeks with other VMs/workloads. So anything that decreases head seeks in XFS is a good thing.

> - If that's an 8 disk RAID-6, should I mkfs.xfs with 6*4 AGs? Or what
> would be a good start, or wouldn't it matter at all?

Thus I'd think the fewer AGs the better, as in as few as you can get away with, especially if most of this VM's workload is large media files.

> And as it'll be mostly big media files, should I use sunit/swidth set to
> 64KB/6*64KB, does that make sense?

If you can use an RDM on your existing storage array, match su/sw to what's there. If you can't and must add 4 disks, simply attach them to your RAID controller and create a new RAID5 array. Given large media files, I'd probably use a strip of 256KB, times 3 spindles = 768KB stripe. But this will depend on your RAID controller. Strip size may be irrelevant to a degree with some BBWC controllers.

> I'm asking because I had such a VM setup once, and while it was fairly
> quick in the beginning, over time it felt much slower on traversing
> directories, very seek bound.

This suggests directory fragmentation.

> That xfs was only 80% filled, so shouldn't
> have had a fragmentation problem. And I know nothing to fix that apart
> from backup/restore, so maybe there's something to prevent that?

The files may not have been badly fragmented, but even at only 80% full, if the FS got over 90% full and/or saw many deletes over its lifespan, you could have had a decent amount of both directory and free space fragmentation. Depends on how it aged.
-- Stan
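Stan's RAID5 stripe arithmetic above, spelled out: a 4-disk RAID5 leaves 3 data-bearing spindles, and a 256KiB per-disk strip gives the quoted full-stripe width.

```shell
# 4-disk RAID5: one disk's worth of parity, 3 data spindles.
STRIP_KB=256
DATA_SPINDLES=3
STRIPE_KB=$((STRIP_KB * DATA_SPINDLES))
echo "full stripe = ${STRIPE_KB}KB (su=${STRIP_KB}k, sw=${DATA_SPINDLES})"
```

The same su/sw pair would then be handed to mkfs.xfs if the array were built that way.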
* Re: howto keep xfs directory searches fast for a long time
  2012-08-12 19:35 ` Stan Hoeppner
@ 2012-08-13 16:44 ` Michael Monnerie
  2012-08-13 21:20 ` Stan Hoeppner
  2012-08-13 23:56 ` Dave Chinner
  0 siblings, 2 replies; 9+ messages in thread
From: Michael Monnerie @ 2012-08-13 16:44 UTC (permalink / raw)
To: xfs, stan, Peter Grandi

[-- Attachment #1.1: Type: text/plain, Size: 2253 bytes --]

First, thanks to both of you.

On Sunday, 12 August 2012, 14:35:27, Stan Hoeppner wrote:
> So the problem here is max vmdk size? Just use an RDM.

That would have been an option before someone created the VMDK space over the full RAID ;-)

> Peter Grandi:
> Ah the usual goal of a single large storage pool for cheap.

I don't need O_PONIES or 5,000 IOPS. I've just been trying to figure out whether there's anything I can do to "optimize" a given VM and storage space via xfs formatting. This, I guess, is what 95% of admins worldwide have to do these days: generic, virtualized environments with given storage, and the customer wants X. Where X is sometimes a DB, sometimes a file store, sometimes an archive store. And the customer expects endless IOPS, sub-zero delay, and endless disk space. I tend to destroy their ponies quickly, but that doesn't mean you can't try to keep systems quick.

That particular VM is not important, but I want to keep user satisfaction at a quality level. About 10 times a week someone connects to that machine, searches for a file and downloads it over the Internet. So download or read speed doesn't matter much, but access/find times do. I guess the best I can do is run du/find every morning to pre-fill the inode caches on that VM, so when someone connects the search runs fast.
The current VM shows this:

# df -i /disks/big1/
Filesystem              Inodes   IUsed      IFree IUse% Mounted on
/dev/mapper/sp1--sha 1717934464 1255882 1716678582    1% /disks/big1
# df /disks/big1/
Filesystem            1K-blocks       Used  Available Use% Mounted on
/dev/mapper/sp1--sha 8587585536 6004421384 2583164152  70% /disks/big1

So 6TB of data in 1.3 million inodes. The VM caches that easily; it seems that's the only real thing to optimize against.

http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E

CFQ seems bad, but there's no documented way out of that. I've edited that page and added a short vm.vfs_cache_pressure description. Could someone please recheck?

--
mit freundlichen Grüssen,
Michael Monnerie, Ing. BSc

it-management Internet Services: Protéger
http://proteger.at [gesprochen: Prot-e-schee]
Tel: +43 660 / 415 6531
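The "6TB in 1.3 million inodes" summary follows directly from the df output: used 1K-blocks divided by used inodes gives the average file size.

```shell
# Average file size implied by the df figures quoted above.
USED_KB=6004421384
INODES=1255882
awk -v u="$USED_KB" -v n="$INODES" \
    'BEGIN { printf "avg file size ~= %.1f MiB\n", u / n / 1024 }'
```

Roughly 4.7MiB per file on average, i.e. the big media files dominate the byte count while the small documents dominate the inode count.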
* Re: howto keep xfs directory searches fast for a long time
  2012-08-13 16:44 ` Michael Monnerie
@ 2012-08-13 21:20 ` Stan Hoeppner
  2012-08-13 23:56 ` Dave Chinner
  1 sibling, 0 replies; 9+ messages in thread
From: Stan Hoeppner @ 2012-08-13 21:20 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs

On 8/13/2012 11:44 AM, Michael Monnerie wrote:
> http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E
>
> CFQ seems bad, but there's no documented way out of that. I've edited
> that, and added a short vm.vfs_cache_pressure description. Please
> someone recheck.

The XFS FAQ isn't an appropriate place for that tuning suggestion, because it doesn't tune XFS. vm.vfs_cache_pressure tunes the kernel's reclaim of the dentry and inode caches and is filesystem agnostic. Thus you should remove that entry from the FAQ. Perhaps a blog entry would be more appropriate, or a post to LKML, so it gets archived. However, this is likely already posted somewhere, as you can't be the first to run into this issue.

-- Stan
* Re: howto keep xfs directory searches fast for a long time
  2012-08-13 16:44 ` Michael Monnerie
  2012-08-13 21:20 ` Stan Hoeppner
@ 2012-08-13 23:56 ` Dave Chinner
  2012-08-14  9:16 ` Michael Monnerie
  1 sibling, 1 reply; 9+ messages in thread
From: Dave Chinner @ 2012-08-13 23:56 UTC (permalink / raw)
To: Michael Monnerie; +Cc: stan, xfs

On Mon, Aug 13, 2012 at 06:44:34PM +0200, Michael Monnerie wrote:
> I guess the best I can do is run du/find every morning to pre-fill the
> inode caches on that VM, so when someone connects the search runs fast.

You're doing it wrong. This is exactly what updatedb (run via cron) and locate (the updatedb search tool) are designed for.

$ time locate xfs_admin
.....
real    0m0.618s
user    0m0.608s
sys     0m0.004s
$
$ time find / -name xfs_admin*
....
real    0m18.045s
user    0m2.936s
sys     0m9.293s
$ time find / -name xfs_admin* > /dev/null 2>&1
real    0m1.794s
user    0m0.688s
sys     0m1.068s
$ time find / -name xfs_admin* > /dev/null 2>&1
real    0m1.752s
user    0m0.724s
sys     0m0.984s
$ time find / -name xfs_admin* > /dev/null 2>&1
real    0m1.768s
user    0m0.732s
sys     0m0.996s

locate is 3x faster than even a cached find on a filesystem with a million inodes in it and enough RAM to cache them all. And if you have limited RAM (i.e. a cold cache) it is 30x faster than running the find on a RAID0 of SSDs that can do > 90,000 random 4k read IOPS. The differences for spinning rust will be much, much greater...

Use the right tool for the job....

> The current VM shows this:
>
> # df -i /disks/big1/
> Filesystem              Inodes   IUsed      IFree IUse% Mounted on
> /dev/mapper/sp1--sha 1717934464 1255882 1716678582    1% /disks/big1
> # df /disks/big1/
> Filesystem            1K-blocks       Used  Available Use% Mounted on
> /dev/mapper/sp1--sha 8587585536 6004421384 2583164152  70% /disks/big1
>
> So 6TB data in 1.3 mio inodes. The VM caches that easily, seems that's
> the only real thing to optimize against.
> > http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E

I appreciate the sentiment, but as the person who wrote the original entry and directs the most people to it, I'm probably just going to remove this addition. The whole point of that FAQ is to say "unless you *know* you have a problem, don't touch anything". What you've added essentially says "if you *think* you might need <something>, then do...". This train of thought is *exactly* what the FAQ entry is advising *against*, because in most cases what people -think- they need is incorrect or misguided.

You've even provided a great example to emphasise the point the entry is making - you need to understand the workload before tweaking knobs. Your workload is occasional fast searches of files, which is exactly what updatedb/locate provides without kernel or filesystem tweaks...

> CFQ seems bad, but there's no documented way out of that.

If you know what CFQ is, then you know how to change it. If you don't know what it is, then you don't know enough to make an enlightened choice of a replacement. If you lack knowledge of basic storage concepts and setup, then you're reading the wrong document. Google is only a browser tab away.

BTW, if you want to add "how to tune XFS" entries, then create a completely new wiki page about it that first points out in big, red, shiny letters that the above FAQ entry should be read first, and that questions about optimising XFS for bonnie++ and other benchmarks will be directed to /dev/null. Structure it to provide information about the basics - alignment, striping, data layout, metadata layout, RAID configurations, etc. - and how the different XFS mkfs and mount options interact with the different storage configurations. If we are going to provide tuning guidelines on the XFS wiki, then they need to be structured, comprehensive and correct.
If you want random bits of marginally valid information about tuning XFS from random websites around the web, Google is only a browser tab away....

Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
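Dave's updatedb/locate recommendation could be wired up roughly as follows. This is a hypothetical crontab fragment: the schedule, database path, and tree path are invented for illustration, while `updatedb -U`/`-o` and `locate -d` are standard mlocate options.

```shell
# Nightly index of the media tree with mlocate, kept in its own database.
#
# /etc/cron.d/media-index (illustrative):
#   15 5 * * *  root  updatedb -U /disks/big1 -o /var/lib/mlocate/big1.db
#
# A search then reads the database instead of walking the filesystem:
#   locate -d /var/lib/mlocate/big1.db -i '*.mkv'
```

As a side effect, the nightly updatedb walk also warms the inode and dentry caches, which is the "pre-fill" effect Michael was after.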
* Re: howto keep xfs directory searches fast for a long time
  2012-08-13 23:56 ` Dave Chinner
@ 2012-08-14  9:16 ` Michael Monnerie
  2012-08-14 16:59 ` Stan Hoeppner
  0 siblings, 1 reply; 9+ messages in thread
From: Michael Monnerie @ 2012-08-14 9:16 UTC (permalink / raw)
To: xfs; +Cc: stan

[-- Attachment #1.1: Type: text/plain, Size: 601 bytes --]

On Tuesday, 14 August 2012, 09:56:23, Dave Chinner wrote:
> [locate]
> Use the right tool for the job....

That tool just isn't available for the people accessing the files - they are (should I say "of course"?) accessing from a box like Windows or with a media player, either way nothing that's anywhere near a command line.

> [tuning FAQ]

OK, I got the point. Sorry for the extra work I created for you.

--
mit freundlichen Grüssen,
Michael Monnerie, Ing. BSc

it-management Internet Services: Protéger
http://proteger.at [gesprochen: Prot-e-schee]
Tel: +43 660 / 415 6531
* Re: howto keep xfs directory searches fast for a long time
  2012-08-14  9:16 ` Michael Monnerie
@ 2012-08-14 16:59 ` Stan Hoeppner
  2012-08-15  8:59 ` Michael Monnerie
  0 siblings, 1 reply; 9+ messages in thread
From: Stan Hoeppner @ 2012-08-14 16:59 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs

On 8/14/2012 4:16 AM, Michael Monnerie wrote:
> On Tuesday, 14 August 2012, 09:56:23, Dave Chinner wrote:
>> [locate]
>> Use the right tool for the job....
>
> That tool just isn't available for people accessing the files - they are
> (should I say "of course"?) accessing from a box like Windows or with a
> Media Player, either way nothing that's anywhere near a command line.

All the media players have playlist and index features, so there's little need for searching an entire Samba share, is there? Maybe you need to further explain exactly how users interact with these thousands of media files from Windows/etc clients. Surely there is a freeware Linux program to index such media files into a database and present them in a sorted web interface, or a web interface that does the 'searching'.

What you apparently require is not something that can be addressed or optimized by filesystem or kernel tweaks. As Dave pointed out with the 'locate' example in a CLI, this kind of thing is precisely what databases were designed for.

-- Stan
* Re: howto keep xfs directory searches fast for a long time
  2012-08-14 16:59 ` Stan Hoeppner
@ 2012-08-15  8:59 ` Michael Monnerie
  0 siblings, 0 replies; 9+ messages in thread
From: Michael Monnerie @ 2012-08-15 8:59 UTC (permalink / raw)
To: stan; +Cc: xfs

[-- Attachment #1.1: Type: text/plain, Size: 1390 bytes --]

On Tuesday, 14 August 2012, 11:59:30, Stan Hoeppner wrote:
> All the media players have playlist and index features. So there's
> little need for searching an entire Samba share is there?

Media players usually connect to a Samba/NFS share and read its contents. Some even search for index JPGs and present them for a sub-directory, so you have a preview. All that is directory access.

> What you apparently require is not something that can be addressed or
> optimized by filesystem or kernel tweaks. As Dave pointed out with
> the 'locate' example in a CLI, this kind of thing is precisely what
> databases were designed for.

Yes, I use locate on that server of course; when I have shell access that's just fine. The "best effort" I can do now is to run the find job for locate every morning, so inodes get pre-cached, and to tune vm.vfs_cache_pressure = 10 to keep them in the cache. This works rather well. I just tried a "du -s /bigdir" again; it runs in a very short time (<4s) even 4h after the locate run, which is good enough. The customer will be happy. On the old server this took >3m, on the same hardware. A speedup of at least 45x is what I call efficient tuning ;-)

--
mit freundlichen Grüssen,
Michael Monnerie, Ing. BSc

it-management Internet Services: Protéger
http://proteger.at [gesprochen: Prot-e-schee]
Tel: +43 660 / 415 6531
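The speedup Michael quotes at the end of the thread is simple arithmetic: over 3 minutes cold on the old server versus under 4 seconds with a warm inode cache.

```shell
# Lower bound on the quoted speedup: 3 minutes vs 4 seconds.
awk 'BEGIN { printf "speedup >= %.0fx\n", (3 * 60) / 4 }'
```

Hence "at least 45x": the true factor is higher, since the old run took more than 3 minutes and the new one less than 4 seconds.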