* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Jeff Garzik @ 2007-03-27  5:59 UTC
To: Justin Piszcz; +Cc: linux-kernel, IDE/ATA development list

Justin Piszcz wrote:
> Without NCQ, performance is MUCH better on almost every operation, with
> the exception of 2-3 items.

Variables to take into account:

* the drive (NCQ performance wildly varies)
* the IO scheduler
* the filesystem (if not measuring direct to blkdev)
* application workload (or in your case, benchmark tool)
* in particular, the threaded-ness of the apps

For the overwhelming majority of combinations, NCQ should not /hurt/
performance.

For the majority of combinations, NCQ helps (though it may not be often
that you use more than 4-8 tags).

In some cases, NCQ firmware may be broken.  There is a Maxtor firmware
id, and some Hitachi ids that people are leaning towards recommending be
added to the libata 'horkage' list.

	Jeff
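As background for the discussion below: whether NCQ was negotiated, and
the depth currently in use, can be checked from userspace.  A minimal
sketch, assuming the drive is sda; the sample dmesg line is illustrative:

dmesg | grep -i ncq
# e.g.  ata1.00: 781422768 sectors, multi 16: LBA48 NCQ (depth 31/32)

cat /sys/block/sda/device/queue_depth   # 1 here means NCQ effectively off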
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Mark Lord @ 2007-03-27 14:26 UTC
To: Jeff Garzik; +Cc: Justin Piszcz, linux-kernel, IDE/ATA development list

Jeff Garzik wrote:
>
> In some cases, NCQ firmware may be broken.  There is a Maxtor firmware
> id, and some Hitachi ids that people are leaning towards recommending be
> added to the libata 'horkage' list.

Western Digital "Raptor" drives (the 10K rpm things) are also somewhat
borked in NCQ mode, depending on the application.

Their firmware turns off all drive readahead during NCQ.  This makes them
very good for an email/news server application, but also causes them to
suck for regular desktop applications.

Because of this, they use special software drivers under MSwin which
detect large sequential accesses, and avoid NCQ during such times.

Cheers
-ml
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Mark Rustad @ 2007-03-27 18:18 UTC
To: Jeff Garzik; +Cc: Justin Piszcz, linux-kernel, IDE/ATA development list

On Mar 27, 2007, at 12:59 AM, Jeff Garzik wrote:

> Variables to take into account:
>
> * the drive (NCQ performance wildly varies)
> * the IO scheduler
> * the filesystem (if not measuring direct to blkdev)
> * application workload (or in your case, benchmark tool)
> * in particular, the threaded-ness of the apps
>
> [...]
>
> In some cases, NCQ firmware may be broken.  There is a Maxtor
> firmware id, and some Hitachi ids that people are leaning towards
> recommending be added to the libata 'horkage' list.

Some other variables that we have noticed:

Some drive firmware goes into "stupid" mode when write cache is turned
off.  Meaning that it does not reorder any queued operations.  Of course
if you really care about your data, you don't really want to turn write
cache on.

Also the controller used can have unfortunate interactions.  For example
the Adaptec SAS controller firmware will never issue more than two
queued commands to a SATA drive (even though the firmware will happily
accept more from the driver), so even if an attached drive is capable of
reordering queued commands, its performance is seriously crippled by not
getting more commands queued up.  In addition, some drive firmware seems
to try to bunch up queued command completions which interacts very badly
with a controller that queues up so few commands.  In this case turning
NCQ off performs better because the drive knows it can't hold off
completions to reduce interrupt load on the host – a good idea gone
totally wrong when used with the Adaptec controller.

Today SATA NCQ seems to be an area where few combinations work well.  It
seems so bad to me that a whitelist might be better than a blacklist.
That is probably overstating it, but NCQ performance is certainly a big
problem.

-- 
Mark Rustad, MRustad@gmail.com
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Jeff Garzik @ 2007-03-27 18:38 UTC
To: Mark Rustad; +Cc: Justin Piszcz, linux-kernel, IDE/ATA development list

Mark Rustad wrote:
> reorder any queued operations. Of course if you really care about your
> data, you don't really want to turn write cache on.

That's a gross exaggeration.  FLUSH CACHE and FUA both ensure data
integrity as well.

Turning write cache off has always been a performance-killing action on
ATA.

> Also the controller used can have unfortunate interactions. For example
> the Adaptec SAS controller firmware will never issue more than two
> queued commands to a SATA drive (even though the firmware will happily
> accept more from the driver), so even if an attached drive is capable of
> reordering queued commands, its performance is seriously crippled by not
> getting more commands queued up. In addition, some drive firmware seems
> to try to bunch up queued command completions which interacts very badly
> with a controller that queues up so few commands. In this case turning
> NCQ off performs better because the drive knows it can't hold off
> completions to reduce interrupt load on the host – a good idea gone
> totally wrong when used with the Adaptec controller.

All of that can be fixed with an Adaptec firmware upgrade, so not our
problem here, and not a reason to disable NCQ in libata core.

> Today SATA NCQ seems to be an area where few combinations work well. It
> seems so bad to me that a whitelist might be better than a blacklist.
> That is probably overstating it, but NCQ performance is certainly a big
> problem.

Real world testing disagrees with you.  NCQ has been enabled for a while
now.  We would have screaming hordes of users if the majority of
configurations were problematic.

	Jeff
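The write cache setting being argued over here can be inspected and
toggled from userspace.  A minimal sketch using hdparm, assuming the
drive is sda:

hdparm -W  /dev/sda    # report current write-cache setting
hdparm -W0 /dev/sda    # disable write cache (no volatile data to lose,
                       # but write performance suffers badly on ATA)
hdparm -W1 /dev/sda    # enable write cache; integrity then relies on
                       # FLUSH CACHE / FUA via filesystem barriers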
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Mark Rustad @ 2007-03-27 22:12 UTC
To: Jeff Garzik; +Cc: Justin Piszcz, linux-kernel, IDE/ATA development list

On Mar 27, 2007, at 1:38 PM, Jeff Garzik wrote:

> Mark Rustad wrote:
>> reorder any queued operations. Of course if you really care about
>> your data, you don't really want to turn write cache on.
>
> That's a gross exaggeration.  FLUSH CACHE and FUA both ensure data
> integrity as well.
>
> Turning write cache off has always been a performance-killing action
> on ATA.

Perhaps.  Folks I work with would disagree with that, but I am not
enough of a storage expert to judge.  My statement mirrors the judgement
of folks I work with that know more than I do.

>> Also the controller used can have unfortunate interactions. For
>> example the Adaptec SAS controller firmware will never issue more
>> than two queued commands to a SATA drive [...]
>
> All of that can be fixed with an Adaptec firmware upgrade, so not our
> problem here, and not a reason to disable NCQ in libata core.

It theoretically could be, but we are using the latest Adaptec firmware.
Until there exists firmware that fixes it, it remains an issue.  We
worked with Adaptec to isolate this issue, but no resolution has been
forthcoming from them.

I agree that this does not mean that NCQ should be disabled in libata
core, but some combination of controller/drive/firmware blacklist may
need to be managed, as distasteful as that is.

>> Today SATA NCQ seems to be an area where few combinations work well.
>> It seems so bad to me that a whitelist might be better than a
>> blacklist. That is probably overstating it, but NCQ performance is
>> certainly a big problem.
>
> Real world testing disagrees with you.  NCQ has been enabled for a
> while now.  We would have screaming hordes of users if the majority
> of configurations were problematic.

I didn't say that it is a majority or that it doesn't work, it just
often doesn't perform.  If it didn't work there would be lots of howling
for sure.  I'm also not saying that it is a libata problem.  It seems
mostly to be controller and drive firmware issues - and the odd fan
issue (if you saw the thread: [BUG 2.6.21-rc3-git9] SATA NCQ failure
with Samsum HD401LJ).

I guess I am mainly lamenting the current state of SATA/NCQ devices and
sharing what little I have picked up about it - which is that I want SAS
disks in my next system!

-- 
Mark Rustad, MRustad@gmail.com
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Ric Wheeler @ 2007-03-31 12:55 UTC
To: Mark Rustad; +Cc: Jeff Garzik, Justin Piszcz, linux-kernel, IDE/ATA development list

Mark Rustad wrote:
> On Mar 27, 2007, at 1:38 PM, Jeff Garzik wrote:
>
>> That's a gross exaggeration.  FLUSH CACHE and FUA both ensure data
>> integrity as well.
>>
>> Turning write cache off has always been a performance-killing action
>> on ATA.
>
> Perhaps. Folks I work with would disagree with that, but I am not
> enough of a storage expert to judge. My statement mirrors the
> judgement of folks I work with that know more than I do.

You can easily demonstrate that disabling write cache on a S-ATA or ATA
drive will drop your large file write performance by 50% - just try
writing 10MB files to disk.

ric
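A minimal sketch of that demonstration, assuming a scratch drive sdb
mounted at /mnt/test; the file count and sizes are arbitrary:

for wc in 0 1; do
    hdparm -W$wc /dev/sdb          # 0 = write cache off, 1 = on
    time sh -c 'for i in $(seq 1 50); do
        dd if=/dev/zero of=/mnt/test/f$i bs=1M count=10 conv=fsync \
           2>/dev/null
    done'
done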
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: linux @ 2007-03-27 16:16 UTC
To: htejun, jeff, jpiszcz, linux-ide, linux-kernel; +Cc: linux
Here's some more data.
6x ST3400832AS (Seagate 7200.8) 400 GB drives.
3x SiI3232 PCIe SATA controllers
2.2 GHz Athlon 64, 1024k cache (3700+), 2 GB RAM
Linux 2.6.20.4, 64-bit kernel
Tested able to sustain reads at 60 MB/sec/drive simultaneously.
RAID-10 is across 6 drives, first part of drive.
RAID-5 most of the drive, so depending on allocation policies,
may be a bit slower.
The test sequence actually was:
1) raid5ncq
2) raid5noncq
3) raid10noncq
4) raid10ncq
5) raid5ncq
6) raid5noncq
but I rearranged things to make it easier to compare.
Note that NCQ makes writes faster (oh... I have write cacheing turned off;
perhaps I should turn it on and do another round), but no-NCQ seems to have
a read advantage. %$%@#$@#ing bonnie++ overflows and won't print file
read times; I haven't bothered to fix that yet.
NCQ seems to have a pretty significant effect on the file operations,
especially deletes.
Update: added
7) wcache5noncq - RAID 5 with no NCQ but write cache enabled
8) wcache5ncq - RAID 5 with NCQ and write cache enabled
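The eight runs combine two independent per-drive toggles: NCQ via the
block layer queue depth (the exact commands appear later in the thread),
and the drive write cache, presumably toggled with hdparm.  A sketch:

# NCQ off / on, all drives
for i in /sys/block/sd?/device/queue_depth; do echo 1  > $i ; done
for i in /sys/block/sd?/device/queue_depth; do echo 31 > $i ; done

# write cache off / on, all drives (device names assumed)
for d in /dev/sd[a-f]; do hdparm -W0 $d ; done
for d in /dev/sd[a-f]; do hdparm -W1 $d ; done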
RAID=5, NCQ
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
raid5ncq 7952M 31688 53 34760 10 25327 4 57908 86 167680 13 292.2 0
raid5ncq 7952M 30357 50 34154 10 24876 4 59692 89 165663 13 285.6 0
raid5noncq 7952M 29015 48 31627 9 24263 4 61154 91 185389 14 286.6 0
raid5noncq 7952M 28447 47 31163 9 23306 4 60456 89 198624 15 293.4 0
wcache5ncq 7952M 32433 54 35413 10 26139 4 59898 89 168032 13 303.6 0
wcache5noncq 7952M 31768 53 34597 10 25849 4 61049 90 193351 14 304.8 0
raid10ncq 7952M 54043 89 110804 32 48859 9 58809 87 142140 12 363.8 0
raid10noncq 7952M 48912 81 68428 21 38906 7 57824 87 146030 12 358.2 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16:100000:16/64 1351 25 +++++ +++ 941 3 2887 42 31526 96 382 1
16:100000:16/64 1400 18 +++++ +++ 386 1 4959 69 32118 95 570 2
16:100000:16/64 636 8 +++++ +++ 176 0 1649 23 +++++ +++ 245 1
16:100000:16/64 715 12 +++++ +++ 164 0 156 2 11023 32 2161 8
16:100000:16/64 1291 26 +++++ +++ 2778 10 2424 33 31127 93 483 2
16:100000:16/64 1236 26 +++++ +++ 840 3 2519 37 30366 91 445 2
16:100000:16/64 1714 37 +++++ +++ 1652 6 789 11 4700 14 12264 48
16:100000:16/64 634 11 +++++ +++ 1035 3 338 4 +++++ +++ 1349 5
raid5ncq,7952M,31688,53,34760,10,25327,4,57908,86,167680,13,292.2,0,16:100000:16/64,1351,25,+++++,+++,941,3,2887,42,31526,96,382,1
raid5ncq,7952M,30357,50,34154,10,24876,4,59692,89,165663,13,285.6,0,16:100000:16/64,1400,18,+++++,+++,386,1,4959,69,32118,95,570,2
raid5noncq,7952M,29015,48,31627,9,24263,4,61154,91,185389,14,286.6,0,16:100000:16/64,636,8,+++++,+++,176,0,1649,23,+++++,+++,245,1
raid5noncq,7952M,28447,47,31163,9,23306,4,60456,89,198624,15,293.4,0,16:100000:16/64,715,12,+++++,+++,164,0,156,2,11023,32,2161,8
wcache5ncq,7952M,32433,54,35413,10,26139,4,59898,89,168032,13,303.6,0,16:100000:16/64,1291,26,+++++,+++,2778,10,2424,33,31127,93,483,2
wcache5noncq,7952M,31768,53,34597,10,25849,4,61049,90,193351,14,304.8,0,16:100000:16/64,1236,26,+++++,+++,840,3,2519,37,30366,91,445,2
raid10ncq,7952M,54043,89,110804,32,48859,9,58809,87,142140,12,363.8,0,16:100000:16/64,1714,37,+++++,+++,1652,6,789,11,4700,14,12264,48
raid10noncq,7952M,48912,81,68428,21,38906,7,57824,87,146030,12,358.2,0,16:100000:16/64,634,11,+++++,+++,1035,3,338,4,+++++,+++,1349,5
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Justin Piszcz @ 2007-03-27 16:25 UTC
To: linux; +Cc: htejun, jeff, linux-ide, linux-kernel

On Tue, 27 Mar 2007, linux@horizon.com wrote:

> Here's some more data.
>
> [... test setup and bonnie++ results quoted in full; snipped, see the
> message above ...]

I would try with write-caching enabled.

Also, the RAID5/RAID10 you mention seems like each volume is on part of
the platter, a strange setup you got there :)

Also you are disabling NCQ on/off via the /sys/block device, e.g.,
setting it to 1 (off) and 31 (on) during testing, yes?

Justin.
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: linux @ 2007-03-27 16:41 UTC
To: jpiszcz, linux; +Cc: htejun, jeff, linux-ide, linux-kernel

On Tue, 27 Mar 2007, Justin Piszcz wrote:

> [... full quote of the benchmark message snipped ...]
>
> I would try with write-caching enabled.

I did.  See the "wcache5" lines?

> Also, the RAID5/RAID10 you mention seems like each volume is on part of
> the platter, a strange setup you got there :)

I don't quite understand.  "Each volume is on part of the platter" -
yes, it's called partitioning, and it's pretty common.

Basically, the first 50G of each drive is assembled with RAID-10 to make
a 150G "system" file system, where I appreciate the speed and greater
redundancy of RAID-10, and the last 250G are combined with RAID-5 to
make a 1.75 TB RAID-5 "data" file system.

> Also you are disabling NCQ on/off via the /sys/block device, e.g.,
> setting it to 1 (off) and 31 (on) during testing, yes?

Yes, it's

for i in /sys/block/sd?/device/queue_depth; do echo 1 > $i ; done
for i in /sys/block/sd?/device/queue_depth; do echo 31 > $i ; done
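To confirm the setting took effect on every drive, a one-liner sketch
(prints filename:value per device):

grep . /sys/block/sd?/device/queue_depth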
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Justin Piszcz @ 2007-03-27 16:44 UTC
To: linux; +Cc: htejun, jeff, linux-ide, linux-kernel

On Tue, 27 Mar 2007, linux@horizon.com wrote:

> [... earlier quotes snipped ...]
>
> Basically, the first 50G of each drive is assembled with RAID-10 to
> make a 150G "system" file system, where I appreciate the speed and
> greater redundancy of RAID-10, and the last 250G are combined with
> RAID-5 to make a 1.75 TB RAID-5 "data" file system.

I meant you do not allocate the entire disk per raidset, which may alter
performance numbers.

04:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)

I assume you mean 3132 right?  I also have 6 seagates, I'd need to run
one of these tests on them as well, also you took the micro jumper off
the Seagate 400s in the back as well right?

Justin.
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: linux @ 2007-03-27 16:58 UTC
To: jpiszcz, linux; +Cc: htejun, jeff, linux-ide, linux-kernel

> I meant you do not allocate the entire disk per raidset, which may
> alter performance numbers.

No, that would be silly.  It does lower the average performance of the
large RAID-5 area, but I don't know how ext3fs is allocating the blocks
anyway, so

> 04:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
> I assume you mean 3132 right?

Yes; did I mistype?

02:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
03:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
04:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)

> I also have 6 seagates, I'd need to run one
> of these tests on them as well, also you took the micro jumper off the
> Seagate 400s in the back as well right?

Um... no, I don't remember doing anything like that.  What micro jumper?
It's been a while, but I just double-checked the drive manual and it
doesn't mention any jumpers.
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Justin Piszcz @ 2007-03-27 17:03 UTC
To: linux; +Cc: htejun, jeff, linux-ide, linux-kernel

On Tue, 27 Mar 2007, linux@horizon.com wrote:

> [... lspci output snipped ...]
>
> Um... no, I don't remember doing anything like that. What micro jumper?
> It's been a while, but I just double-checked the drive manual and
> it doesn't mention any jumpers.

The 7200.8's don't use a jumper except for "factory use" - the 7200.9s
and 10s I believe have a jumper in the back to enable/disable 3.0Gbps
operation.  Your model # corresponds with a 7200.8, so nevermind about
the jumper.

Justin.
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Phillip Susi @ 2007-03-28 14:42 UTC
To: Justin Piszcz; +Cc: linux, htejun, jeff, linux-ide, linux-kernel

Justin Piszcz wrote:
> I would try with write-caching enabled.
> Also, the RAID5/RAID10 you mention seems like each volume is on part of
> the platter, a strange setup you got there :)

Shouldn't NCQ only help write performance if write caching is
_disabled_?  Since write cache essentially is just non tagged command
queuing?
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Jeff Garzik @ 2007-03-28 14:48 UTC
To: Phillip Susi; +Cc: Justin Piszcz, linux, htejun, linux-ide, linux-kernel

Phillip Susi wrote:
> Justin Piszcz wrote:
>> I would try with write-caching enabled.
>> Also, the RAID5/RAID10 you mention seems like each volume is on part
>> of the platter, a strange setup you got there :)
>
> Shouldn't NCQ only help write performance if write caching is
> _disabled_?  Since write cache essentially is just non tagged command
> queuing?

NCQ provides for a more asynchronous flow.  It helps greatly with reads
(of which most are, by nature, synchronous at the app level) from
multiple threads or apps.  It helps with writes, even with write cache
on, by allowing multiple commands to be submitted and/or retired at the
same time.

	Jeff
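The multi-reader case is easy to probe.  A minimal sketch under bash,
assuming the drive is sdb and picking two widely separated offsets:

for depth in 1 31; do
    echo $depth > /sys/block/sdb/device/queue_depth
    echo 3 > /proc/sys/vm/drop_caches    # start with a cold page cache
    time ( dd if=/dev/sdb of=/dev/null bs=1M count=512 skip=0 &
           dd if=/dev/sdb of=/dev/null bs=1M count=512 skip=65536 &
           wait )
done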
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Andy Warner @ 2007-03-28 15:22 UTC
To: linux-ide

On 3/28/07, Jeff Garzik <jeff@garzik.org> wrote:
> [...]
> NCQ provides for a more asynchronous flow. It helps greatly with reads
> (of which most are, by nature, synchronous at the app level) from
> multiple threads or apps. It helps with writes, even with write cache
> on, by allowing multiple commands to be submitted and/or retired at the
> same time.

Since people are looking under this rock, more black/not-whitelist
fodder:

Seagate ST3500641 (500G NL35.2 SATA) with firmware revisions earlier
than 3.AEQ and ST3750640 (750G Barracuda ES SATA) with firmware earlier
than 3.AEE exhibit pathological NCQ write behaviour at queue depths <= 2
when write cache is disabled.  Writes will take 3 seconds to complete.
That's not a typo, I really do mean 0.3 IOPS.  As long as you can keep
the write queue depth above 2, you'll be OK.

For the record, I'm not slamming SATA or NCQ, just pointing out
something to be aware of.

-- 
Andy
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Phillip Susi @ 2007-03-29 17:28 UTC
To: Jeff Garzik; +Cc: Justin Piszcz, linux, htejun, linux-ide, linux-kernel

Jeff Garzik wrote:
> NCQ provides for a more asynchronous flow.  It helps greatly with reads
> (of which most are, by nature, synchronous at the app level) from
> multiple threads or apps.  It helps with writes, even with write cache
> on, by allowing multiple commands to be submitted and/or retired at the
> same time.

But when writing, what is the difference between queuing multiple tagged
writes, and sending down multiple untagged cached writes that complete
immediately and actually hit the disk later?  Either way the host keeps
sending writes to the disk until its buffers are full, and the disk is
constantly trying to commit those buffers to the media in the most
optimal order.
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: linux @ 2007-03-29 18:40 UTC
To: jeff, psusi; +Cc: htejun, jpiszcz, linux-ide, linux-kernel, linux

> But when writing, what is the difference between queuing multiple
> tagged writes, and sending down multiple untagged cached writes that
> complete immediately and actually hit the disk later?  Either way the
> host keeps sending writes to the disk until its buffers are full, and
> the disk is constantly trying to commit those buffers to the media in
> the most optimal order.

Well, theoretically it allows more buffering, without hurting read
cacheing.

With NCQ, the drive gets the command, and then tells the host when it
wants the corresponding data.  It can ask for the data in any order it
likes, when it's decided which write will be serviced next.  So it
doesn't have to fill up its RAM with the write data.  This leaves more
RAM free for things like read-ahead.

Another trick, that I know SCSI can do and I expect NCQ can do, is that
the drive can ask for the data for a single write in different orders.
This is particularly useful for reads, where a drive asked for blocks
100-199 can deliver blocks 150-199 first, then 100-149 when the drive
spins around.

This is, unfortunately, kind of theoretical.  I don't actually know how
hard drive cacheing algorithms work, but I assume it's mostly a
readahead cache.  The host has much more RAM than the drive, so any
block that it's read won't be requested again for a long time.  So the
drive doesn't want to keep that in cache.  But any sectors that the
drive happens to read nearby requested sectors are worth keeping.

I'm not sure it's a big deal, as 32 (tags) x 128K (largest LBA28 write
size) is 4M, only half of a typical drive's cache RAM.  But it's
possible that there's some difference.
* Re: Why is NCQ enabled by default by libata? (2.6.20)

From: Jeff Garzik @ 2007-03-29 18:51 UTC
To: Phillip Susi; +Cc: Justin Piszcz, linux, htejun, linux-ide, linux-kernel

Phillip Susi wrote:
> Jeff Garzik wrote:
>> NCQ provides for a more asynchronous flow.  It helps greatly with
>> reads (of which most are, by nature, synchronous at the app level)
>> from multiple threads or apps.  It helps with writes, even with write
>> cache on, by allowing multiple commands to be submitted and/or retired
>> at the same time.
>
> But when writing, what is the difference between queuing multiple
> tagged writes, and sending down multiple untagged cached writes that
> complete immediately and actually hit the disk later?  Either way the
> host keeps sending writes to the disk until its buffers are full, and
> the disk is constantly trying to commit those buffers to the media in
> the most optimal order.

Less overhead to starting commands, and all the other benefits of making
operations fully async.

	Jeff
* Re: Why is NCQ enabled by default by libata? (2.6.20) 2007-03-29 17:28 ` Phillip Susi 2007-03-29 18:40 ` linux 2007-03-29 18:51 ` Jeff Garzik @ 2007-03-29 21:35 ` Alan Cox 2 siblings, 0 replies; 19+ messages in thread From: Alan Cox @ 2007-03-29 21:35 UTC (permalink / raw) To: Phillip Susi Cc: Jeff Garzik, Justin Piszcz, linux, htejun, linux-ide, linux-kernel O> writes, and sending down multiple untagged cached writes that complete > immediately and actually hit the disk later? Either way the host keeps > sending writes to the disk until it's buffers are full, and the disk is > constantly trying to commit those buffers to the media in the most > optimal order. On the controller side primarily you get to queue commands which means you don't have a dead time period between the completion interrupt and the next command being issued. Those times add up even when there is a disk cache buffering the output ^ permalink raw reply [flat|nested] 19+ messages in thread