* 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
@ 2004-02-01 21:34 Philip Martin
2004-02-01 23:11 ` Andrew Morton
2004-02-03 3:46 ` Andrew Morton
0 siblings, 2 replies; 32+ messages in thread
From: Philip Martin @ 2004-02-01 21:34 UTC (permalink / raw)
To: linux-kernel
The machine is a dual P3 450MHz, 512MB, aic7xxx, 2 disk RAID-0 and
ReiserFS. It's a few years old and has always run Linux, most
recently 2.4.24. I decided to try 2.6.1 and the performance is
disappointing.
My test is a software build of about 200 source files (written in C)
that I usually build using "nice make -j4". Timing the build on
2.4.24 I typically get something like
242.27user 81.06system 2:44.18elapsed 196%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (1742270major+1942279minor)pagefaults 0swaps
and on 2.6.1 I get
244.08user 116.33system 3:27.40elapsed 173%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+3763670minor)pagefaults 0swaps
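For context, the numbers above are GNU time's default output format; a sketch of how such a measurement is taken (assuming GNU time is installed as /usr/bin/time and the project's own Makefile is present):

```shell
# Time the parallel build; GNU time prints user/system/elapsed and the
# major/minor pagefault counters to stderr when the command finishes
/usr/bin/time nice make -j4
```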
The results are repeatable. The user CPU is about the same for the
two kernels, but on 2.6.1 the elapsed time is much greater, as is the
system CPU. I note a big difference in the pagefaults between 2.4 and
2.6 but I don't know what to make of it.
Comparing /proc/scsi/aic7xxx/0 before and after the build I see
another difference: the "Commands Queued" counts for the RAID disks are
much greater for 2.6 than for 2.4:
              disk0    disk2
2.4  before:   8459     4766
     after:   13798     7351
2.6  before:  21287     8555
     after:   40491    15995
(The root partition is also on disk0 and that's not part of the RAID
array; I guess that's why disk0 has higher numbers than disk2.)
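The snapshots can be taken mechanically; a sketch, assuming the aic7xxx driver's /proc layout on this system, where each attached device's block contains a "Commands Queued" line:

```shell
# Record per-device queued-command counters before and after the build
grep 'Commands Queued' /proc/scsi/aic7xxx/0 > queued.before
nice make -j4
grep 'Commands Queued' /proc/scsi/aic7xxx/0 > queued.after
diff queued.before queued.after
```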
The machine has another disk that is not part of the RAID array; it's
a slower disk, but I think the build is CPU bound anyway. I put an
ext2 filesystem on this extra disk, and then used that for my trial
build with the rest of the system, gcc, as, ld, etc. still coming from
RAID array. On 2.4 the time for the ext2 build is essentially the
same as for the RAID build; the difference is within the normal
variation between builds. On 2.6 the ext2 build takes
244.43user 111.75system 3:16.42elapsed 181%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+3757151minor)pagefaults 0swaps
Although the CPU used is about the same as for the RAID build, the
elapsed time is less, so there is some improvement, but it is still
worse than 2.4.24.
--
Philip Martin
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-01 21:34 Philip Martin
@ 2004-02-01 23:11 ` Andrew Morton
2004-02-01 23:42 ` Philip Martin
2004-02-01 23:52 ` Nick Piggin
2004-02-03 3:46 ` Andrew Morton
1 sibling, 2 replies; 32+ messages in thread
From: Andrew Morton @ 2004-02-01 23:11 UTC (permalink / raw)
To: Philip Martin; +Cc: linux-kernel
Philip Martin <philip@codematters.co.uk> wrote:
>
> The machine is a dual P3 450MHz, 512MB, aic7xxx, 2 disk RAID-0 and
> ReiserFS. It's a few years old and has always run Linux, most
> recently 2.4.24. I decided to try 2.6.1 and the performance is
> disappointing.
2.6 has a few performance problems under heavy pageout at present. Nick
Piggin has some patches which largely fix them up.
> My test is a software build of about 200 source files (written in C)
> that I usually build using "nice make -j4".
Tried -j3?
> Timing the build on
> 2.4.24 I typically get something like
>
> 242.27user 81.06system 2:44.18elapsed 196%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (1742270major+1942279minor)pagefaults 0swaps
>
> and on 2.6.1 I get
>
> 244.08user 116.33system 3:27.40elapsed 173%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (0major+3763670minor)pagefaults 0swaps
hm, the major fault accounting is wrong.
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-01 23:11 ` Andrew Morton
@ 2004-02-01 23:42 ` Philip Martin
2004-02-01 23:52 ` Nick Piggin
1 sibling, 0 replies; 32+ messages in thread
From: Philip Martin @ 2004-02-01 23:42 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel
Andrew Morton <akpm@osdl.org> writes:
> Philip Martin <philip@codematters.co.uk> wrote:
>>
>> My test is a software build of about 200 source files (written in C)
>> that I usually build using "nice make -j4".
>
> Tried -j3?
I've tried -j2 and -j3, the results are much the same as -j4.
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-01 23:11 ` Andrew Morton
2004-02-01 23:42 ` Philip Martin
@ 2004-02-01 23:52 ` Nick Piggin
2004-02-02 0:51 ` Philip Martin
1 sibling, 1 reply; 32+ messages in thread
From: Nick Piggin @ 2004-02-01 23:52 UTC (permalink / raw)
To: Andrew Morton; +Cc: Philip Martin, linux-kernel
Andrew Morton wrote:
>Philip Martin <philip@codematters.co.uk> wrote:
>
>>The machine is a dual P3 450MHz, 512MB, aic7xxx, 2 disk RAID-0 and
>> ReiserFS. It's a few years old and has always run Linux, most
>> recently 2.4.24. I decided to try 2.6.1 and the performance is
>> disappointing.
>>
>
>2.6 has a few performance problems under heavy pageout at present. Nick
>Piggin has some patches which largely fix them up.
>
>
>> My test is a software build of about 200 source files (written in C)
>> that I usually build using "nice make -j4".
>>
>
>Tried -j3?
>
>
It's got 512MB RAM though, so it's not swapping, is it?
Philip, can you please send about 30 seconds of vmstat 1
output for 2.4 and 2.6 while the test is running. Thanks
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-01 23:52 ` Nick Piggin
@ 2004-02-02 0:51 ` Philip Martin
2004-02-02 5:15 ` Nick Piggin
0 siblings, 1 reply; 32+ messages in thread
From: Philip Martin @ 2004-02-02 0:51 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-kernel
Nick Piggin <piggin@cyberone.com.au> writes:
> It's got 512MB RAM though, so it's not swapping, is it?
No, it's not swapping.
> Philip, can you please send about 30 seconds of vmstat 1
> output for 2.4 and 2.6 while the test is running. Thanks
OK. I rebooted, logged in, shut down the network, ran find to fill the
memory, then did make clean, make -j4, make clean, make -j4. The
vmstat numbers are for the middle of the second make -j4. I'm using
Debian's procps 3.1.15-1.
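As an aside, giving vmstat a sample count makes the capture stop on its own; a sketch of the collection step (the log file name is arbitrary):

```shell
# In another terminal, while the second make -j4 runs:
# 30 samples at 1-second intervals, shown and saved as they arrive
vmstat 1 30 | tee vmstat-$(uname -r).log
```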
2.4.24
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 2 0 13848 95012 304080 0 0 0 976 263 811 84 16 0 0
4 0 0 14276 95584 304328 0 0 0 2092 290 765 83 17 0 0
7 0 0 20808 95584 303924 0 0 0 0 110 722 79 21 0 0
5 0 0 15064 95584 303972 0 0 0 0 102 773 77 23 0 0
5 0 0 19516 95584 304520 0 0 0 0 102 422 89 11 0 0
4 0 0 20560 95584 304212 0 0 0 0 102 1044 63 37 0 0
4 1 0 17092 95880 304504 0 0 0 584 119 448 88 12 0 0
6 0 0 22740 96028 304448 0 0 0 1020 234 1005 74 26 0 0
5 0 0 10672 96028 304472 0 0 0 0 102 685 78 22 0 0
4 0 0 22124 96028 305068 0 0 0 0 102 557 85 15 0 0
4 0 0 16696 96028 304712 0 0 0 0 102 1048 67 33 0 0
5 0 0 21732 96028 305436 0 0 0 0 102 270 90 10 0 0
4 1 0 21056 96356 304960 0 0 0 644 178 1346 47 52 1 0
4 0 0 8916 96676 305196 0 0 0 1520 263 325 90 6 4 0
5 0 0 19404 96676 305924 0 0 0 0 102 505 86 14 0 0
5 0 0 16624 96676 305260 0 0 0 0 102 1081 65 35 0 0
3 0 0 8732 96676 305380 0 0 0 0 102 280 91 9 0 0
4 0 0 14080 96676 305556 0 0 0 0 102 747 76 24 0 0
5 1 0 14948 97016 305788 0 0 0 668 178 542 79 18 3 0
4 0 0 13820 97124 305732 0 0 0 1020 188 1028 67 33 0 0
5 0 0 16344 97128 306208 0 0 0 0 102 433 87 13 0 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
4 1 0 11028 97216 306388 0 0 432 84 174 1091 68 32 0 0
6 0 0 15816 97268 306748 0 0 36 56 114 556 81 19 0 0
5 0 0 13524 97268 306636 0 0 0 0 102 983 66 34 0 0
4 1 0 16996 97732 306828 0 0 0 952 226 472 84 16 0 0
4 0 0 14232 97752 306880 0 0 0 960 159 1194 64 36 0 0
5 0 0 15704 97752 307216 0 0 0 0 102 370 84 16 0 0
5 0 0 15548 97752 307120 0 0 0 0 102 1166 66 34 0 0
4 0 0 7284 97752 307224 0 0 0 0 102 324 91 9 0 0
7 0 0 11872 97752 307396 0 0 0 0 102 563 85 15 0 0
4 1 0 12860 98388 307940 0 0 0 1504 290 815 77 23 0 0
4 0 0 7532 98628 307580 0 0 0 1324 223 846 79 21 0 0
4 0 0 11536 98628 305912 0 0 0 0 102 374 89 11 0 0
6 0 0 12508 98628 305760 0 0 0 0 102 825 78 22 0 0
5 0 0 12700 98628 306060 0 0 0 0 102 459 87 13 0 0
4 0 0 11972 98628 306020 0 0 0 0 102 789 74 26 0 0
4 1 0 14388 98924 306120 0 0 0 584 166 690 80 20 0 0
2 3 0 9956 99528 305528 0 0 0 1344 287 788 77 23 0 0
7 0 0 14744 99608 305256 0 0 0 976 154 842 75 24 1 0
4 0 0 4988 99608 303244 0 0 0 0 102 460 86 14 0 0
4 0 0 20264 99608 303664 0 0 0 0 102 917 75 25 0 0
4 0 0 12940 99608 303544 0 0 0 0 102 645 80 20 0 0
2.6.1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
5 0 0 25528 241032 44500 0 0 0 0 1020 1315 63 37 0 0
4 0 0 28528 241032 44840 0 0 0 0 1002 533 87 14 0 0
4 0 0 34352 241032 44908 0 0 0 0 1003 804 81 19 0 0
4 0 0 25064 241032 44908 0 0 0 0 1003 1312 66 34 0 0
1 4 0 26024 241032 44976 0 0 0 92 1007 685 84 15 0 1
4 2 0 18152 241032 45248 0 0 0 1364 1186 800 79 19 0 2
6 2 0 29288 241092 45392 0 0 0 1088 1158 769 86 14 0 0
5 1 0 31208 241200 45352 0 0 0 928 1138 1702 43 40 2 15
4 1 0 26728 241200 45488 0 0 0 1388 1182 1148 63 29 0 9
4 1 0 23784 241236 45520 0 0 0 1092 1158 823 82 15 0 2
8 1 0 30568 241296 45664 0 0 0 988 1145 1561 58 33 1 9
4 3 0 28008 241316 45780 0 0 0 1140 1164 1543 55 36 1 9
4 1 0 26280 241336 45964 0 0 0 1360 1185 680 72 14 0 13
6 1 0 32744 241416 45884 0 0 0 896 1136 1061 72 21 2 7
4 0 0 24872 241416 45884 0 0 0 1548 1064 1459 57 38 2 4
4 0 0 27176 241416 46156 0 0 0 0 1002 905 78 22 0 0
6 0 0 31784 241416 46224 0 0 0 0 1002 1423 63 38 0 0
4 0 0 24360 241416 46428 0 0 0 0 1003 735 81 19 0 0
5 0 0 29032 241416 46428 0 0 0 0 1003 1083 73 27 0 0
1 4 0 25640 241416 46428 0 0 0 1128 1126 1344 62 37 0 2
4 1 0 21480 241416 46496 0 0 0 864 1140 822 78 17 1 4
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
6 2 0 27304 241456 46660 0 0 0 1044 1152 898 80 17 0 2
3 2 0 28392 241476 46776 0 0 0 1120 1129 1569 55 37 3 6
4 1 0 26920 241476 46980 0 0 0 1016 1179 1050 75 22 0 3
5 2 0 25576 241512 48372 0 0 1176 884 1130 767 81 15 0 3
2 4 0 26920 241616 50716 0 0 2308 696 1144 1063 66 25 0 8
2 3 0 20456 241628 50500 0 0 0 1116 1154 1488 58 35 1 6
5 2 0 24616 241648 50888 0 0 0 1256 1181 840 80 19 0 0
4 0 0 25704 241668 50936 0 0 0 2108 1168 1562 62 34 1 4
5 0 0 20392 241672 50864 0 0 0 0 1030 673 81 19 0 0
4 0 0 22184 241672 51204 0 0 0 0 1003 983 75 25 0 0
7 0 0 22248 241672 51136 0 0 0 0 1003 1493 60 40 0 0
5 0 0 22632 241672 51204 0 0 0 0 1003 1001 73 27 0 0
4 2 0 22632 241672 51476 0 0 0 576 1040 1232 67 34 0 0
4 2 0 19872 241672 51340 0 0 0 1272 1186 1143 72 28 0 0
4 1 0 22112 241672 51476 0 0 0 988 1160 1206 67 27 0 6
3 3 0 21408 241692 51524 0 0 0 1364 1182 1563 53 40 1 7
8 1 0 21024 241692 51728 0 0 0 1336 1164 1043 74 24 0 2
5 1 0 19296 241712 51776 0 0 0 1080 1170 1331 65 29 1 5
2 1 0 16224 241728 51760 0 0 0 1140 1177 1036 78 22 0 0
4 1 0 20704 241728 52168 0 0 0 1076 1159 798 83 16 1 0
5 1 0 20320 241748 52080 0 0 0 1440 1160 1469 63 36 0 1
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-02 0:51 ` Philip Martin
@ 2004-02-02 5:15 ` Nick Piggin
2004-02-02 8:58 ` Nick Piggin
2004-02-02 18:08 ` Philip Martin
0 siblings, 2 replies; 32+ messages in thread
From: Nick Piggin @ 2004-02-02 5:15 UTC (permalink / raw)
To: Philip Martin; +Cc: Andrew Morton, linux-kernel
Philip Martin wrote:
>Nick Piggin <piggin@cyberone.com.au> writes:
>
>
>>It's got 512MB RAM though, so it's not swapping, is it?
>>
>
>No, it's not swapping.
>
>
>>Philip, can you please send about 30 seconds of vmstat 1
>>output for 2.4 and 2.6 while the test is running. Thanks
>>
>
>OK. I rebooted, logged in, shut down the network, ran find to fill the
>memory, then did make clean, make -j4, make clean, make -j4. The
>vmstat numbers are for the middle of the second make -j4. I'm using
>Debian's procps 3.1.15-1.
>
>
>2.4.24
>
>procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 2 2 0 13848 95012 304080 0 0 0 976 263 811 84 16 0 0
> 4 0 0 14276 95584 304328 0 0 0 2092 290 765 83 17 0 0
> 7 0 0 20808 95584 303924 0 0 0 0 110 722 79 21 0 0
> 5 0 0 15064 95584 303972 0 0 0 0 102 773 77 23 0 0
> 5 0 0 19516 95584 304520 0 0 0 0 102 422 89 11 0 0
> 4 0 0 20560 95584 304212 0 0 0 0 102 1044 63 37 0 0
> 4 1 0 17092 95880 304504 0 0 0 584 119 448 88 12 0 0
> 6 0 0 22740 96028 304448 0 0 0 1020 234 1005 74 26 0 0
> 5 0 0 10672 96028 304472 0 0 0 0 102 685 78 22 0 0
> 4 0 0 22124 96028 305068 0 0 0 0 102 557 85 15 0 0
> 4 0 0 16696 96028 304712 0 0 0 0 102 1048 67 33 0 0
> 5 0 0 21732 96028 305436 0 0 0 0 102 270 90 10 0 0
> 4 1 0 21056 96356 304960 0 0 0 644 178 1346 47 52 1 0
> 4 0 0 8916 96676 305196 0 0 0 1520 263 325 90 6 4 0
> 5 0 0 19404 96676 305924 0 0 0 0 102 505 86 14 0 0
> 5 0 0 16624 96676 305260 0 0 0 0 102 1081 65 35 0 0
> 3 0 0 8732 96676 305380 0 0 0 0 102 280 91 9 0 0
> 4 0 0 14080 96676 305556 0 0 0 0 102 747 76 24 0 0
> 5 1 0 14948 97016 305788 0 0 0 668 178 542 79 18 3 0
> 4 0 0 13820 97124 305732 0 0 0 1020 188 1028 67 33 0 0
> 5 0 0 16344 97128 306208 0 0 0 0 102 433 87 13 0 0
>procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 4 1 0 11028 97216 306388 0 0 432 84 174 1091 68 32 0 0
> 6 0 0 15816 97268 306748 0 0 36 56 114 556 81 19 0 0
> 5 0 0 13524 97268 306636 0 0 0 0 102 983 66 34 0 0
> 4 1 0 16996 97732 306828 0 0 0 952 226 472 84 16 0 0
> 4 0 0 14232 97752 306880 0 0 0 960 159 1194 64 36 0 0
> 5 0 0 15704 97752 307216 0 0 0 0 102 370 84 16 0 0
> 5 0 0 15548 97752 307120 0 0 0 0 102 1166 66 34 0 0
> 4 0 0 7284 97752 307224 0 0 0 0 102 324 91 9 0 0
> 7 0 0 11872 97752 307396 0 0 0 0 102 563 85 15 0 0
> 4 1 0 12860 98388 307940 0 0 0 1504 290 815 77 23 0 0
> 4 0 0 7532 98628 307580 0 0 0 1324 223 846 79 21 0 0
> 4 0 0 11536 98628 305912 0 0 0 0 102 374 89 11 0 0
> 6 0 0 12508 98628 305760 0 0 0 0 102 825 78 22 0 0
> 5 0 0 12700 98628 306060 0 0 0 0 102 459 87 13 0 0
> 4 0 0 11972 98628 306020 0 0 0 0 102 789 74 26 0 0
> 4 1 0 14388 98924 306120 0 0 0 584 166 690 80 20 0 0
> 2 3 0 9956 99528 305528 0 0 0 1344 287 788 77 23 0 0
> 7 0 0 14744 99608 305256 0 0 0 976 154 842 75 24 1 0
> 4 0 0 4988 99608 303244 0 0 0 0 102 460 86 14 0 0
> 4 0 0 20264 99608 303664 0 0 0 0 102 917 75 25 0 0
> 4 0 0 12940 99608 303544 0 0 0 0 102 645 80 20 0 0
>
>2.6.1
>
>procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 5 0 0 25528 241032 44500 0 0 0 0 1020 1315 63 37 0 0
> 4 0 0 28528 241032 44840 0 0 0 0 1002 533 87 14 0 0
> 4 0 0 34352 241032 44908 0 0 0 0 1003 804 81 19 0 0
> 4 0 0 25064 241032 44908 0 0 0 0 1003 1312 66 34 0 0
> 1 4 0 26024 241032 44976 0 0 0 92 1007 685 84 15 0 1
> 4 2 0 18152 241032 45248 0 0 0 1364 1186 800 79 19 0 2
> 6 2 0 29288 241092 45392 0 0 0 1088 1158 769 86 14 0 0
> 5 1 0 31208 241200 45352 0 0 0 928 1138 1702 43 40 2 15
> 4 1 0 26728 241200 45488 0 0 0 1388 1182 1148 63 29 0 9
> 4 1 0 23784 241236 45520 0 0 0 1092 1158 823 82 15 0 2
> 8 1 0 30568 241296 45664 0 0 0 988 1145 1561 58 33 1 9
> 4 3 0 28008 241316 45780 0 0 0 1140 1164 1543 55 36 1 9
> 4 1 0 26280 241336 45964 0 0 0 1360 1185 680 72 14 0 13
> 6 1 0 32744 241416 45884 0 0 0 896 1136 1061 72 21 2 7
> 4 0 0 24872 241416 45884 0 0 0 1548 1064 1459 57 38 2 4
> 4 0 0 27176 241416 46156 0 0 0 0 1002 905 78 22 0 0
> 6 0 0 31784 241416 46224 0 0 0 0 1002 1423 63 38 0 0
> 4 0 0 24360 241416 46428 0 0 0 0 1003 735 81 19 0 0
> 5 0 0 29032 241416 46428 0 0 0 0 1003 1083 73 27 0 0
> 1 4 0 25640 241416 46428 0 0 0 1128 1126 1344 62 37 0 2
> 4 1 0 21480 241416 46496 0 0 0 864 1140 822 78 17 1 4
>procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 6 2 0 27304 241456 46660 0 0 0 1044 1152 898 80 17 0 2
> 3 2 0 28392 241476 46776 0 0 0 1120 1129 1569 55 37 3 6
> 4 1 0 26920 241476 46980 0 0 0 1016 1179 1050 75 22 0 3
> 5 2 0 25576 241512 48372 0 0 1176 884 1130 767 81 15 0 3
> 2 4 0 26920 241616 50716 0 0 2308 696 1144 1063 66 25 0 8
> 2 3 0 20456 241628 50500 0 0 0 1116 1154 1488 58 35 1 6
> 5 2 0 24616 241648 50888 0 0 0 1256 1181 840 80 19 0 0
> 4 0 0 25704 241668 50936 0 0 0 2108 1168 1562 62 34 1 4
> 5 0 0 20392 241672 50864 0 0 0 0 1030 673 81 19 0 0
> 4 0 0 22184 241672 51204 0 0 0 0 1003 983 75 25 0 0
> 7 0 0 22248 241672 51136 0 0 0 0 1003 1493 60 40 0 0
> 5 0 0 22632 241672 51204 0 0 0 0 1003 1001 73 27 0 0
> 4 2 0 22632 241672 51476 0 0 0 576 1040 1232 67 34 0 0
> 4 2 0 19872 241672 51340 0 0 0 1272 1186 1143 72 28 0 0
> 4 1 0 22112 241672 51476 0 0 0 988 1160 1206 67 27 0 6
> 3 3 0 21408 241692 51524 0 0 0 1364 1182 1563 53 40 1 7
> 8 1 0 21024 241692 51728 0 0 0 1336 1164 1043 74 24 0 2
> 5 1 0 19296 241712 51776 0 0 0 1080 1170 1331 65 29 1 5
> 2 1 0 16224 241728 51760 0 0 0 1140 1177 1036 78 22 0 0
> 4 1 0 20704 241728 52168 0 0 0 1076 1159 798 83 16 1 0
> 5 1 0 20320 241748 52080 0 0 0 1440 1160 1469 63 36 0 1
>
>
That's weird. It looks like 2.6 is being stalled on writeout.
Are you running all local filesystems? You said a non-RAID ext2
filesystem performed similarly?
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-02 5:15 ` Nick Piggin
@ 2004-02-02 8:58 ` Nick Piggin
2004-02-02 18:36 ` Philip Martin
2004-02-02 18:08 ` Philip Martin
1 sibling, 1 reply; 32+ messages in thread
From: Nick Piggin @ 2004-02-02 8:58 UTC (permalink / raw)
To: Philip Martin; +Cc: Andrew Morton, linux-kernel
Nick Piggin wrote:
>
>
> Philip Martin wrote:
>
>> Nick Piggin <piggin@cyberone.com.au> writes:
>>
>>
>>> It's got 512MB RAM though, so it's not swapping, is it?
>>>
>>
>> No, it's not swapping.
>>
>>
>>> Philip, can you please send about 30 seconds of vmstat 1
>>> output for 2.4 and 2.6 while the test is running. Thanks
>>>
>>
>> OK. I rebooted, logged in, shut down the network, ran find to fill the
>> memory, then did make clean, make -j4, make clean, make -j4. The
>> vmstat numbers are for the middle of the second make -j4. I'm using
>> Debian's procps 3.1.15-1.
>>
>>
>> 2.4.24
>>
>> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>> r b swpd free buff cache si so bi bo in cs us sy id wa
>> 2 2 0 13848 95012 304080 0 0 0 976 263 811 84 16 0 0
>
snip
>> 2.6.1
>>
>> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>> r b swpd free buff cache si so bi bo in cs us sy id wa
>> 5 0 0 25528 241032 44500 0 0 0 0 1020 1315 63 37 0 0
>>
snip
Another thing I just saw: you've got quite a lot of memory in
buffers, which might be a sign of something going wrong.
When the build finishes and there is no other activity, can you
try applying anonymous memory pressure until it starts swapping
to see if everything gets reclaimed properly?
Was each kernel freshly booted and without background activity
before each compile?
Thanks
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-02 5:15 ` Nick Piggin
2004-02-02 8:58 ` Nick Piggin
@ 2004-02-02 18:08 ` Philip Martin
1 sibling, 0 replies; 32+ messages in thread
From: Philip Martin @ 2004-02-02 18:08 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-kernel
Nick Piggin <piggin@cyberone.com.au> writes:
> That's weird. It looks like 2.6 is being stalled on writeout.
> Are you running all local filesystems?
Yes.
> You said a non-RAID ext2 filesystem performed similarly?
Yes, the ext2 build was a little faster in terms of elapsed time, but
it used the same amount of CPU as the RAID/ReiserFS build.
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-02 8:58 ` Nick Piggin
@ 2004-02-02 18:36 ` Philip Martin
2004-02-02 23:36 ` Nick Piggin
0 siblings, 1 reply; 32+ messages in thread
From: Philip Martin @ 2004-02-02 18:36 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-kernel
Nick Piggin <piggin@cyberone.com.au> writes:
> Another thing I just saw: you've got quite a lot of memory in
> buffers, which might be a sign of something going wrong.
>
> When the build finishes and there is no other activity, can you
> try applying anonymous memory pressure until it starts swapping
> to see if everything gets reclaimed properly?
How do I apply anonymous memory pressure?
> Was each kernel freshly booted and without background activity
> before each compile?
Each kernel was freshly booted. There were a number of daemons
running, and I was running X, but these don't appear to use much
memory or CPU and the network was disconnected. Just after a boot
there is lots of free memory, but in normal operation the machine uses
its memory, so to make it more like normal I ran "find | grep" before
doing the build. Then I ran make clean, make, make clean, make and
took numbers for the second make.
You can have the numbers straight after a boot as well. In this case
I rebooted, logged in, ran make clean and make -j4.
I can hear disk activity on this machine. During a 2.4.24 build the
activity happens in short bursts a few seconds apart. During a 2.6.1
build it sounds as if there is more activity, with each burst of
activity being a little longer. However, that's just the impression I
get; I haven't tried timing anything, and I may be imagining it.
2.4.24
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
6 0 0 379328 17100 45608 0 0 16 0 115 466 89 11 0 0
5 0 0 388180 17108 45412 0 0 72 0 113 727 75 25 0 0
4 0 0 376448 17108 45540 0 0 48 0 128 602 82 18 0 0
5 0 0 378692 17112 45824 0 0 44 0 111 477 90 10 0 0
3 3 0 376968 17408 46036 0 0 116 592 146 734 80 20 0 0
4 0 0 383832 17924 47080 0 0 36 1484 295 490 80 11 9 0
5 0 0 388620 17928 46268 0 0 36 0 116 933 71 29 0 0
4 0 0 376864 17928 46360 0 0 52 0 116 659 81 19 0 0
5 0 0 389580 17928 46772 0 0 20 0 115 502 86 14 0 0
4 0 0 382452 17928 46656 0 0 68 0 115 1082 66 34 0 0
5 1 0 384800 17980 47000 0 0 4 108 125 296 94 6 0 0
6 1 0 385140 18484 46956 0 0 76 1088 274 1282 62 38 0 0
6 0 0 381352 18904 47272 0 0 20 1544 221 522 88 12 0 0
4 0 0 381636 18904 47448 0 0 104 0 126 829 75 25 0 0
5 0 0 376732 18904 47408 0 0 32 0 114 727 83 17 0 0
5 0 0 384100 18904 47572 0 0 0 0 108 686 76 24 0 0
5 0 0 378904 18908 47724 0 0 100 0 121 897 71 29 0 0
3 2 0 372608 19344 47960 0 0 56 832 198 319 87 7 5 0
8 0 0 385204 19428 48096 0 0 12 656 199 819 80 19 0 0
4 0 0 374144 19428 48312 0 0 80 0 120 801 71 29 0 0
6 0 0 376628 19428 48604 0 0 32 0 109 512 88 12 0 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
6 0 0 380208 19428 48836 0 0 32 0 116 777 78 22 0 0
5 0 0 378016 19428 48800 0 0 32 0 114 679 75 25 0 0
4 1 0 376360 20092 49052 0 0 56 1244 282 723 79 21 0 0
4 0 0 379996 20588 48980 0 0 0 1804 307 888 80 20 0 0
4 0 0 372924 20596 49480 0 0 28 0 140 582 82 18 0 0
5 0 0 382632 20596 49260 0 0 20 0 127 946 72 28 0 0
4 0 0 374536 20596 49704 0 0 20 0 128 662 80 20 0 0
6 0 0 383852 20596 49536 0 0 12 0 110 838 79 21 0 0
4 1 0 371160 20888 49572 0 0 20 576 176 768 75 25 0 0
5 0 0 383564 21236 49800 0 0 12 1280 226 803 79 20 0 0
4 0 0 372592 21236 49908 0 0 108 0 120 919 66 34 0 0
5 0 0 367796 21236 50368 0 0 56 0 118 427 90 10 0 0
5 0 0 373172 21244 50344 0 0 72 0 121 714 78 22 0 0
5 0 0 375740 21244 50500 0 0 64 0 110 498 86 14 0 0
3 2 0 371356 21536 50936 0 0 108 584 175 716 79 21 0 0
5 0 0 374716 21900 51324 0 0 24 1436 229 599 86 14 0 0
5 0 0 375416 21900 51236 0 0 48 0 106 826 78 22 0 0
5 0 0 372428 21900 51116 0 0 16 0 111 582 83 17 0 0
4 0 0 367116 21908 51524 0 0 68 0 118 542 88 12 0 0
5 0 0 372548 21916 51452 0 0 44 0 115 883 78 22 0 0
5 1 0 372824 22240 51596 0 0 32 644 192 426 87 13 0 0
2.6.1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
6 0 0 379508 26148 42532 0 0 72 0 1024 716 81 19 0 0
6 0 0 375028 26148 43076 0 0 64 0 1028 674 84 17 0 0
5 0 0 374196 26148 43144 0 0 12 0 1017 836 81 19 0 0
4 0 0 374580 26156 43068 0 0 112 0 1032 740 83 17 0 0
6 1 0 379188 26560 43140 0 0 12 776 1119 889 79 19 0 2
4 2 0 380076 27260 43392 0 0 20 1368 1178 1225 72 28 0 0
5 3 0 380844 27808 43388 0 0 44 1004 1161 1559 52 33 2 12
3 3 0 375148 28428 43584 0 0 108 1216 1224 1162 71 28 0 2
4 1 0 373036 28984 44184 0 0 36 1068 1217 774 80 19 0 1
6 1 0 380460 29528 44116 0 0 4 1024 1189 950 73 17 2 10
4 0 0 377452 29772 43940 0 0 20 1408 1141 1809 39 45 5 11
4 0 0 375596 29772 44144 0 0 56 0 1052 768 81 19 0 0
5 0 0 380588 29772 44280 0 0 0 0 1020 1088 74 26 0 0
4 0 0 376748 29780 44408 0 0 92 0 1019 1458 62 38 0 0
5 0 0 374956 29784 44676 0 0 4 0 1012 659 85 16 0 0
5 2 0 377004 30080 44584 0 0 28 584 1086 1678 57 41 0 2
3 3 0 370412 30824 44656 0 0 104 1464 1185 804 72 19 0 10
6 3 0 379884 31272 45024 0 0 16 844 1161 995 73 20 0 6
6 2 0 373420 31840 44932 0 0 8 1072 1167 1635 48 35 1 17
3 2 0 369516 32584 45140 0 0 108 1428 1159 1458 58 33 1 7
3 2 0 364268 33072 45196 0 0 48 952 1190 811 71 16 0 13
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
5 3 0 373548 33512 45912 0 0 0 784 1129 710 86 12 0 2
6 3 0 373676 34056 45708 0 0 0 1036 1159 1444 53 32 4 12
1 4 0 368812 34728 45648 0 0 100 1280 1173 1700 44 39 3 16
4 2 0 368044 35336 45992 0 0 32 1176 1186 561 74 10 0 15
6 2 0 366508 35808 45860 0 0 12 868 1143 820 85 13 0 2
0 5 0 371628 36260 46020 0 0 0 812 1136 1732 42 33 6 20
4 1 0 363308 36836 46056 0 0 68 1116 1168 1130 44 25 1 30
4 2 0 360684 37496 46348 0 0 0 1264 1186 675 83 13 2 3
2 1 0 369068 37992 46328 0 0 0 844 1148 1499 58 31 2 9
3 2 0 365868 38588 46480 0 0 44 1188 1173 1468 58 35 0 8
5 1 0 360044 39256 46424 0 0 16 1316 1188 1020 69 26 0 5
0 1 0 367788 39860 46704 0 0 0 1112 1170 967 75 21 2 3
0 4 0 364716 40020 46748 0 0 40 2544 1226 1911 29 48 1 22
4 0 0 355180 40020 46952 0 0 0 0 1079 297 78 8 0 15
4 0 0 366636 40020 47020 0 0 0 0 1038 1381 65 35 0 0
4 1 0 362604 40028 47148 0 0 32 0 1047 1309 66 34 0 0
6 0 0 366060 40032 47484 0 0 12 0 1029 775 81 19 0 0
1 4 0 362220 40580 47276 0 0 76 1080 1146 1845 45 50 1 5
4 1 0 355116 41040 47700 0 0 128 864 1201 548 60 10 1 29
5 0 0 352492 41292 47788 0 0 16 1276 1129 507 92 8 0 0
6 0 0 363436 41296 47852 0 0 20 0 1028 1277 67 33 0 0
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-02 18:36 ` Philip Martin
@ 2004-02-02 23:36 ` Nick Piggin
2004-02-02 23:49 ` Andrew Morton
2004-02-03 0:34 ` Philip Martin
0 siblings, 2 replies; 32+ messages in thread
From: Nick Piggin @ 2004-02-02 23:36 UTC (permalink / raw)
To: Philip Martin; +Cc: Andrew Morton, linux-kernel
Philip Martin wrote:
>Nick Piggin <piggin@cyberone.com.au> writes:
>
>
>>Another thing I just saw: you've got quite a lot of memory in
>>buffers, which might be a sign of something going wrong.
>>
>>When the build finishes and there is no other activity, can you
>>try applying anonymous memory pressure until it starts swapping
>>to see if everything gets reclaimed properly?
>>
>
>How do I apply anonymous memory pressure?
>
>
Well, just run something that uses a lot of memory and doesn't
do much else. Run a few of these if you like:
#include <stdlib.h>
#include <unistd.h>

#define MEMSZ (64 * 1024 * 1024)

int main(void)
{
        int i;
        char *mem = malloc(MEMSZ);

        for (i = 0; i < MEMSZ; i += 4096)
                mem[i] = i;
        sleep(60);
        return 0;
}
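To start several instances at once, a small shell loop works; a sketch, where "mempressure" is a hypothetical name for the compiled binary of the program above:

```shell
# Launch ten concurrent instances; together they touch ~640MB of
# anonymous memory, then each sleeps for a minute before exiting
for i in 1 2 3 4 5 6 7 8 9 10; do
        ./mempressure &
done
wait
```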
>>Was each kernel freshly booted and without background activity
>>before each compile?
>>
>
>Each kernel was freshly booted. There were a number of daemons
>running, and I was running X, but these don't appear to use much
>memory or CPU and the network was disconnected. Just after a boot
>there is lots of free memory, but in normal operation the machine uses
>its memory, so to make it more like normal I ran "find | grep" before
>doing the build. Then I ran make clean, make, make clean, make and
>took numbers for the second make.
>
>You can have the numbers straight after a boot as well. In this case
>I rebooted, logged in, ran make clean and make -j4.
>
>I can hear disk activity on this machine. During a 2.4.24 build the
>activity happens in short bursts a few seconds apart. During a 2.6.1
>build it sounds as if there is more activity, with each burst of
>activity being a little longer. However, that's just the impression I
>get; I haven't tried timing anything, and I may be imagining it.
>
>
Thanks. Much the same, isn't it?
Can you try booting with the kernel argument: elevator=deadline
and see how 2.6 goes?
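After rebooting it's worth confirming the argument was actually seen; a sketch (the boot-log message text varies across 2.6 versions, so the grep pattern is an assumption):

```shell
# The kernel command line shows whether elevator=deadline was passed
cat /proc/cmdline
# The boot log normally announces the chosen io scheduler
dmesg | grep -i 'elevator\|scheduler' || true
```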
Andrew, any other ideas?
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-02 23:36 ` Nick Piggin
@ 2004-02-02 23:49 ` Andrew Morton
2004-02-03 1:01 ` Philip Martin
2004-02-03 0:34 ` Philip Martin
1 sibling, 1 reply; 32+ messages in thread
From: Andrew Morton @ 2004-02-02 23:49 UTC (permalink / raw)
To: Nick Piggin; +Cc: philip, linux-kernel
Nick Piggin <piggin@cyberone.com.au> wrote:
>
> Andrew, any other ideas?
There seems to be a lot more writeout happening.
You could try setting /proc/sys/vm/dirty_ratio to 60 and
/proc/sys/vm/dirty_background_ratio to 40.
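A sketch of applying (and checking) those settings; both files take a percentage of total memory, and writing them requires root:

```shell
# Foreground threshold: dirtying processes start doing their own
# writeback above this percentage of memory
echo 60 > /proc/sys/vm/dirty_ratio
# Background threshold: pdflush starts writeback above this percentage
echo 40 > /proc/sys/vm/dirty_background_ratio
# Verify
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio
```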
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-02 23:36 ` Nick Piggin
2004-02-02 23:49 ` Andrew Morton
@ 2004-02-03 0:34 ` Philip Martin
2004-02-03 3:52 ` Nick Piggin
1 sibling, 1 reply; 32+ messages in thread
From: Philip Martin @ 2004-02-03 0:34 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-kernel
Nick Piggin <piggin@cyberone.com.au> writes:
> Philip Martin wrote:
>
>>Nick Piggin <piggin@cyberone.com.au> writes:
>>>
>>>When the build finishes and there is no other activity, can you
>>>try applying anonymous memory pressure until it starts swapping
>>>to see if everything gets reclaimed properly?
>>
>>How do I apply anonymous memory pressure?
>
> Well just run something that uses a lot of memory and doesn't
> do much else. Run a few of these if you like:
>
> #include <stdlib.h>
> #include <unistd.h>
>
> #define MEMSZ (64 * 1024 * 1024)
>
> int main(void)
> {
>         int i;
>         char *mem = malloc(MEMSZ);
>
>         for (i = 0; i < MEMSZ; i += 4096)
>                 mem[i] = i;
>         sleep(60);
>         return 0;
> }
This is what free reports after the build:
total used free shared buffers cached
Mem: 516396 215328 301068 0 85084 68364
-/+ buffers/cache: 61880 454516
Swap: 1156664 40280 1116384
then after starting 10 instances of the above program
total used free shared buffers cached
Mem: 516396 513028 3368 0 596 5544
-/+ buffers/cache: 506888 9508
Swap: 1156664 320592 836072
and then after those programs finish
total used free shared buffers cached
Mem: 516396 35848 480548 0 964 5720
-/+ buffers/cache: 29164 487232
Swap: 1156664 54356 1102308
It looks OK to me.
>>You can have the numbers straight after a boot as well. In this case
>>I rebooted, logged in, ran make clean and make -j4.
>
> Thanks. Much the same, isn't it?
Yes, it is.
> Can you try booting with the kernel argument: elevator=deadline
> and see how 2.6 goes?
Not much difference; these are times for a build straight after a
reboot:
2.6.1
246.22user 120.44system 3:34.26elapsed 171%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (468major+3769185minor)pagefaults 0swaps
2.6.1 elevator=deadline
245.61user 120.31system 3:39.29elapsed 166%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (463major+3770456minor)pagefaults 0swaps
I note that the number of major pagefaults is not zero; I had not
spotted that before. In the past I have concentrated on builds made
when the system has been running for some time, often having already
built the software one or more times, and in those cases the number of
major pagefaults was always zero, typically:
2.6.1
244.08user 116.33system 3:27.40elapsed 173%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+3763670minor)pagefaults 0swaps
When running 2.4 the total number of pagefaults is about the same, but
it is split between major and minor:
2.4.24
242.27user 81.06system 2:44.18elapsed 196%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (1742270major+1942279minor)pagefaults 0swaps
--
Philip Martin
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-02 23:49 ` Andrew Morton
@ 2004-02-03 1:01 ` Philip Martin
2004-02-03 3:02 ` Nick Piggin
0 siblings, 1 reply; 32+ messages in thread
From: Philip Martin @ 2004-02-03 1:01 UTC (permalink / raw)
To: Andrew Morton; +Cc: Nick Piggin, linux-kernel
Andrew Morton <akpm@osdl.org> writes:
> Nick Piggin <piggin@cyberone.com.au> wrote:
>>
>> Andrew, any other ideas?
>
> There seems to be a lot more writeout happening.
As far as I can see (and hear!) that's true.
> You could try setting /proc/sys/vm/dirty_ratio to 60 and
> /proc/sys/vm/dirty_background_ratio to 40.
Not much different:
2.6.1 (without elevator=deadline)
dirty_ratio:60 dirty_background_ratio:40
245.58user 121.82system 3:31.79elapsed 173%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+3771340minor)pagefaults 0swaps
dirty_ratio:40 dirty_background_ratio:10 (the defaults)
245.75user 121.33system 3:35.13elapsed 170%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+3770826minor)pagefaults 0swaps
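(For scale: on the 512 MB machine in this thread, those ratio settings correspond roughly to the dirty-memory cutoffs below. This is only a sketch; the kernel applies the percentages to reclaimable rather than total memory, so the real thresholds are somewhat lower.)

```shell
# Approximate dirty-memory thresholds implied by the two settings
# tried above, on a 512 MB box: first pair is the defaults (40/10),
# second is Andrew's suggestion (60/40).
total_mb=512
for pair in "40 10" "60 40"; do
        set -- $pair
        echo "dirty_ratio=$1%: writers throttled above ~$((total_mb * $1 / 100)) MB dirty"
        echo "dirty_background_ratio=$2%: background writeback from ~$((total_mb * $2 / 100)) MB dirty"
done
```

Either way the build's working set fits comfortably under both thresholds, which is consistent with the tuning making little difference here.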
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 1:01 ` Philip Martin
@ 2004-02-03 3:02 ` Nick Piggin
2004-02-03 16:44 ` Philip Martin
0 siblings, 1 reply; 32+ messages in thread
From: Nick Piggin @ 2004-02-03 3:02 UTC (permalink / raw)
To: Philip Martin; +Cc: Andrew Morton, linux-kernel
Philip Martin wrote:
>Andrew Morton <akpm@osdl.org> writes:
>
>
>>Nick Piggin <piggin@cyberone.com.au> wrote:
>>
>>>Andrew, any other ideas?
>>>
>>There seems to be a lot more writeout happening.
>>
>
>As far as I can see (and hear!) that's true.
>
>
>>You could try setting /proc/sys/vm/dirty_ratio to 60 and
>>/proc/sys/vm/dirty_background_ratio to 40.
>>
>
>Not much different:
>
>2.6.1 (without elevator=deadline)
>
>dirty_ratio:60 dirty_background_ratio:40
>
>245.58user 121.82system 3:31.79elapsed 173%CPU (0avgtext+0avgdata 0maxresident)k
>0inputs+0outputs (0major+3771340minor)pagefaults 0swaps
>
>dirty_ratio:40 dirty_background_ratio:10 (the defaults)
>
>245.75user 121.33system 3:35.13elapsed 170%CPU (0avgtext+0avgdata 0maxresident)k
>0inputs+0outputs (0major+3770826minor)pagefaults 0swaps
>
>
>
OK, now that's strange - you're definitely compiling the same kernel
with the same .config and compiler? 2.6 looks like it's doing twice
the amount of writeout that 2.4 is.
Can you try the memory pressure program I sent you earlier?
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-01 21:34 Philip Martin
2004-02-01 23:11 ` Andrew Morton
@ 2004-02-03 3:46 ` Andrew Morton
2004-02-03 16:46 ` Philip Martin
1 sibling, 1 reply; 32+ messages in thread
From: Andrew Morton @ 2004-02-03 3:46 UTC (permalink / raw)
To: Philip Martin; +Cc: linux-kernel, Nick Piggin
Philip Martin <philip@codematters.co.uk> wrote:
>
> My test is a software build of about 200 source files (written in C)
> that I usually build using "nice make -j4". Timing the build on
> 2.4.24 I typically get something like
>
> 242.27user 81.06system 2:44.18elapsed 196%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (1742270major+1942279minor)pagefaults 0swaps
>
> and on 2.6.1 I get
>
> 244.08user 116.33system 3:27.40elapsed 173%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (0major+3763670minor)pagefaults 0swaps
I hadn't noticed the increase in system time.
Could you generate a kernel profile? Add `profile=1' to the kernel boot
command line and run:
sudo readprofile -r
sudo readprofile -M10
time make -j4
readprofile -n -v -m /boot/System.map | sort -n +2 | tail -40 | tee ~/profile.txt >&2
on both 2.4 and 2.6? Make sure the System.map is appropriate to the
currently-running kernel.
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 0:34 ` Philip Martin
@ 2004-02-03 3:52 ` Nick Piggin
0 siblings, 0 replies; 32+ messages in thread
From: Nick Piggin @ 2004-02-03 3:52 UTC (permalink / raw)
To: Philip Martin; +Cc: Andrew Morton, linux-kernel
Philip Martin wrote:
>Nick Piggin <piggin@cyberone.com.au> writes:
>
>
>
>>Philip Martin wrote:
>>
>>
>>
>>>Nick Piggin <piggin@cyberone.com.au> writes:
>>>
>>>
>>>>When the build finishes and there is no other activity, can you
>>>>try applying anonymous memory pressure until it starts swapping
>>>>to see if everything gets reclaimed properly?
>>>>
>>>>
>>>How do I apply anonymous memory pressure?
>>>
>>>
>>Well just run something that uses a lot of memory and doesn't
>>do much else. Run a few of these if you like:
>>
>>#include <stdlib.h>
>>#include <unistd.h>
>>#define MEMSZ (64 * 1024 * 1024)
>>int main(void)
>>{
>>        int i;
>>        char *mem = malloc(MEMSZ);
>>        for (i = 0; i < MEMSZ; i += 4096)
>>                mem[i] = i;
>>        sleep(60);
>>        return 0;
>>}
>>
>>
>
>This is what free reports after the build
>
> total used free shared buffers cached
>Mem: 516396 215328 301068 0 85084 68364
>-/+ buffers/cache: 61880 454516
>Swap: 1156664 40280 1116384
>
>then after starting 10 instances of the above program
>
> total used free shared buffers cached
>Mem: 516396 513028 3368 0 596 5544
>-/+ buffers/cache: 506888 9508
>Swap: 1156664 320592 836072
>
>and then after those programs finish
>
> total used free shared buffers cached
>Mem: 516396 35848 480548 0 964 5720
>-/+ buffers/cache: 29164 487232
>Swap: 1156664 54356 1102308
>
>It looks OK to me.
>
>
>
Yeah, that looks fine. It was a wild guess.
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
@ 2004-02-03 6:55 Samium Gromoff
2004-02-03 7:07 ` Andrew Morton
2004-02-03 7:13 ` Nick Piggin
0 siblings, 2 replies; 32+ messages in thread
From: Samium Gromoff @ 2004-02-03 6:55 UTC (permalink / raw)
To: akpm; +Cc: linux-kernel, philip
> > The machine is a dual P3 450MHz, 512MB, aic7xxx, 2 disk RAID-0 and
> > ReiserFS. It's a few years old and has always run Linux, most
> > recently 2.4.24. I decided to try 2.6.1 and the performance is
> > disappointing.
>
> 2.6 has a few performance problems under heavy pageout at present. Nick
> Piggin has some patches which largely fix it up.
I'm sorry, but this is misleading. 2.6 does not have a few performance
problems under heavy pageout.
It's more like _systematic_ _performance_ _degradation_ that increases with
the pageout rate. The more the box pages out, the more 2.6 lags behind 2.4.
What I'm trying to say is that even light paging is affected. And light
paging is guaranteed when you run, say, KDE on 128M RAM.
Go measure the X desktop startup time on a 48M or 64M box--even light paging
makes 2.6 just slower. Also the VM thrashing point comes much, much earlier.
Ask Roger Luethi for details.
regards, Samium Gromoff
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 6:55 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs Samium Gromoff
@ 2004-02-03 7:07 ` Andrew Morton
2004-02-03 7:52 ` Samium Gromoff
2004-02-03 7:13 ` Nick Piggin
1 sibling, 1 reply; 32+ messages in thread
From: Andrew Morton @ 2004-02-03 7:07 UTC (permalink / raw)
To: Samium Gromoff; +Cc: linux-kernel, philip
Samium Gromoff <deepfire@sic-elvis.zel.ru> wrote:
>
>
> > > The machine is a dual P3 450MHz, 512MB, aic7xxx, 2 disk RAID-0 and
> > > ReiserFS. It's a few years old and has always run Linux, most
> > > recently 2.4.24. I decided to try 2.6.1 and the performance is
> > > disappointing.
> >
> > 2.6 has a few performance problems under heavy pageout at present. Nick
> > Piggin has some patches which largely fix it up.
>
> I'm sorry, but this is misleading. 2.6 does not have a few performance
> problems under heavy pageout.
>
As you have frequently and somewhat redundantly reminded us.
Perhaps you could test Nick's patches. They are at
http://www.kerneltrap.org/~npiggin/vm/
Against 2.6.2-rc2-mm2. First revert vm-rss-limit-enforcement.patch, then
apply those three.
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 6:55 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs Samium Gromoff
2004-02-03 7:07 ` Andrew Morton
@ 2004-02-03 7:13 ` Nick Piggin
1 sibling, 0 replies; 32+ messages in thread
From: Nick Piggin @ 2004-02-03 7:13 UTC (permalink / raw)
To: Samium Gromoff; +Cc: akpm, linux-kernel, philip
Samium Gromoff wrote:
>>>The machine is a dual P3 450MHz, 512MB, aic7xxx, 2 disk RAID-0 and
>>> ReiserFS. It's a few years old and has always run Linux, most
>>> recently 2.4.24. I decided to try 2.6.1 and the performance is
>>> disappointing.
>>>
>>
>>2.6 has a few performance problems under heavy pageout at present. Nick
>>Piggin has some patches which largely fix it up.
>>
>
>I'm sorry, but this is misleading. 2.6 does not have a few performance
>problems under heavy pageout.
>
>It's more like _systematic_ _performance_ _degradation_ that increases with
>the pageout rate. The more the box pages out, the more 2.6 lags behind 2.4.
>
>
Well, it is a few problems that cause significant performance
regressions. But never mind semantics...
>What I'm trying to say is that even light paging is affected. And light
>paging is guaranteed when you run, say, KDE on 128M RAM.
>
>Go measure the X desktop startup time on a 48M or 64M box--even light paging
>makes 2.6 just slower. Also the VM thrashing point comes much, much earlier.
>
>
Have a look here: http://www.kerneltrap.org/~npiggin/vm/3/
and here: http://www.kerneltrap.org/~npiggin/vm/4/
patches here: http://www.kerneltrap.org/~npiggin/vm/
and I have a couple of things which improve results even more.
True, it's only kbuild, but after I do a bit more tuning I'll
focus on other things - I'm hoping most of the improvements
carry over to other cases though.
Tentatively, it looks like 2.6 under very heavy swapping can
actually end up significantly faster than 2.4.
>Ask Roger Luethi for details.
>
>
Andrew is quite well versed in the details :)
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 7:07 ` Andrew Morton
@ 2004-02-03 7:52 ` Samium Gromoff
2004-02-03 7:57 ` Nick Piggin
0 siblings, 1 reply; 32+ messages in thread
From: Samium Gromoff @ 2004-02-03 7:52 UTC (permalink / raw)
To: Andrew Morton; +Cc: Samium Gromoff, linux-kernel, philip
At Mon, 2 Feb 2004 23:07:45 -0800,
Andrew Morton wrote:
>
> Samium Gromoff <deepfire@sic-elvis.zel.ru> wrote:
> >
> >
> > > > The machine is a dual P3 450MHz, 512MB, aic7xxx, 2 disk RAID-0 and
> > > > ReiserFS. It's a few years old and has always run Linux, most
> > > > recently 2.4.24. I decided to try 2.6.1 and the performance is
> > > > disappointing.
> > >
> > > 2.6 has a few performance problems under heavy pageout at present. Nick
> > > Piggin has some patches which largely fix it up.
> >
> > I'm sorry, but this is misleading. 2.6 does not have a few performance
> > problems under heavy pageout.
> >
>
> As you have frequently and somewhat redundantly reminded us.
Right. I'm rather emotional about this...
Kind of hard seeing the starry and shiny 2.6 go down the toilet on my
little server with 16M RAM.
> Perhaps you could test Nick's patches. They are at
>
> http://www.kerneltrap.org/~npiggin/vm/
>
> Against 2.6.2-rc2-mm2. First revert vm-rss-limit-enforcement.patch, then
> apply those three.
Hmmm, I'd prefer plain 2.6, but I'll try it anyway.
regards, Samium Gromoff
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 7:52 ` Samium Gromoff
@ 2004-02-03 7:57 ` Nick Piggin
2004-02-03 15:58 ` Valdis.Kletnieks
0 siblings, 1 reply; 32+ messages in thread
From: Nick Piggin @ 2004-02-03 7:57 UTC (permalink / raw)
To: Samium Gromoff; +Cc: Andrew Morton, linux-kernel, philip
Samium Gromoff wrote:
>At Mon, 2 Feb 2004 23:07:45 -0800,
>Andrew Morton wrote:
>
>
>>Samium Gromoff <deepfire@sic-elvis.zel.ru> wrote:
>>
>>
>>>
>>>
>>>>>The machine is a dual P3 450MHz, 512MB, aic7xxx, 2 disk RAID-0 and
>>>>> ReiserFS. It's a few years old and has always run Linux, most
>>>>> recently 2.4.24. I decided to try 2.6.1 and the performance is
>>>>> disappointing.
>>>>>
>>>>>
>>>>
>>>>2.6 has a few performance problems under heavy pageout at present. Nick
>>>>Piggin has some patches which largely fix it up.
>>>>
>>>>
>>>I'm sorry, but this is misleading. 2.6 does not have a few performance
>>>problems under heavy pageout.
>>>
>>>
>>>
>>As you have frequently and somewhat redundantly reminded us.
>>
>>
>
>Right. I'm rather emotional about this...
>
>Kind of hard seeing the starry and shiny 2.6 go down the toilet on my
>little server with 16M RAM.
>
>
>
>>Perhaps you could test Nick's patches. They are at
>>
>> http://www.kerneltrap.org/~npiggin/vm/
>>
>>Against 2.6.2-rc2-mm2. First revert vm-rss-limit-enforcement.patch, then
>>apply those three.
>>
>>
>
>Hmmm, I'd prefer plain 2.6, but I'll try it anyway.
>
>
>
With luck it should apply against plain 2.6.
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 7:57 ` Nick Piggin
@ 2004-02-03 15:58 ` Valdis.Kletnieks
0 siblings, 0 replies; 32+ messages in thread
From: Valdis.Kletnieks @ 2004-02-03 15:58 UTC (permalink / raw)
To: Nick Piggin; +Cc: Samium Gromoff, Andrew Morton, linux-kernel, philip
On Tue, 03 Feb 2004 18:57:27 +1100, Nick Piggin said:
> >>Perhaps you could test Nick's patches. They are at
> >>
> >> http://www.kerneltrap.org/~npiggin/vm/
> >>
> >>Against 2.6.2-rc2-mm2. First revert vm-rss-limit-enforcement.patch, then
> >>apply those three.
> >Hmmm, I'd prefer plain 2.6, but I'll try it anyway.
> It should go against plain 2.6 with luck.
Applies with offsets against 2.6.2-rc3-mm1-1 and boots. Haven't tested it
under high memory pressure yet though.
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 3:02 ` Nick Piggin
@ 2004-02-03 16:44 ` Philip Martin
0 siblings, 0 replies; 32+ messages in thread
From: Philip Martin @ 2004-02-03 16:44 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-kernel
Nick Piggin <piggin@cyberone.com.au> writes:
> OK, now that's strange - you're definitely compiling the same kernel
> with the same .config and compiler?
It's definitely the same compiler (Debian's 2.95.4). I think the 2.4
and 2.6 configs have to be different, but they are similar, these are
the differences
< in 2.4
> in 2.6
< CONFIG_AUTOFS4_FS=y
> CONFIG_BLK_DEV_CRYPTOLOOP=m
> CONFIG_BLK_DEV_IDECD=m
< CONFIG_BLK_DEV_IDE_MODES=y
> CONFIG_CLEAN_COMPILE=y
> CONFIG_CRC32=m
> CONFIG_CRYPTO=y
> CONFIG_DUMMY_CONSOLE=y
> CONFIG_EPOLL=y
> CONFIG_EXPORTFS=m
> CONFIG_FB=y
> CONFIG_FB_ATY=m
> CONFIG_FB_ATY_CT=y
< CONFIG_FILTER=y
> CONFIG_FUTEX=y
> CONFIG_GENERIC_ISA_DMA=y
< CONFIG_HFSPLUS_FS=m
> CONFIG_HW_CONSOLE=y
> CONFIG_INPUT=y
> CONFIG_INPUT_KEYBOARD=y
> CONFIG_INPUT_MOUSE=y
> CONFIG_INPUT_MOUSEDEV=y
> CONFIG_INPUT_MOUSEDEV_PSAUX=y
> CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
> CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
> CONFIG_IOSCHED_AS=y
> CONFIG_IOSCHED_DEADLINE=y
> CONFIG_IOSCHED_NOOP=y
> CONFIG_KALLSYMS=y
< CONFIG_KCORE_ELF=y
> CONFIG_KEYBOARD_ATKBD=y
< CONFIG_LOG_BUF_SHIFT=0
> CONFIG_LOG_BUF_SHIFT=15
> CONFIG_MII=m
> CONFIG_MMU=y
> CONFIG_MOUSE_SERIAL=y
> CONFIG_OBSOLETE_MODPARM=y
> CONFIG_PC=y
> CONFIG_PREEMPT=y
> CONFIG_PROC_KCORE=y
> CONFIG_SCSI_PROC_FS=y
< CONFIG_SD_EXTRA_DEVS=40
< CONFIG_SERIAL=y
> CONFIG_SERIAL_8250=y
> CONFIG_SERIAL_8250_NR_UARTS=4
> CONFIG_SERIAL_CORE=y
> CONFIG_SERIO=y
> CONFIG_SERIO_I8042=y
> CONFIG_SERIO_SERPORT=y
> CONFIG_SOUND_GAMEPORT=y
< CONFIG_SR_EXTRA_DEVS=2
> CONFIG_STANDALONE=y
> CONFIG_SWAP=y
> CONFIG_X86_BIOS_REBOOT=y
> CONFIG_X86_EXTRA_IRQS=y
< CONFIG_X86_F00F_WORKS_OK=y
> CONFIG_X86_FIND_SMP_CONFIG=y
< CONFIG_X86_HAS_TSC=y
> CONFIG_X86_HT=y
> CONFIG_X86_INTEL_USERCOPY=y
> CONFIG_X86_MPPARSE=y
> CONFIG_X86_PC=y
< CONFIG_X86_PGE=y
> CONFIG_X86_SMP=y
> CONFIG_X86_TRAMPOLINE=y
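A diff in the "<"/">" style above can be reproduced from two kernel configs as follows. This is a sketch; the config contents here are tiny stand-ins, not the real .config files from the thread:

```shell
# Two minimal stand-in config files (the real ones have hundreds
# of options).
cat > config-2.4 <<'EOF'
CONFIG_SMP=y
CONFIG_SERIAL=y
EOF
cat > config-2.6 <<'EOF'
CONFIG_SMP=y
CONFIG_PREEMPT=y
EOF
sort config-2.4 > config-2.4.sorted
sort config-2.6 > config-2.6.sorted
# After sorting, "<" lines are options present only in 2.4 and
# ">" lines are options present only in 2.6; common options drop out.
diff config-2.4.sorted config-2.6.sorted | grep '^[<>]'
```

Note that the 2.6 config above has CONFIG_PREEMPT=y, which is relevant to the profile: __preempt_spin_lock shows up prominently in the 2.6 listings later in the thread.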
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 3:46 ` Andrew Morton
@ 2004-02-03 16:46 ` Philip Martin
2004-02-03 21:29 ` Andrew Morton
0 siblings, 1 reply; 32+ messages in thread
From: Philip Martin @ 2004-02-03 16:46 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel, Nick Piggin
Andrew Morton <akpm@osdl.org> writes:
> Could you generate a kernel profile? Add `profile=1' to the kernel boot
> command line and run:
>
> sudo readprofile -r
> sudo readprofile -M10
> time make -j4
> readprofile -n -v -m /boot/System.map | sort -n +2 | tail -40 | tee ~/profile.txt >&2
>
> on both 2.4 and 2.6? Make sure the System.map is appropriate to the
> currently-running kernel.
2.4.24
239.24user 85.80system 2:50.73elapsed 190%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (1741932major+1948496minor)pagefaults 0swaps
c013bcd8 fput 366 1.5000
c017c154 search_by_key 377 0.1010
c01242dc sys_rt_sigprocmask 399 0.6786
c022b388 strnlen_user 427 4.8523
c013210c kmem_cache_alloc 431 1.3814
c0134968 __alloc_pages 445 0.6825
c012d628 filemap_nopage 449 0.8909
c0133168 delta_nr_inactive_pages 452 5.6500
c0114478 flush_tlb_page 533 4.5948
c012c76c do_generic_file_read 536 0.4573
c012c18c unlock_page 608 5.8462
c013300c lru_cache_add 609 5.2500
c0135fa4 get_swaparea_info 622 1.0436
c012b514 set_page_dirty 630 4.0385
c0129b3c handle_mm_fault 655 3.5598
c0117a68 schedule 678 0.5168
c0119444 mm_init 695 3.4750
c01195dc copy_mm 705 0.9375
c0119d60 do_fork 822 0.4061
c012a894 find_vma 822 9.7857
c01089b0 system_call 865 15.4464
c0135094 free_page_and_swap_cache 899 17.2885
c0145050 link_path_walk 977 0.3972
c01284a0 __free_pte 1016 14.1111
c012c2d8 __find_get_page 1032 16.1250
c0128500 clear_page_tables 1112 4.9643
c014e514 d_lookup 1131 3.9824
c011de3c exit_notify 1216 1.7471
c0134290 __free_pages_ok 1357 1.9956
c012ce8c file_read_actor 1515 10.8214
c0134538 rmqueue 2011 3.3970
c0134c7c __free_pages 2122 66.3125
c012999c do_no_page 2849 6.8486
c0116a3c do_page_fault 3996 3.3922
c01285e0 copy_page_range 4671 10.4263
c01287a0 zap_page_range 5395 6.1029
c01298bc do_anonymous_page 6867 30.6562
c0129364 do_wp_page 20003 37.8845
c0106d60 default_idle 66782 1284.2692
00000000 total 154891 0.1278
2.6.1
248.82user 122.01system 3:37.24elapsed 170%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (474major+3768844minor)pagefaults 0swaps
c011e450 mm_init 441 1.8375
c0114658 sched_clock 452 3.7667
c0108ac8 copy_thread 462 0.8431
c010a79c syscall_call 471 42.8182
c010b1e0 error_code 477 8.5179
c0127a1c del_timer_sync 492 3.7273
c01229e0 wait_task_zombie 535 1.1239
c0121970 put_files_struct 556 2.8958
c010c570 handle_IRQ_event 621 7.0568
c011fbe0 do_fork 642 1.6895
c0128d18 flush_signal_handlers 661 9.7206
c0123ec0 do_softirq 675 3.3088
c011add4 wake_up_forked_process 721 1.7672
c0110d1c old_mmap 866 2.6728
c0122d78 sys_wait4 889 1.5223
c012c638 sys_rt_sigaction 905 3.7090
c01168b8 flush_tlb_mm 1020 6.8919
c012b0a4 get_signal_to_deliver 1374 1.4494
c0123e38 current_kernel_time 1555 22.8676
c011afc8 schedule_tail 1636 9.0889
c01223e8 do_exit 1719 1.7613
c011df4f .text.lock.sched 2228 7.7093
c012b560 sys_rt_sigprocmask 2439 7.0901
c012c0a0 do_sigaction 2709 4.0074
c011e3c8 dup_task_struct 3056 22.4706
c011de6c __preempt_spin_lock 3096 38.7000
c011ec08 copy_files 3133 3.5602
c012b474 sigprocmask 3433 14.5466
c011c2e0 __wake_up 3495 45.9868
c0121d58 exit_notify 3945 2.3482
c0121150 release_task 4053 7.3960
c011bbf0 schedule 6904 4.3150
c011694c flush_tlb_page 7057 44.1063
c011e6d4 copy_mm 7931 7.6851
c010a770 system_call 8361 190.0227
c011efe0 copy_process 8654 2.8171
c0119588 pte_alloc_one 15888 248.2500
c01199b8 do_page_fault 44374 37.2265
c01086b0 default_idle 739276 14216.8462
00000000 total 896883 5.5028
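A mechanical way to compare such listings is to total the non-idle samples. A sketch, using two stand-in lines per kernel copied from the profiles above (readprofile prints "address symbol samples samples-per-byte"):

```shell
# Two abbreviated stand-in profiles; the real listings above have
# forty lines each.
cat > prof-2.4 <<'EOF'
c0129364 do_wp_page 20003 37.8845
c0106d60 default_idle 66782 1284.2692
00000000 total 154891 0.1278
EOF
cat > prof-2.6 <<'EOF'
c0119588 pte_alloc_one 15888 248.2500
c01086b0 default_idle 739276 14216.8462
00000000 total 896883 5.5028
EOF
for f in prof-2.4 prof-2.6; do
        # Sum the sample counts, skipping the idle loop and the
        # grand-total line, to get time spent doing real work.
        awk '$2 != "default_idle" && $2 != "total" { busy += $3 }
             END { printf "%s: %d busy samples\n", FILENAME, busy }' "$f"
done
```

On the full listings this makes the shift obvious: 2.4's busy samples are dominated by fault-path functions like do_wp_page, while 2.6 spends far more in pte_alloc_one and do_page_fault.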
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 16:46 ` Philip Martin
@ 2004-02-03 21:29 ` Andrew Morton
2004-02-03 21:53 ` Philip Martin
2004-02-14 0:10 ` Philip Martin
0 siblings, 2 replies; 32+ messages in thread
From: Andrew Morton @ 2004-02-03 21:29 UTC (permalink / raw)
To: Philip Martin; +Cc: linux-kernel, piggin
Philip Martin <philip@codematters.co.uk> wrote:
>
> Andrew Morton <akpm@osdl.org> writes:
>
> > Could you generate a kernel profile? Add `profile=1' to the kernel boot
> ...
> 2.4.24
OK.
> 2.6.1
Odd. Are you really sure that it was the correct System.map?
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 21:29 ` Andrew Morton
@ 2004-02-03 21:53 ` Philip Martin
2004-02-04 5:48 ` Nick Piggin
2004-02-14 0:10 ` Philip Martin
1 sibling, 1 reply; 32+ messages in thread
From: Philip Martin @ 2004-02-03 21:53 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel, piggin
Andrew Morton <akpm@osdl.org> writes:
>> 2.6.1
>
> Odd. Are you really sure that it was the correct System.map?
I think so. I always build kernels using Debian's kernel-package so
both vmlinuz and System.map get placed into a .deb package as
vmlinuz-2.6.1 and System.map-2.6.1.
$ ls -l /boot/Sys*
-rw-r--r-- 1 root root 492205 Dec 1 19:27 /boot/System.map-2.4.23
-rw-r--r-- 1 root root 492205 Jan 5 21:21 /boot/System.map-2.4.24
-rw-r--r-- 1 root root 715800 Feb 1 21:02 /boot/System.map-2.6.1
$ ls -l /boot/vm*
-rw-r--r-- 1 root root 880826 Dec 1 19:27 /boot/vmlinuz-2.4.23
-rw-r--r-- 1 root root 880822 Jan 5 21:21 /boot/vmlinuz-2.4.24
-rw-r--r-- 1 root root 1095040 Feb 1 21:02 /boot/vmlinuz-2.6.1
Hmm, I see that my 2.6.1 image is 25% bigger than 2.4.24; I'd not
noticed that before.
I have just tried another 2.6 profile run and got similar results.
248.88user 122.00system 3:41.18elapsed 167%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (453major+3770323minor)pagefaults 0swaps
c011e158 add_wait_queue 442 5.5250
c010b1e0 error_code 444 7.9286
c0114658 sched_clock 460 3.8333
c0108ac8 copy_thread 473 0.8631
c01229e0 wait_task_zombie 488 1.0252
c010a79c syscall_call 489 44.4545
c0127a1c del_timer_sync 529 4.0076
c0128d18 flush_signal_handlers 592 8.7059
c010c570 handle_IRQ_event 597 6.7841
c011fbe0 do_fork 606 1.5947
c0121970 put_files_struct 616 3.2083
c0123ec0 do_softirq 617 3.0245
c011add4 wake_up_forked_process 776 1.9020
c0110d1c old_mmap 868 2.6790
c012c638 sys_rt_sigaction 942 3.8607
c0122d78 sys_wait4 958 1.6404
c01168b8 flush_tlb_mm 1040 7.0270
c012b0a4 get_signal_to_deliver 1385 1.4610
c0123e38 current_kernel_time 1487 21.8676
c011afc8 schedule_tail 1606 8.9222
c01223e8 do_exit 1807 1.8514
c011df4f .text.lock.sched 2302 7.9654
c012b560 sys_rt_sigprocmask 2417 7.0262
c012c0a0 do_sigaction 2736 4.0473
c011e3c8 dup_task_struct 3034 22.3088
c011ec08 copy_files 3103 3.5261
c011de6c __preempt_spin_lock 3171 39.6375
c012b474 sigprocmask 3387 14.3517
c011c2e0 __wake_up 3699 48.6711
c0121d58 exit_notify 3780 2.2500
c0121150 release_task 4071 7.4288
c011694c flush_tlb_page 7069 44.1812
c011bbf0 schedule 7123 4.4519
c011e6d4 copy_mm 7826 7.5833
c010a770 system_call 8249 187.4773
c011efe0 copy_process 8760 2.8516
c0119588 pte_alloc_one 16097 251.5156
c01199b8 do_page_fault 44492 37.3255
c01086b0 default_idle 937243 18023.9038
00000000 total 1095569 6.7218
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 21:53 ` Philip Martin
@ 2004-02-04 5:48 ` Nick Piggin
2004-02-04 17:50 ` Philip Martin
0 siblings, 1 reply; 32+ messages in thread
From: Nick Piggin @ 2004-02-04 5:48 UTC (permalink / raw)
To: Philip Martin; +Cc: Andrew Morton, linux-kernel
Philip Martin wrote:
>Andrew Morton <akpm@osdl.org> writes:
>
>
>>>2.6.1
>>>
>>Odd. Are you really sure that it was the correct System.map?
>>
>
>I think so. I always build kernels using Debian's kernel-package so
>both vmlinuz and System.map get placed into a .deb package as
>vmlinuz-2.6.1 and System.map-2.6.1.
>
>$ ls -l /boot/Sys*
>-rw-r--r-- 1 root root 492205 Dec 1 19:27 /boot/System.map-2.4.23
>-rw-r--r-- 1 root root 492205 Jan 5 21:21 /boot/System.map-2.4.24
>-rw-r--r-- 1 root root 715800 Feb 1 21:02 /boot/System.map-2.6.1
>
>$ ls -l /boot/vm*
>-rw-r--r-- 1 root root 880826 Dec 1 19:27 /boot/vmlinuz-2.4.23
>-rw-r--r-- 1 root root 880822 Jan 5 21:21 /boot/vmlinuz-2.4.24
>-rw-r--r-- 1 root root 1095040 Feb 1 21:02 /boot/vmlinuz-2.6.1
>
>Hmm, I see that my 2.6.1 image is 25% bigger than 2.4.24; I'd not
>noticed that before.
>
>
That's progress for you...
>I have just tried another 2.6 profile run and got similar results.
>
>248.88user 122.00system 3:41.18elapsed 167%CPU (0avgtext+0avgdata 0maxresident)k
>0inputs+0outputs (453major+3770323minor)pagefaults 0swaps
>
>
Thanks for your patience. What are you building, by the way? It
slipped my mind.
You could try out an experimental VM patch if you're feeling brave.
Don't know if it will do you any good or not. You'll have to use
this patch against the 2.6.2-rc3-mm1 kernel.
What I really want to know though, is why it appears like 2.6 is
doing twice as much writeout even at the same vm thresholds as 2.4.
Nick
[-- Attachment #2: vm-swap-3.gz --]
[-- Type: application/x-tar, Size: 7298 bytes --]
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-04 5:48 ` Nick Piggin
@ 2004-02-04 17:50 ` Philip Martin
2004-02-04 23:38 ` Philip Martin
0 siblings, 1 reply; 32+ messages in thread
From: Philip Martin @ 2004-02-04 17:50 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-kernel
Nick Piggin <piggin@cyberone.com.au> writes:
> What are you building, by the way? It slipped my mind.
All the 2.6 figures so far are for a plain 2.6.1. I've just switched
to 2.6.2 and at first glance it's the same as 2.6.1.
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-04 17:50 ` Philip Martin
@ 2004-02-04 23:38 ` Philip Martin
2004-02-05 2:49 ` Nick Piggin
0 siblings, 1 reply; 32+ messages in thread
From: Philip Martin @ 2004-02-04 23:38 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-kernel
Philip Martin <philip@codematters.co.uk> writes:
> Nick Piggin <piggin@cyberone.com.au> writes:
>
>> What are you building, by the way? It slipped my mind.
>
> All the 2.6 figures so far are for a plain 2.6.1. I've just switched
> to 2.6.2 and at first glance it's the same as 2.6.1.
This is the profile for 2.6.2; it is very much like 2.6.1:
248.07user 118.81system 3:42.00elapsed 165%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (434major+3770493minor)pagefaults 0swaps
c0109e28 setup_frame 448 1.0667
c010aff4 error_code 449 8.0179
c0108a48 copy_thread 459 0.8376
c012591c __mod_timer 463 1.7276
c010a5b0 syscall_call 484 44.0000
c0119f8c wake_up_forked_process 492 1.3516
c01200b4 put_files_struct 534 2.8404
c0120f80 wait_task_zombie 559 1.4116
c011e598 do_fork 604 1.6237
c0126dd0 flush_signal_handlers 607 8.9265
c010c320 handle_IRQ_event 620 7.0455
c0122430 do_softirq 642 3.1471
c0125b44 del_timer_sync 700 2.5362
c0115e58 flush_tlb_mm 814 6.5645
c011063c old_mmap 895 2.7623
c012a1a8 sys_rt_sigaction 926 3.7951
c012127c sys_wait4 929 1.6017
c010a4d8 ret_from_intr 1275 45.5357
c0128e44 get_signal_to_deliver 1482 1.8162
c0120a68 do_exit 1486 1.8668
c0122388 current_kernel_time 1518 22.3235
c011a158 schedule_tail 1520 9.2683
c011cbb7 .text.lock.sched 2291 4.5366
c0129270 sys_rt_sigprocmask 2462 7.5988
c0129c84 do_sigaction 2843 4.8682
c011d888 copy_files 2869 4.2693
c011d0f8 dup_task_struct 3001 22.0662
c011b520 __wake_up 3053 69.3864
c0129190 sigprocmask 3345 14.9330
c011f9c0 release_task 3593 7.6123
c012041c exit_notify 4023 2.4957
c011aec0 schedule 6625 4.3357
c0115ed4 flush_tlb_page 7025 54.8828
c011db90 copy_process 7663 2.9840
c011d3b8 copy_mm 7704 8.0250
c010a584 system_call 8314 188.9545
c01188a8 pte_alloc_one 15944 249.1250
c0118c78 do_page_fault 44368 37.7279
c0108690 default_idle 1351419 25988.8269
00000000 total 1503962 9.7998
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-04 23:38 ` Philip Martin
@ 2004-02-05 2:49 ` Nick Piggin
2004-02-05 14:27 ` Philip Martin
0 siblings, 1 reply; 32+ messages in thread
From: Nick Piggin @ 2004-02-05 2:49 UTC (permalink / raw)
To: Philip Martin; +Cc: Andrew Morton, linux-kernel
Philip Martin wrote:
>Philip Martin <philip@codematters.co.uk> writes:
>
>
>>Nick Piggin <piggin@cyberone.com.au> writes:
>>
>>
>>>What are you building, by the way? It slipped my mind.
>>>
>>All the 2.6 figures so far are for a plain 2.6.1. I've just switched
>>to 2.6.2 and at first glance it's the same as 2.6.1.
>>
>
Sorry, I mean what is it that you are timing?
>This is the profile for 2.6.2, it is very much like 2.6.1
>
>248.07user 118.81system 3:42.00elapsed 165%CPU (0avgtext+0avgdata 0maxresident)k
>0inputs+0outputs (434major+3770493minor)pagefaults 0swaps
>
>
If you get time, could you test the patch I sent you?
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-05 2:49 ` Nick Piggin
@ 2004-02-05 14:27 ` Philip Martin
0 siblings, 0 replies; 32+ messages in thread
From: Philip Martin @ 2004-02-05 14:27 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-kernel
Nick Piggin <piggin@cyberone.com.au> writes:
> Sorry, I mean what is it that you are timing?
It's a bit of software (Subversion) built using "make -j4". It
consists of a little over 200 C files compiled to object code, then
linked into about a dozen shared libraries, and finally linked to
create over a dozen executables. It uses libtool, so each
compile/link involves running a bit of shell code before running gcc.
It lends itself to parallel builds; on 2.4 there is little difference
in the build time using -j2, -j4 or -j8. The source code is about
16MB and the object/library/executable output about 28MB.
>>This is the profile for 2.6.2, it is very much like 2.6.1
>>
>>248.07user 118.81system 3:42.00elapsed 165%CPU (0avgtext+0avgdata 0maxresident)k
>>0inputs+0outputs (434major+3770493minor)pagefaults 0swaps
>
> If you get time, could you test the patch I sent you?
Your patch doesn't apply to plain 2.6.2. I got 2.6.2-mm1 and it looks
like that already includes your patch, correct? This is what I got
for 2.6.2-mm1
247.02user 118.33system 3:51.24elapsed 157%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (176major+3771994minor)pagefaults 0swaps
so it's not really an improvement on plain 2.6.2.
--
Philip Martin
* Re: 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs
2004-02-03 21:29 ` Andrew Morton
2004-02-03 21:53 ` Philip Martin
@ 2004-02-14 0:10 ` Philip Martin
1 sibling, 0 replies; 32+ messages in thread
From: Philip Martin @ 2004-02-14 0:10 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel, piggin
Andrew Morton <akpm@osdl.org> writes:
> Philip Martin <philip@codematters.co.uk> wrote:
>>
>> Andrew Morton <akpm@osdl.org> writes:
>>
>> > Could you generate a kernel profile? Add `profile=1' to the kernel boot
>> ...
>> 2.4.24
>
> OK.
>
>> 2.6.1
>
> Odd. Are you really sure that it was the correct System.map?
I'm reasonably confident that it was, but the 2.6 numbers still look
odd and I don't know why. So I've installed oprofile and used that to
profile instead: same problem, different numbers.
As before I'm timing a software build (using make -j4); it's slower
on 2.6 than 2.4, and increased system CPU time appears to be the problem.
It's a dual P3 450MHz, 512MB ram, 2-disk aic7xxx SCSI RAID-0 and it's
not swapping. Typical timings are
kernel 2.4.24
239.24user 85.80system 2:50.73elapsed 190%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (1741932major+1948496minor)pagefaults 0swaps
kernel 2.6.3-rc2
248.82user 122.01system 3:37.24elapsed 170%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (474major+3768844minor)pagefaults 0swaps
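(The exact oprofile invocation isn't shown in the thread; with the
opcontrol interface of that era, collecting the reports below would
look roughly like this, run as root. The vmlinux path is an assumption,
not taken from the original.)

```shell
opcontrol --vmlinux=/boot/vmlinux-2.6.3-rc2  # path is an assumption
opcontrol --reset                            # clear any old samples
opcontrol --start
nice make -j4                                # the workload being profiled
opcontrol --stop
opreport --symbols                           # per-symbol report as below
```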
This is oprofile report for 2.4.24
CPU: PIII, speed 451.03 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % app name symbol name
130946 8.7017 bash (no symbols)
89695 5.9604 vmlinux-2.4.24 do_wp_page
88996 5.9140 as (no symbols)
47436 3.1522 ld-2.2.5.so _dl_lookup_versioned_symbol
38161 2.5359 libbfd-2.14.90.0.7.so (no symbols)
35216 2.3402 cc1 yyparse
29175 1.9387 vmlinux-2.4.24 do_anonymous_page
24594 1.6343 vmlinux-2.4.24 zap_page_range
23044 1.5313 vmlinux-2.4.24 copy_page_range
22401 1.4886 libc-2.2.5.so memset
21343 1.4183 ld-2.2.5.so _dl_relocate_object
21111 1.4029 cc1 skip_block_comment
20104 1.3360 libc-2.2.5.so chunk_alloc
19883 1.3213 cc1 ht_lookup
17248 1.1462 cc1 _cpp_lex_direct
14680 0.9755 libc-2.2.5.so _IO_vfprintf
14158 0.9408 cc1 grokdeclarator
13853 0.9206 cc1 ggc_alloc
13838 0.9196 libc-2.2.5.so chunk_free
13433 0.8927 libc-2.2.5.so __malloc
13431 0.8925 ld-2.2.5.so strcmp
13259 0.8811 vmlinux-2.4.24 do_no_page
11993 0.7970 libc-2.2.5.so strncpy
11640 0.7735 libc-2.2.5.so strcmp
9912 0.6587 vmlinux-2.4.24 machine_check
9537 0.6338 vmlinux-2.4.24 nr_free_pages
9300 0.6180 cc1 parse_identifier
8977 0.5965 vmlinux-2.4.24 rmqueue
8935 0.5938 libc-2.2.5.so _IO_new_file_xsputn
8092 0.5377 libc-2.2.5.so memcpy
7824 0.5199 cc1 calc_hash
7496 0.4981 cc1 find_reloads
7144 0.4747 cc1 htab_find_slot_with_hash
6867 0.4563 vmlinux-2.4.24 file_read_actor
6670 0.4432 cc1 record_reg_classes
6597 0.4384 vmlinux-2.4.24 do_page_fault
6404 0.4256 libc-2.2.5.so strcpy
and this is 2.6.3-rc2
CPU: PIII, speed 451.163 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % app name symbol name
137869 7.8626 bash (no symbols)
95232 5.4310 vmlinux-2.6.3-rc2 do_wp_page
89606 5.1102 as (no symbols)
62052 3.5388 vmlinux-2.6.3-rc2 default_idle
47196 2.6916 ld-2.2.5.so _dl_lookup_versioned_symbol
41176 2.3482 vmlinux-2.6.3-rc2 page_add_rmap
38747 2.2097 libbfd-2.14.90.0.7.so (no symbols)
35483 2.0236 cc1 yyparse
32590 1.8586 vmlinux-2.6.3-rc2 do_anonymous_page
32224 1.8377 vmlinux-2.6.3-rc2 copy_page_range
22685 1.2937 libc-2.2.5.so memset
21935 1.2509 vmlinux-2.6.3-rc2 __copy_to_user_ll
21475 1.2247 ld-2.2.5.so _dl_relocate_object
20979 1.1964 cc1 skip_block_comment
20938 1.1941 libc-2.2.5.so chunk_alloc
19628 1.1194 cc1 ht_lookup
17279 0.9854 vmlinux-2.6.3-rc2 page_remove_rmap
17140 0.9775 cc1 _cpp_lex_direct
16122 0.9194 vmlinux-2.6.3-rc2 do_no_page
14690 0.8378 libc-2.2.5.so _IO_vfprintf
14689 0.8377 libc-2.2.5.so chunk_free
14300 0.8155 cc1 grokdeclarator
14164 0.8078 libc-2.2.5.so __malloc
14001 0.7985 cc1 ggc_alloc
13678 0.7800 ld-2.2.5.so strcmp
12038 0.6865 libc-2.2.5.so strncpy
11770 0.6712 libc-2.2.5.so strcmp
10788 0.6152 vmlinux-2.6.3-rc2 mark_offset_tsc
10258 0.5850 vmlinux-2.6.3-rc2 page_fault
9848 0.5616 libc-2.2.5.so memcpy
9581 0.5464 cc1 parse_identifier
9210 0.5252 vmlinux-2.6.3-rc2 zap_pte_range
8994 0.5129 libc-2.2.5.so _IO_new_file_xsputn
8005 0.4565 cc1 calc_hash
7681 0.4380 cc1 find_reloads
7564 0.4314 vmlinux-2.6.3-rc2 pte_alloc_one
7446 0.4246 vmlinux-2.6.3-rc2 do_page_fault
extracting just the vmlinux bits I get this for 2.4.24
89695 5.9604 vmlinux-2.4.24 do_wp_page
29175 1.9387 vmlinux-2.4.24 do_anonymous_page
24594 1.6343 vmlinux-2.4.24 zap_page_range
23044 1.5313 vmlinux-2.4.24 copy_page_range
13259 0.8811 vmlinux-2.4.24 do_no_page
9912 0.6587 vmlinux-2.4.24 machine_check
9537 0.6338 vmlinux-2.4.24 nr_free_pages
8977 0.5965 vmlinux-2.4.24 rmqueue
6867 0.4563 vmlinux-2.4.24 file_read_actor
6597 0.4384 vmlinux-2.4.24 do_page_fault
6166 0.4097 vmlinux-2.4.24 default_idle
6001 0.3988 vmlinux-2.4.24 __free_pages_ok
5404 0.3591 vmlinux-2.4.24 find_trylock_page
5179 0.3442 vmlinux-2.4.24 lookup_swap_cache
4969 0.3302 vmlinux-2.4.24 exit_notify
4928 0.3275 vmlinux-2.4.24 clear_page_tables
4830 0.3210 vmlinux-2.4.24 d_lookup
3843 0.2554 vmlinux-2.4.24 link_path_walk
3714 0.2468 vmlinux-2.4.24 system_call
3549 0.2358 vmlinux-2.4.24 do_fork
3340 0.2220 vmlinux-2.4.24 copy_mm
3293 0.2188 vmlinux-2.4.24 find_vma_prev
3237 0.2151 vmlinux-2.4.24 schedule
3226 0.2144 vmlinux-2.4.24 do_generic_file_read
3198 0.2125 vmlinux-2.4.24 handle_mm_fault
3096 0.2057 vmlinux-2.4.24 mm_init
3067 0.2038 vmlinux-2.4.24 set_page_dirty
2727 0.1812 vmlinux-2.4.24 get_swaparea_info
2348 0.1560 vmlinux-2.4.24 flush_tlb_page
2143 0.1424 vmlinux-2.4.24 filemap_nopage
2083 0.1384 vmlinux-2.4.24 lru_cache_add
2051 0.1363 vmlinux-2.4.24 __free_pte
1923 0.1278 vmlinux-2.4.24 search_by_key
1777 0.1181 vmlinux-2.4.24 error_code
1735 0.1153 vmlinux-2.4.24 kmem_cache_alloc
1733 0.1152 vmlinux-2.4.24 do_generic_file_write
1702 0.1131 vmlinux-2.4.24 __get_user_2
1627 0.1081 vmlinux-2.4.24 __alloc_pages
1602 0.1065 vmlinux-2.4.24 sys_rt_sigprocmask
1546 0.1027 vmlinux-2.4.24 is_leaf
and this for 2.6.3-rc2
95232 5.4310 vmlinux-2.6.3-rc2 do_wp_page
62052 3.5388 vmlinux-2.6.3-rc2 default_idle
41176 2.3482 vmlinux-2.6.3-rc2 page_add_rmap
32590 1.8586 vmlinux-2.6.3-rc2 do_anonymous_page
32224 1.8377 vmlinux-2.6.3-rc2 copy_page_range
21935 1.2509 vmlinux-2.6.3-rc2 __copy_to_user_ll
17279 0.9854 vmlinux-2.6.3-rc2 page_remove_rmap
16122 0.9194 vmlinux-2.6.3-rc2 do_no_page
10788 0.6152 vmlinux-2.6.3-rc2 mark_offset_tsc
10258 0.5850 vmlinux-2.6.3-rc2 page_fault
9210 0.5252 vmlinux-2.6.3-rc2 zap_pte_range
7564 0.4314 vmlinux-2.6.3-rc2 pte_alloc_one
7446 0.4246 vmlinux-2.6.3-rc2 do_page_fault
6308 0.3597 vmlinux-2.6.3-rc2 handle_mm_fault
5878 0.3352 vmlinux-2.6.3-rc2 __d_lookup
5688 0.3244 vmlinux-2.6.3-rc2 release_pages
5181 0.2955 vmlinux-2.6.3-rc2 schedule
5021 0.2863 vmlinux-2.6.3-rc2 do_journal_end
4899 0.2794 vmlinux-2.6.3-rc2 find_vma
4576 0.2610 vmlinux-2.6.3-rc2 link_path_walk
4517 0.2576 vmlinux-2.6.3-rc2 buffered_rmqueue
4490 0.2561 vmlinux-2.6.3-rc2 find_get_page
3966 0.2262 vmlinux-2.6.3-rc2 search_by_key
3891 0.2219 vmlinux-2.6.3-rc2 system_call
3867 0.2205 vmlinux-2.6.3-rc2 is_leaf
3829 0.2184 vmlinux-2.6.3-rc2 copy_mm
3405 0.1942 vmlinux-2.6.3-rc2 flush_tlb_page
3405 0.1942 vmlinux-2.6.3-rc2 kmem_cache_alloc
3300 0.1882 vmlinux-2.6.3-rc2 scheduler_tick
3297 0.1880 vmlinux-2.6.3-rc2 __copy_from_user_ll
3286 0.1874 vmlinux-2.6.3-rc2 .text.lock.sched
3200 0.1825 vmlinux-2.6.3-rc2 copy_process
3194 0.1822 vmlinux-2.6.3-rc2 filemap_nopage
3081 0.1757 vmlinux-2.6.3-rc2 timer_interrupt
2779 0.1585 vmlinux-2.6.3-rc2 pte_alloc_map
2578 0.1470 vmlinux-2.6.3-rc2 radix_tree_lookup
2368 0.1350 vmlinux-2.6.3-rc2 unlock_page
2337 0.1333 vmlinux-2.6.3-rc2 __alloc_pages
2251 0.1284 vmlinux-2.6.3-rc2 restore_all
2187 0.1247 vmlinux-2.6.3-rc2 init_journal_hash
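(The "extracting just the vmlinux bits" step above is just a filter on
the app-name column of the opreport output; a sketch, using two sample
lines taken from the 2.6.3-rc2 report:)

```shell
# Filter an opreport symbol listing down to kernel (vmlinux) entries.
# Sample input is two lines from the 2.6.3-rc2 report above.
printf '%s\n' \
  '95232  5.4310  vmlinux-2.6.3-rc2  do_wp_page' \
  '89606  5.1102  as  (no symbols)' |
awk '$3 ~ /^vmlinux/'
# prints only the do_wp_page line
```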
--
Philip Martin
Thread overview: 32+ messages
2004-02-03 6:55 2.6.1 slower than 2.4, smp/scsi/sw-raid/reiserfs Samium Gromoff
2004-02-03 7:07 ` Andrew Morton
2004-02-03 7:52 ` Samium Gromoff
2004-02-03 7:57 ` Nick Piggin
2004-02-03 15:58 ` Valdis.Kletnieks
2004-02-03 7:13 ` Nick Piggin
-- strict thread matches above, loose matches on Subject: below --
2004-02-01 21:34 Philip Martin
2004-02-01 23:11 ` Andrew Morton
2004-02-01 23:42 ` Philip Martin
2004-02-01 23:52 ` Nick Piggin
2004-02-02 0:51 ` Philip Martin
2004-02-02 5:15 ` Nick Piggin
2004-02-02 8:58 ` Nick Piggin
2004-02-02 18:36 ` Philip Martin
2004-02-02 23:36 ` Nick Piggin
2004-02-02 23:49 ` Andrew Morton
2004-02-03 1:01 ` Philip Martin
2004-02-03 3:02 ` Nick Piggin
2004-02-03 16:44 ` Philip Martin
2004-02-03 0:34 ` Philip Martin
2004-02-03 3:52 ` Nick Piggin
2004-02-02 18:08 ` Philip Martin
2004-02-03 3:46 ` Andrew Morton
2004-02-03 16:46 ` Philip Martin
2004-02-03 21:29 ` Andrew Morton
2004-02-03 21:53 ` Philip Martin
2004-02-04 5:48 ` Nick Piggin
2004-02-04 17:50 ` Philip Martin
2004-02-04 23:38 ` Philip Martin
2004-02-05 2:49 ` Nick Piggin
2004-02-05 14:27 ` Philip Martin
2004-02-14 0:10 ` Philip Martin