* [VM] 2.4.14/15-pre4 too "swap-happy"?
@ 2001-11-14 12:44 Sebastian Dröge
2001-11-14 15:00 ` Rik van Riel
0 siblings, 1 reply; 30+ messages in thread
From: Sebastian Dröge @ 2001-11-14 12:44 UTC (permalink / raw)
To: linux-kernel, torvalds
Hi,
Are there any parameters (for example in /proc/sys/vm) which make the VM less
swap-happy?
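For reference, a quick way to see which VM knobs a given kernel actually exposes is to enumerate /proc/sys/vm rather than assume entry names, since the set of files (bdflush, freepages, kswapd, ...) varies between 2.4 releases. A minimal sketch:

```shell
#!/bin/sh
# List the VM tunables this kernel exposes, with their current values.
# We enumerate whatever is present instead of hard-coding names, because
# the exact entries differ between kernel versions.
for f in /proc/sys/vm/*; do
    [ -f "$f" ] || continue
    v=$(cat "$f" 2>/dev/null) || continue   # skip write-only entries
    printf '%s: %s\n' "$f" "$v"
done
```

This only shows what is tunable; whether any of those knobs actually reduces swap eagerness depends on the kernel version.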
My problems are the following:
I burn a CD-R on system 1:
...
---Swap: 0 KB
mkisofs blablabla
---Swap begins to rise ;)
mkisofs finished
---Swap: 3402 KB
cdrecord speed=12 blablabla (FIFO is 4 MB)
---heavy swapping
cdrecord finished
---Swap: 27421 KB
The system has 256 MB RAM and nothing RAM-eating in the background, yet I got
many buffer-underruns just because of swapping. When I turn swap off,
everything works fine. I think it's something to do with the cache.
Leaving system 2 alone, just playing MP3s over NFS:
After two or three days the used swap space is around 3 MB. I just played
MP3s; no X and no other "big" applications were running. This isn't really
a problem, but it doesn't look good. Swap fills up just because of the cache :(
I think this must be fixed before 2.5 opens.
It isn't good when something gets swapped out just because of the cache.
It would be better if the cache got lower priority.
system 1:
Kernel 2.4.15-pre4
Intel Pentium II @ 350 MHz
256 MB RAM
512 MB Swap
system 2:
Kernel 2.4.14
AMD K6-2 @ 350 MHz
128 MB RAM
256 MB Swap
If you need more system information, contact me and I'll post it.
I'll be happy to test any patches/suggestions :)
Thanks in advance!
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-14 12:44 Sebastian Dröge
@ 2001-11-14 15:00 ` Rik van Riel
0 siblings, 0 replies; 30+ messages in thread
From: Rik van Riel @ 2001-11-14 15:00 UTC (permalink / raw)
To: Sebastian Dröge; +Cc: linux-kernel
On Wed, 14 Nov 2001, Sebastian Dröge wrote:
> After two or three days the used swap space is around 3 MB. I just
> played MP3s; no X and no other "big" applications were running.
> This isn't really a problem, but it doesn't look good. Swap fills up
> just because of the cache :(
"This isn't really a problem" is a good analysis of the
situation, since 2.4.14/15 often have data duplicated
in both swap and RAM, so being 3MB into swap doesn't mean
3MB of your programs isn't in RAM.
If you take a look at /proc/meminfo, you'll find a field
called "SwapCached:", which tells you exactly how much
of your memory is duplicated in both swap and RAM.
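That field can be turned into a quick "real" swap-pressure figure. A small sketch against /proc/meminfo (field names as Rik describes; all values are in kB):

```shell
#!/bin/sh
# SwapCached counts pages present in both swap and RAM, so actual swap
# pressure is roughly (SwapTotal - SwapFree) - SwapCached.
awk '/^SwapTotal:/  { total  = $2 }
     /^SwapFree:/   { free   = $2 }
     /^SwapCached:/ { cached = $2 }
     END {
         used = total - free
         printf "swap used: %d kB, of which duplicated in RAM: %d kB\n", used, cached
         printf "swap not backed by RAM: %d kB\n", used - cached
     }' /proc/meminfo
```

On Sebastian's idle MP3 box this figure, rather than raw swap usage, is the number worth watching.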
regards,
Rik
--
DMCA, SSSCA, W3C? Who cares? http://thefreeworld.net/
http://www.surriel.com/ http://distro.conectiva.com/
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
[not found] <200111141243.fAEChS915731@neosilicon.transmeta.com>
@ 2001-11-14 16:34 ` Linus Torvalds
2001-11-19 18:01 ` Sebastian Dröge
2001-11-19 18:18 ` Simon Kirby
0 siblings, 2 replies; 30+ messages in thread
From: Linus Torvalds @ 2001-11-14 16:34 UTC (permalink / raw)
To: Sebastian Dröge; +Cc: linux-kernel
On Wed, 14 Nov 2001, Sebastian Dröge wrote:
>
> The system has 256 MB RAM and nothing RAM-eating in the background, yet I got
> many buffer-underruns just because of swapping. When I turn swap off,
> everything works fine. I think it's something to do with the cache.
Can you gather some statistics for me:
cat /proc/meminfo
cat /proc/slabinfo
vmstat 1
while the worst swapping is going on..
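Those three can be captured together so the state during the worst swapping lands in one log file. A sketch (the duration and log-file name are illustrative; /proc/slabinfo may require root to read, in which case those snapshots are silently skipped):

```shell
#!/bin/sh
# Snapshot /proc/meminfo and /proc/slabinfo once per second for COUNT
# seconds, then append the same number of vmstat samples, all into LOG.
COUNT=${1:-2}            # how many one-second samples to take
LOG=${2:-vm-stats.log}   # hypothetical log file name
i=0
while [ "$i" -lt "$COUNT" ]; do
    { date; cat /proc/meminfo; cat /proc/slabinfo; } >> "$LOG" 2>/dev/null
    i=$((i + 1))
    sleep 1
done
# vmstat may not be installed everywhere; don't fail the capture if so.
vmstat 1 "$COUNT" >> "$LOG" 2>/dev/null || true
```

Run it while the burn is in progress, then post the resulting log.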
> Leaving system 2 alone, just playing MP3s over NFS:
>
> After two or three days the used swap space is around 3 MB. I just played
> MP3s; no X and no other "big" applications were running. This isn't really
> a problem, but it doesn't look good. Swap fills up just because of the cache :(
That's normal and usually good. It's supposed to swap stuff out if it
really isn't needed, and that improves performance. Cache _is_ more
important than swap if the cache is active.
HOWEVER, there's probably something in your system that triggers this too
easily. Heavy NFS usage will do that, for example - as mentioned in
another thread on linux-kernel, the VM doesn't really understand
writebacks and asynchronous reads from filesystems that don't use buffers,
and so sometimes the heuristics get confused simply because NFS activity
can _look_ like page mapping to the VM.
Your MP3-over-NFS case doesn't sound bad, though. 3 MB of swap is perfectly
normal: that tends to be just the idle daemons etc. which really _should_
go to swapspace.
The cdrecord thing is something else, though. I'd love to see the
statistics from that. Although it can, of course, just be another SCSI
allocation strangeness.
Linus
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
@ 2001-11-15 8:38 janne
2001-11-15 9:05 ` janne
0 siblings, 1 reply; 30+ messages in thread
From: janne @ 2001-11-15 8:38 UTC (permalink / raw)
To: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 8490 bytes --]
i've noticed some weirdness too.
i'm using 2.4.15-pre1 on a 1.4 GHz athlon KT266A system with 1 GB of ram.
i first noticed this while copying a large amount of data (10+ gigs)
across two reiserfs partitions; during that copy i uniconified one
mozilla window, and it took about 30 seconds before it finished redrawing
the window and became responsive again.
to me it seems that the latest 2.4 kernels are too eager to swap things
out to make room for cache on machines with _lots_ of memory.
this time i tried the same thing on a freshly rebooted machine.
i uniconified mozilla when i was 35 megs into swap, and this time it took
under 10 secs to become responsive; on the other hand the machine had
only been running a few minutes, so its memory footprint was still
quite small.
imho we should not be swapping so eagerly when there is lots of memory
and already 800+ megs used for cache...
attached is a vmstat log from the start of copying to uniconifying mozilla.
slabinfo and meminfo before copying:
slabinfo - version: 1.1
kmem_cache 58 68 112 2 2 1
tcp_tw_bucket 3 30 128 1 1 1
tcp_bind_bucket 16 112 32 1 1 1
tcp_open_request 0 59 64 0 1 1
inet_peer_cache 0 0 64 0 0 1
ip_fib_hash 9 112 32 1 1 1
ip_dst_cache 71 80 192 4 4 1
arp_cache 2 30 128 1 1 1
urb_priv 1 59 64 1 1 1
blkdev_requests 512 540 128 18 18 1
nfs_read_data 0 0 384 0 0 1
nfs_write_data 0 0 384 0 0 1
nfs_page 0 0 128 0 0 1
dnotify cache 0 0 20 0 0 1
file lock cache 2 42 92 1 1 1
fasync cache 1 202 16 1 1 1
uid_cache 2 112 32 1 1 1
skbuff_head_cache 129 140 192 7 7 1
sock 131 144 832 16 16 2
sigqueue 0 29 132 0 1 1
cdev_cache 286 295 64 5 5 1
bdev_cache 4 59 64 1 1 1
mnt_cache 12 59 64 1 1 1
inode_cache 2853 2870 512 409 410 1
dentry_cache 3957 3990 128 132 133 1
filp 1168 1170 128 39 39 1
names_cache 0 7 4096 0 7 1
buffer_head 19989 20010 128 667 667 1
mm_struct 51 60 192 3 3 1
vm_area_struct 2154 2340 128 73 78 1
fs_cache 50 59 64 1 1 1
files_cache 50 54 448 6 6 1
signal_act 54 57 1344 18 19 1
size-131072(DMA) 0 0 131072 0 0 32
size-131072 0 0 131072 0 0 32
size-65536(DMA) 0 0 65536 0 0 16
size-65536 0 0 65536 0 0 16
size-32768(DMA) 0 0 32768 0 0 8
size-32768 2 2 32768 2 2 8
size-16384(DMA) 0 0 16384 0 0 4
size-16384 2 6 16384 2 6 4
size-8192(DMA) 0 0 8192 0 0 2
size-8192 3 4 8192 3 4 2
size-4096(DMA) 0 0 4096 0 0 1
size-4096 55 60 4096 55 60 1
size-2048(DMA) 0 0 2048 0 0 1
size-2048 8 50 2048 4 25 1
size-1024(DMA) 0 0 1024 0 0 1
size-1024 36 44 1024 9 11 1
size-512(DMA) 0 0 512 0 0 1
size-512 37 48 512 5 6 1
size-256(DMA) 0 0 256 0 0 1
size-256 41 60 256 3 4 1
size-128(DMA) 3 30 128 1 1 1
size-128 568 720 128 19 24 1
size-64(DMA) 0 0 64 0 0 1
size-64 331 354 64 6 6 1
size-32(DMA) 52 59 64 1 1 1
size-32 1044 1062 64 18 18 1
total: used: free: shared: buffers: cached:
Mem: 1054724096 191864832 862859264 0 5967872 77918208
Swap: 1077501952 0 1077501952
MemTotal: 1030004 kB
MemFree: 842636 kB
MemShared: 0 kB
Buffers: 5828 kB
Cached: 76092 kB
SwapCached: 0 kB
Active: 22164 kB
Inactive: 147532 kB
HighTotal: 131008 kB
HighFree: 2044 kB
LowTotal: 898996 kB
LowFree: 840592 kB
SwapTotal: 1052248 kB
SwapFree: 1052248 kB
slabinfo and meminfo after copying:
slabinfo - version: 1.1
kmem_cache 58 68 112 2 2 1
tcp_tw_bucket 3 30 128 1 1 1
tcp_bind_bucket 16 112 32 1 1 1
tcp_open_request 0 59 64 0 1 1
inet_peer_cache 0 0 64 0 0 1
ip_fib_hash 9 112 32 1 1 1
ip_dst_cache 8 60 192 3 3 1
arp_cache 2 30 128 1 1 1
urb_priv 1 59 64 1 1 1
blkdev_requests 512 540 128 18 18 1
nfs_read_data 0 0 384 0 0 1
nfs_write_data 0 0 384 0 0 1
nfs_page 0 0 128 0 0 1
dnotify cache 0 0 20 0 0 1
file lock cache 3 42 92 1 1 1
fasync cache 1 202 16 1 1 1
uid_cache 2 112 32 1 1 1
skbuff_head_cache 129 140 192 7 7 1
sock 133 144 832 16 16 2
sigqueue 0 29 132 0 1 1
cdev_cache 21 118 64 2 2 1
bdev_cache 5 59 64 1 1 1
mnt_cache 13 59 64 1 1 1
inode_cache 1501 2408 512 344 344 1
dentry_cache 767 3210 128 107 107 1
filp 1262 1290 128 43 43 1
names_cache 0 2 4096 0 2 1
buffer_head 226567 227160 128 7569 7572 1
mm_struct 54 60 192 3 3 1
vm_area_struct 2292 2430 128 77 81 1
fs_cache 53 59 64 1 1 1
files_cache 53 63 448 6 7 1
signal_act 57 63 1344 20 21 1
size-131072(DMA) 0 0 131072 0 0 32
size-131072 0 0 131072 0 0 32
size-65536(DMA) 0 0 65536 0 0 16
size-65536 0 0 65536 0 0 16
size-32768(DMA) 0 0 32768 0 0 8
size-32768 2 2 32768 2 2 8
size-16384(DMA) 0 0 16384 0 0 4
size-16384 2 3 16384 2 3 4
size-8192(DMA) 0 0 8192 0 0 2
size-8192 3 4 8192 3 4 2
size-4096(DMA) 0 0 4096 0 0 1
size-4096 66 67 4096 66 67 1
size-2048(DMA) 0 0 2048 0 0 1
size-2048 8 10 2048 4 5 1
size-1024(DMA) 0 0 1024 0 0 1
size-1024 39 40 1024 10 10 1
size-512(DMA) 0 0 512 0 0 1
size-512 38 48 512 5 6 1
size-256(DMA) 0 0 256 0 0 1
size-256 41 45 256 3 3 1
size-128(DMA) 3 30 128 1 1 1
size-128 568 600 128 19 20 1
size-64(DMA) 0 0 64 0 0 1
size-64 305 354 64 6 6 1
size-32(DMA) 52 59 64 1 1 1
size-32 400 1121 64 19 19 1
total: used: free: shared: buffers: cached:
Mem: 1054724096 1048174592 6549504 0 5664768 929890304
Swap: 1077501952 36356096 1041145856
MemTotal: 1030004 kB
MemFree: 6396 kB
MemShared: 0 kB
Buffers: 5532 kB
Cached: 892792 kB
SwapCached: 15304 kB
Active: 57556 kB
Inactive: 920424 kB
HighTotal: 131008 kB
HighFree: 1064 kB
LowTotal: 898996 kB
LowFree: 5332 kB
SwapTotal: 1052248 kB
SwapFree: 1016744 kB
[-- Attachment #2: Type: text/plain, Size: 34739 bytes --]
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
1 0 0 0 810816 23988 78708 0 0 273 26 264 1109 15 3 82
1 0 0 0 785476 24016 103036 0 0 12244 20 428 1089 4 13 83
1 0 0 0 737044 24080 149840 0 0 23448 284 593 1118 2 25 73
1 0 0 0 681784 24132 203268 0 0 26780 0 621 1000 3 20 77
0 1 0 0 657956 24156 226304 0 0 11528 0 462 1037 1 14 85
1 0 0 0 602336 24208 280076 0 0 26908 0 609 971 1 27 72
0 1 0 0 557012 24252 323896 0 0 21912 0 557 953 1 18 81
1 0 0 0 505608 24380 373512 0 0 24856 604 644 1062 3 28 69
1 0 0 0 453068 24432 424304 0 0 25368 0 604 979 4 34 62
1 0 0 0 398368 24484 477188 0 0 26524 0 605 1150 2 24 74
1 0 0 0 344076 24544 529668 0 0 26268 0 607 980 3 26 71
1 0 0 0 288688 24596 583220 0 0 26780 0 631 989 1 25 74
1 0 0 0 239892 24672 630364 0 0 23572 688 613 1036 1 28 71
1 0 0 0 184104 24728 684296 0 0 27036 0 657 1351 5 31 64
1 0 0 0 132204 24776 734476 0 0 25112 0 604 1251 5 31 64
1 0 0 0 80496 24824 784468 0 0 24984 0 594 1016 3 31 66
1 0 0 0 43120 24860 820600 0 0 18068 5888 640 985 2 15 83
1 0 0 756 5852 21264 860756 0 0 23584 23752 842 1012 1 49 50
1 0 0 2292 5284 4408 878268 0 640 20628 21376 757 990 1 28 71
1 0 0 2292 4144 4392 879388 0 0 24728 24704 860 1201 1 45 54
1 0 0 2292 4312 4396 879220 0 0 26008 25984 893 1113 2 46 52
2 0 0 2292 4428 4396 879104 0 0 24088 24064 841 1093 5 29 66
1 0 0 2292 4096 4568 879264 0 0 22040 22216 812 1062 3 40 57
1 0 0 2292 4340 4572 879016 0 0 26012 26112 899 1086 2 44 54
1 0 0 2292 5024 4492 878412 0 0 24600 24576 875 1192 2 42 56
0 1 0 2292 4536 4488 878904 0 0 25544 25472 884 1096 1 44 55
1 0 0 2292 4536 4484 878908 0 0 25580 25600 875 1081 1 37 62
0 1 0 2292 4404 4716 878808 0 0 23044 23336 848 1101 1 38 61
0 1 0 2292 4388 4716 878820 0 0 24492 24576 860 1077 3 36 61
1 0 0 2292 4320 4684 878928 0 0 25112 25088 879 1188 3 35 62
1 0 0 2292 4984 4700 878236 0 0 23296 23296 829 1057 1 35 64
1 0 0 2292 4768 4700 878452 0 0 17428 17408 658 981 1 24 75
0 1 0 2292 4496 4964 878464 0 0 15888 16360 655 1024 1 19 80
1 0 0 2292 4444 4960 878524 0 0 16784 16640 648 999 1 23 76
1 0 0 2292 5060 4964 877908 0 0 17040 17152 660 1140 3 20 77
1 0 0 2292 4508 4944 878472 0 0 17680 17664 674 977 0 22 78
1 0 0 3956 5044 4928 878540 0 520 13716 14216 587 881 3 21 76
1 0 0 4596 5772 5064 877776 0 172 13840 14252 584 915 2 19 79
0 1 0 4596 4188 5064 879340 0 0 18052 17920 706 1265 1 18 81
0 1 0 4596 4448 5064 879052 32 0 16240 16256 776 2401 5 29 66
1 0 0 4596 4864 5068 878644 0 0 17248 17280 703 1381 2 23 75
2 0 0 4596 4504 5100 878932 0 0 11704 11648 524 935 3 16 81
1 0 0 4596 4112 5264 879164 0 0 13600 13832 593 1061 3 24 73
1 0 0 4596 5060 5088 878400 0 0 17936 17920 677 951 2 22 76
2 0 0 4596 4344 5088 879108 0 0 16788 16768 642 1054 2 22 76
1 0 0 4596 5076 5088 878392 0 0 17552 17536 674 1120 0 21 79
1 0 0 4596 4848 5088 878632 0 0 17552 17536 666 1011 4 24 72
1 0 0 4596 4780 5272 878508 0 0 15504 15824 642 1100 1 20 79
1 0 0 4596 4596 5272 878696 0 0 17040 17024 652 962 4 18 78
2 0 0 4596 4552 5268 878732 0 0 16784 16768 646 1111 3 26 71
1 0 0 4596 4772 5044 878736 0 0 17172 17152 650 938 1 25 74
1 0 0 4596 4796 5044 878712 0 0 17936 17920 670 995 1 26 73
2 0 0 4596 4788 5228 878548 0 0 15504 15828 635 994 2 21 77
1 0 0 4596 4748 5228 878580 0 0 16912 16896 663 956 1 22 77
1 0 0 4596 4336 5228 878996 0 0 17296 17280 658 1107 3 18 79
1 0 0 4596 5060 5216 878276 0 0 16532 16512 638 935 3 20 77
0 1 0 4596 4328 5220 879012 0 0 16788 16768 633 973 2 23 75
1 0 0 4596 4344 5096 879120 0 0 16016 16256 636 966 1 18 81
1 0 0 4596 4656 5096 878808 0 0 16912 16896 637 951 2 21 77
1 0 0 4596 4532 5100 878920 0 0 16400 16384 639 1109 3 24 73
1 0 0 4596 4104 5100 879336 0 0 17296 17280 663 1005 2 25 73
1 0 0 6516 4100 5096 879788 0 380 14484 14844 598 935 1 27 72
1 0 0 7028 4616 5092 879448 0 120 14096 14460 596 940 3 13 84
1 0 0 7028 5292 5088 878816 0 0 16912 16896 661 908 1 24 75
1 0 0 7028 4204 5092 879868 0 0 13584 13696 578 1138 4 25 71
1 0 0 7028 4324 5096 879768 0 0 17040 16896 663 921 1 20 79
1 0 0 7028 4272 5060 879840 0 0 17684 17664 681 1002 3 25 73
0 1 1 7028 4992 4916 879264 0 0 15520 15616 625 912 1 17 82
2 0 0 7028 4480 5124 879580 0 0 10216 10320 515 923 2 15 83
1 0 0 7028 4296 5124 879764 0 0 16940 17024 647 1147 1 27 72
1 0 0 7028 4504 5124 879544 0 0 17808 17792 673 1011 1 18 81
1 0 0 7028 4268 5120 879788 0 0 17040 17024 648 1005 2 16 82
1 0 0 7028 4636 4940 879592 0 0 17300 17280 652 962 1 23 76
1 0 0 7028 4100 5152 879916 0 0 14604 14988 615 953 3 17 80
0 1 0 7028 4748 5152 879280 0 0 17608 17536 670 1003 2 20 78
2 0 0 7028 4256 5152 879764 0 0 16604 16640 641 953 2 18 80
1 1 0 7028 4736 5156 879284 0 0 17168 17152 655 1017 5 20 75
1 0 0 7028 4836 4968 879376 0 0 17040 17024 652 966 1 24 75
1 0 0 7028 4484 5156 879532 0 0 14608 14936 611 1072 3 18 79
1 0 0 7028 4584 5156 879440 0 0 17936 17920 678 997 2 18 80
1 0 0 7028 4804 5140 879232 0 0 17808 17792 673 1015 3 21 76
1 0 0 7028 4884 5144 879148 0 0 16916 16896 643 963 1 19 80
1 0 0 7028 4468 5008 879692 0 0 17552 17536 678 985 4 24 72
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
2 0 0 7028 4108 5148 879912 0 0 14608 14856 592 1071 3 23 74
1 0 0 7028 5032 5152 878984 0 0 16400 16384 639 916 1 18 81
1 0 0 7028 4612 5156 879400 0 0 16656 16640 642 948 2 21 77
0 1 0 8436 5916 5156 878424 0 3372 12816 16172 573 879 2 16 82
1 0 0 10100 5488 5156 879008 0 0 15632 15616 615 953 2 19 79
1 0 0 10100 5288 5152 879212 0 0 14992 15240 646 1210 7 19 74
0 1 0 10100 5144 5156 879336 0 0 16392 16256 732 1565 5 21 74
1 0 0 10100 4760 5156 879720 0 0 12312 12416 533 851 1 17 82
0 2 0 10100 5200 5212 878996 0 0 9916 9856 482 851 3 14 83
1 1 0 10100 4816 5312 879260 0 0 3432 3328 316 610 1 8 91
1 0 0 10100 5072 5584 878888 0 0 4568 4952 380 921 3 9 88
1 0 0 10100 4768 5584 879180 0 0 17172 17152 668 981 2 22 76
0 1 0 10100 4616 5380 879540 0 0 17672 17536 674 1023 3 21 76
1 0 0 10100 4572 5380 879580 0 0 16924 17024 658 1010 4 25 71
1 1 0 10100 4500 5472 879356 0 0 5108 4992 364 852 2 4 94
1 2 0 10100 4276 5824 879236 0 0 3764 3900 367 949 1 9 90
1 1 0 10100 4792 5920 878604 0 0 3432 3328 317 651 1 7 92
0 2 0 10100 4580 6020 878740 0 0 3304 3200 311 680 1 4 95
1 1 0 10100 5232 6124 877976 0 0 3432 3328 316 665 1 5 94
0 2 0 10100 5068 6232 877980 0 0 3308 3200 320 706 4 6 90
1 2 0 10100 4832 6408 877988 0 0 3312 3704 322 848 2 7 91
1 2 0 10100 4524 6508 878124 0 0 3308 2816 306 627 3 4 93
0 2 0 10100 4748 6612 877748 0 0 3308 3200 312 688 1 5 94
0 2 0 10100 4412 6720 877880 0 0 3308 3200 312 754 1 5 94
1 1 0 10100 4256 6824 877884 0 0 3304 3200 314 745 2 7 91
1 2 1 10100 4920 7256 876668 0 0 2884 2968 368 1058 4 7 89
1 2 1 10100 4768 7380 876656 0 0 3840 32168 501 726 1 7 92
1 2 1 10100 4984 7496 876244 0 0 3576 31660 511 840 2 1 97
0 2 1 10100 4244 7676 876764 0 0 5176 27776 537 779 2 5 93
1 1 1 10100 4268 7596 876728 0 0 3976 27776 516 751 1 5 94
1 2 1 10100 4724 7712 876092 0 0 3448 31656 503 924 4 5 91
0 2 0 10100 5028 7844 875592 0 0 3848 7684 400 803 3 8 89
1 1 0 10100 4212 7948 876240 0 0 3564 0 268 704 1 2 97
1 2 0 10100 4980 8056 875352 0 0 3312 0 263 691 4 1 95
1 2 0 10100 4196 8164 875996 0 0 3564 0 267 623 1 6 93
0 2 0 10100 5060 8272 874976 0 0 3440 0 266 900 3 3 94
1 1 1 10100 4304 8528 875368 0 0 4648 27940 484 812 1 7 92
0 2 1 10100 5196 8652 874236 0 0 3976 32276 530 814 4 5 91
0 2 1 10100 4540 8844 874620 0 0 3136 23684 481 765 1 2 97
3 1 1 10100 4316 8972 874684 0 0 4356 23808 497 780 3 6 91
2 1 0 10100 4340 9088 874476 0 0 3704 3564 344 859 4 9 87
1 2 0 10100 4988 9196 873620 0 0 3440 0 268 757 1 6 93
1 1 0 10100 4240 9300 874140 0 0 3564 0 271 795 3 5 92
1 1 0 10100 4916 9404 873252 0 0 3564 0 265 776 1 12 87
1 2 0 10100 4676 9508 873260 0 0 3440 0 265 785 2 4 94
0 2 1 10100 4540 9780 873008 0 0 3208 23832 470 949 3 10 87
0 2 1 10100 4852 9896 872496 0 0 3572 23808 451 796 2 12 86
0 2 0 10100 4568 10036 872508 0 0 3856 22800 513 833 5 7 88
0 2 0 10100 5000 10140 871876 0 0 3436 0 269 748 2 5 93
0 1 0 10100 5016 10236 871888 0 0 3432 0 267 714 1 5 94
0 1 0 10100 4996 10048 872108 0 0 17424 0 462 1100 1 23 76
0 1 0 10100 5028 10048 872076 0 0 17480 0 463 948 1 20 79
2 1 1 10100 4512 10208 872428 0 0 13528 16108 519 933 1 17 82
1 0 0 10100 4504 10208 872436 0 0 14092 1732 461 894 5 18 77
1 0 0 10100 4148 10208 872792 0 0 17300 0 460 929 1 23 76
1 0 0 10100 4228 10092 872816 0 0 17012 0 483 1087 3 25 72
1 0 0 10100 4408 10096 872632 0 0 18580 0 481 978 1 22 77
1 0 0 10100 4144 10096 872900 0 0 19092 0 489 983 1 24 75
0 1 0 10740 7024 10408 870512 0 432 8716 21096 583 863 1 9 90
1 0 0 12020 5048 10408 875912 0 3568 12172 3568 425 807 2 17 81
0 1 0 12404 5416 10272 875724 0 0 18880 0 487 1093 2 24 74
1 0 0 12404 5440 10272 875708 0 0 19048 0 486 979 2 22 76
1 0 0 12404 5336 10272 875804 0 0 18836 0 481 1004 3 18 79
0 1 1 12404 4568 10128 876712 0 0 15120 16368 546 999 3 21 76
1 0 0 12404 4128 10120 877148 0 0 17684 2944 548 929 3 22 75
1 0 0 12404 4852 10120 876424 0 0 15012 0 432 1052 4 22 75
2 0 0 12404 5076 10120 876204 0 0 19344 0 497 1012 2 25 73
1 0 0 12404 4776 9408 877208 0 0 18580 0 483 989 4 21 75
1 0 0 12404 4224 8756 878412 0 0 18964 6272 566 992 2 23 75
0 1 0 12404 4676 8172 878544 0 0 11660 13632 501 860 1 14 85
1 0 0 12404 4528 7608 879256 0 0 16400 14720 633 1050 3 23 74
1 0 0 12404 4244 6948 880204 0 0 17552 17536 681 980 1 34 65
0 1 0 12404 4552 6268 880576 0 0 18480 18432 708 1029 2 30 68
1 0 0 12404 5060 5524 880808 0 0 18164 18176 702 1002 3 29 68
1 0 1 12404 4088 5512 881796 0 0 15760 16296 686 1038 4 23 73
1 0 0 12404 4820 5508 881068 0 0 18064 18048 695 1001 4 27 69
0 1 0 12404 4544 5512 881336 0 0 17044 16896 680 962 4 24 72
1 0 0 12404 4340 5352 881708 0 0 19600 19584 742 1054 1 34 65
1 0 0 12404 4416 5356 881628 0 0 18452 18432 722 984 2 31 67
1 0 0 12404 5040 5480 880868 0 0 16020 16408 657 984 2 26 72
1 0 0 12404 4468 5480 881420 0 0 18320 18304 704 1168 5 29 66
1 0 0 12404 4324 5480 881584 0 0 17940 17920 691 1002 1 29 70
1 0 0 13812 5772 5176 881472 0 0 16668 16640 653 950 4 26 70
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
2 0 0 14324 4884 5184 885932 0 2304 15888 18176 653 911 2 20 78
0 2 0 14964 5356 5536 885060 0 252 9012 9780 528 934 1 14 85
0 2 0 14964 5344 5648 884936 0 0 3440 3328 316 791 1 7 92
1 0 0 14964 5652 5700 884700 32 0 7648 7552 426 3299 9 15 76
1 0 0 14964 4816 5700 885512 32 0 17204 17024 670 967 4 23 73
1 0 0 14964 5112 5724 885192 0 0 19900 19968 764 1030 2 37 61
0 1 0 14964 4680 5936 885388 0 0 17084 17816 771 1129 2 31 67
1 0 0 14964 4568 5916 885516 0 0 25108 25088 861 1210 4 39 57
1 0 0 14964 4624 5916 885376 0 0 23196 23040 818 3592 11 31 58
1 0 0 14964 5076 5916 884904 24 0 23856 23936 840 1244 3 31 66
1 0 0 14964 4876 5768 885248 0 0 23704 23552 825 1043 3 39 58
0 1 0 14964 4800 5948 885144 0 0 20376 20832 776 945 2 36 62
1 0 0 14964 4104 5952 885840 0 0 19860 19712 730 970 1 32 67
1 0 0 14964 4900 5952 885040 0 0 24216 24320 843 1164 3 39 58
0 1 0 14964 4552 5644 885688 0 0 26268 26240 913 1236 2 35 63
1 0 0 14964 4756 5648 885480 0 0 24984 24960 879 1289 2 53 45
0 1 1 14964 4240 5832 885808 0 0 22300 22332 814 1245 1 30 69
1 0 0 14964 4836 5824 885212 0 0 20500 20764 785 1367 4 42 54
1 0 0 14964 4556 5676 885620 0 0 18708 18688 749 1432 6 32 62
1 0 0 14964 5116 5676 885064 0 0 25240 25216 875 1364 3 44 53
1 0 0 16372 4284 5664 885968 0 204 21840 21964 804 1435 6 33 61
0 1 1 17652 5272 5828 887356 0 2340 17884 20304 713 1330 4 29 67
1 0 0 17652 5064 5316 888064 32 0 19636 19860 783 1557 11 35 54
1 0 0 17652 5348 5308 887812 0 0 18580 18560 712 1362 8 31 61
1 0 0 17652 4800 5308 888324 32 0 22596 22528 820 1279 6 35 59
1 0 0 17652 5128 4816 888480 0 0 25752 25728 895 1058 1 41 58
0 1 0 17652 4240 4816 889364 0 0 25296 25216 875 1072 2 39 59
1 0 0 17652 4628 5032 888764 0 0 15100 15464 654 1013 3 19 78
2 0 0 17652 4124 5036 889276 0 0 23836 23808 856 1079 4 35 61
0 1 0 17652 4792 5032 888616 0 0 25936 25984 892 1120 2 46 52
1 0 0 17652 4788 4852 888792 0 0 23528 23552 847 1170 2 35 63
1 0 0 17652 4220 4852 889352 0 0 24856 24832 874 1167 4 29 67
0 1 0 17652 4128 5028 889272 0 0 20764 20952 807 1148 4 31 65
1 1 0 17652 4224 5028 889156 32 0 24376 24320 847 1040 3 42 55
1 0 0 17652 4212 5044 889140 0 0 20344 20352 756 1017 3 39 58
0 1 0 17652 4144 4948 889304 0 0 17940 17792 683 979 1 23 76
1 0 0 17652 4912 4860 888628 0 0 18704 18688 700 996 1 31 68
1 0 0 17652 4956 5040 888400 0 0 15248 15712 651 1071 3 21 76
1 0 0 17652 4528 5044 888824 0 0 18964 18944 704 1033 3 29 68
1 0 0 17652 4236 5040 889120 0 0 19216 19200 709 992 1 32 67
1 0 0 18804 5356 5044 888268 0 352 13588 13920 567 879 2 21 77
0 2 0 19700 4588 5068 889812 92 32 12032 11936 526 833 2 15 83
0 1 0 19828 4924 5040 890936 0 1324 15472 16940 657 1053 1 18 81
1 0 0 19828 4348 5040 891532 0 0 17200 17280 657 981 2 20 78
1 0 0 19828 4956 5036 890892 32 0 18224 18176 678 989 3 26 71
1 0 0 19828 5276 5020 890628 0 0 18836 18816 705 985 2 31 67
2 0 0 19828 5120 5020 890780 0 0 11916 11904 514 788 2 16 82
1 0 0 19828 4772 5204 890940 0 0 15376 15720 656 997 3 21 76
1 0 0 19828 4176 5200 891536 0 0 18960 18944 699 1016 1 21 78
1 0 0 19828 4096 4988 891832 0 0 18836 18816 697 1020 1 22 77
1 0 0 19828 4256 4984 891656 0 0 18836 18816 702 983 1 23 76
1 0 0 19828 4240 4980 891696 0 0 19472 19456 712 1006 2 34 64
0 1 0 19828 4676 5172 891044 0 0 16612 16876 672 1001 4 21 75
1 0 0 19828 4332 5172 891388 0 0 16576 16512 645 961 6 21 73
2 0 0 19828 4584 5168 891160 0 0 19856 19968 743 1031 2 23 75
1 0 0 19828 4804 4996 891112 0 0 18324 18304 694 998 3 25 72
1 0 0 19828 4856 4996 891056 0 0 18964 18944 703 987 1 30 69
1 0 0 19828 4892 5148 890900 0 0 16272 16552 648 1142 3 26 71
0 1 0 19828 4552 5148 891240 0 0 15516 15488 613 918 4 29 67
1 0 0 19828 4360 5148 891432 0 0 19600 19456 722 1024 2 28 70
1 0 0 19828 4592 5148 891200 0 0 18452 18560 691 992 1 21 78
1 0 0 19828 4296 4968 891676 0 0 19092 19072 708 987 1 20 79
1 0 0 19828 4368 5196 891372 0 0 15260 15672 648 1253 3 19 78
1 0 0 19828 4244 5200 891488 0 0 17428 17408 670 1027 1 20 79
0 1 1 20724 6168 5248 889420 0 88 13480 13528 586 1301 4 16 80
0 2 0 22004 5952 5272 889916 0 3044 8868 11876 511 851 6 17 77
1 2 0 22004 5116 5424 890620 0 0 4760 4608 370 703 2 9 89
1 1 0 22004 5396 5544 890364 0 0 10428 10776 536 3510 11 12 77
1 0 0 22260 6128 5548 890964 0 0 24728 24704 882 1052 3 36 61
1 0 0 22260 5344 5548 891756 0 0 25240 25216 887 1044 2 39 59
1 0 0 22260 5568 5544 891532 0 0 25496 25472 886 1052 1 37 62
2 0 0 22260 4788 5360 892484 0 0 24088 24064 846 1020 3 47 50
1 0 0 22260 4280 5536 892820 0 0 20116 20424 752 1164 9 35 56
1 0 0 22260 4872 5532 892236 0 0 25880 25856 910 1060 2 37 61
0 1 0 22260 4816 5536 892284 0 0 24604 24704 879 1054 2 41 57
1 0 0 22260 4292 5344 892996 0 0 24984 24832 876 1043 1 36 63
1 0 0 22260 4840 5344 892448 0 0 25368 25472 892 1057 2 37 61
2 0 0 22260 4380 5556 892704 0 0 21140 21408 792 1226 3 37 60
2 0 0 22260 4372 5560 892700 0 0 22680 22656 808 1027 6 28 66
1 0 0 22260 4228 5396 893012 0 0 24604 24704 858 1037 1 42 57
2 0 0 22260 4572 5392 892672 0 0 20372 20352 760 962 3 35 62
1 0 0 22260 5080 5392 892164 0 0 24984 24832 883 1026 1 44 55
1 0 0 22260 4348 5564 892724 0 0 16400 16728 680 1072 1 30 69
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
1 1 0 22260 4764 5336 892532 0 0 21928 22016 793 1031 3 36 61
1 0 0 23028 5608 5292 891816 0 0 22028 21888 807 1031 3 37 60
1 0 0 24820 6192 5092 894428 0 2552 13708 16376 625 824 3 24 73
1 0 0 25076 5308 4844 895516 32 0 23608 23424 824 1000 2 38 60
1 0 0 25076 4976 5016 895672 0 0 22176 22592 818 1161 2 39 59
1 0 0 25076 5756 5016 894896 0 0 24856 24704 866 1058 2 38 60
1 0 0 25076 5132 5016 895528 0 0 23576 23552 845 1069 2 42 56
2 1 0 25076 5680 5032 894920 32 0 22088 22144 805 1072 3 40 57
1 0 0 25076 4644 4876 896096 0 0 16848 16640 661 1461 6 24 70
1 0 0 25076 4388 5308 895916 0 0 19992 20728 829 1878 4 36 60
1 0 0 25076 5048 5308 895256 0 0 24856 24960 867 1115 1 39 60
1 0 0 25076 4344 5308 895964 0 0 23960 23808 858 1106 2 44 54
1 0 0 25076 4964 5324 895324 0 0 21976 21888 803 1035 0 40 60
1 0 0 25076 4380 5108 896120 0 0 24856 24832 860 1050 2 33 65
1 0 0 25076 4668 5344 895592 0 0 19080 19800 743 1194 2 36 62
1 0 0 25076 4248 5344 896016 0 0 15316 15232 647 963 2 23 75
1 0 0 25076 4832 5336 895440 0 0 18192 18048 741 1203 2 37 61
1 0 0 25076 4424 5340 895844 0 0 18324 18304 705 954 1 30 69
1 0 0 25076 4340 5344 895916 0 0 18964 18944 734 1072 4 21 75
0 1 0 25076 4332 5328 895916 28 0 16300 16584 671 1127 3 34 63
1 0 0 25076 4848 5324 895408 0 0 17936 17920 694 1046 3 27 70
1 0 0 25076 5076 5324 895180 0 0 18580 18560 710 998 3 22 75
1 0 0 26996 5748 5308 895380 0 524 15888 16524 686 1138 4 24 72
1 0 0 26996 5856 5312 896820 0 2432 11660 13952 531 818 2 23 75
1 0 0 27252 5616 5448 896960 0 0 16400 16776 666 1088 1 28 71
1 0 0 27252 5988 5452 896588 0 0 17480 17408 678 1005 2 21 77
0 1 0 27252 5456 5284 897284 0 0 19316 19200 720 1057 2 24 74
1 0 0 27252 5392 5276 897348 0 0 18992 19072 710 1017 1 29 70
1 1 0 27252 5476 5280 897248 0 0 17424 17408 680 1060 1 21 78
1 0 0 27252 6024 5480 896544 0 0 14736 15128 636 1087 4 22 74
1 0 0 27252 5468 5456 897124 0 0 19096 19072 735 1018 1 31 68
0 1 0 27252 5060 5456 897484 0 0 19092 18944 728 1038 2 22 76
1 0 0 27252 5180 5016 897820 0 0 19472 19456 771 1210 1 29 70
1 0 0 27252 4796 5020 898184 0 0 18068 18176 697 1072 4 26 70
1 1 0 27252 5020 5244 897748 0 0 18744 19080 738 1076 1 28 71
2 0 0 27252 4576 5228 898208 0 0 23832 23680 840 1016 3 33 64
0 1 0 27252 4268 5228 898524 0 0 24236 24320 867 1048 4 42 54
1 0 0 27252 4864 5000 898152 0 0 25348 25216 870 1067 1 35 64
1 0 0 27252 4320 4992 898700 0 0 22804 22912 812 1214 6 34 60
2 0 0 27252 4208 5192 898612 0 0 21528 21904 797 1057 2 41 57
0 1 0 27252 4476 5188 898352 0 0 23516 23424 816 1049 1 43 56
1 0 0 27252 5104 5032 897908 0 0 24660 24704 853 1088 1 37 62
1 0 0 27252 4932 5028 898084 0 0 23956 23936 840 1040 2 43 55
1 0 0 29172 5696 5028 897964 0 156 18456 18588 711 1232 6 35 59
1 0 0 29172 5500 5204 900280 0 1128 8072 9448 461 749 2 14 84
1 0 0 29428 5368 5068 900512 32 4 22968 23044 814 1011 2 35 63
1 0 0 29428 5328 5068 900548 0 0 24856 24704 864 1047 1 43 56
1 0 0 29428 5224 5072 900624 0 0 24604 24704 849 1041 1 41 58
1 0 0 29428 4396 5072 901460 0 0 22936 22912 809 1189 3 40 57
1 0 0 29428 4132 5044 901740 0 0 21908 22224 799 1002 2 38 60
1 0 0 29428 5240 5040 900644 0 0 23320 23296 821 1015 3 38 59
1 0 0 29428 5076 5040 900808 0 0 24856 24832 868 1043 1 46 53
0 1 0 29428 5112 5040 900772 0 0 24604 24448 847 1053 1 37 62
5 0 0 29428 4896 4824 901196 0 0 23448 23552 839 1338 2 35 63
1 0 0 29428 4392 5072 901448 0 0 22292 22616 913 1871 2 42 56
0 2 0 29428 4608 5084 901208 0 0 21384 21376 768 1056 3 38 59
2 0 0 29428 4836 5084 900992 0 0 22448 22528 804 1028 3 35 62
1 0 0 29428 4556 5088 901272 0 0 23704 23680 826 1040 2 42 56
1 0 0 29428 4196 4896 901820 0 0 24216 24064 842 1178 2 46 52
0 1 0 29428 4288 5084 901536 0 0 18984 19408 722 948 1 29 70
0 1 0 29428 4572 5104 901232 0 0 21916 21888 801 1088 3 37 60
1 0 0 29428 4356 5104 901448 0 0 18580 18560 697 1008 1 23 76
1 0 0 29428 4496 5096 901312 0 0 18832 18816 707 1036 2 25 73
2 0 0 30196 4948 4936 902228 0 1040 15380 16400 620 1073 6 20 74
1 0 0 30580 5772 5112 901280 0 0 14484 14688 599 903 1 16 83
1 0 0 30580 5884 5112 901168 0 0 18940 18944 709 1015 2 22 76
1 0 0 30580 6136 5116 900912 0 0 19112 19200 704 965 1 22 77
1 0 0 30580 4696 5108 902332 0 0 18200 18176 682 978 1 28 71
1 0 0 30580 4176 5112 902840 0 0 18960 18816 703 1141 2 21 77
2 0 0 30580 4240 5288 902600 0 0 12072 12448 546 893 5 16 79
1 0 0 30580 4852 5116 902172 0 0 18708 18688 688 991 1 27 72
1 0 0 30580 4216 5112 902812 0 0 18708 18688 704 990 1 18 81
1 0 0 30580 4316 5112 902704 0 0 19856 19840 721 1005 1 27 72
1 0 0 30580 4232 5116 902796 0 0 18580 18560 694 1153 1 29 70
1 0 0 30580 4828 5240 902076 0 0 17808 18064 684 1030 2 24 74
1 0 0 30580 4756 5248 902108 24 0 18092 18048 682 1024 2 24 74
0 1 0 30580 4268 4996 902848 0 0 18512 18432 692 991 3 28 69
1 0 0 30580 4828 4992 902296 0 0 17624 17664 667 1008 4 26 70
1 0 0 30580 5016 4988 902112 0 0 18844 18816 698 1106 1 23 76
0 1 0 30580 4856 5404 901856 0 0 11648 12376 590 980 2 11 87
1 0 0 30580 4812 5408 901896 0 0 18836 18816 702 1003 1 21 78
1 0 0 30580 4328 5404 902384 0 0 18964 18944 714 1037 2 25 74
0 1 0 30580 4380 5216 902516 0 0 19436 19328 715 1055 1 27 72
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
3 0 0 30580 4660 5216 902232 0 0 17976 18048 675 1161 6 24 70
1 0 0 30580 4348 5380 902384 0 0 16660 16840 664 978 2 29 69
1 0 0 30580 4744 5376 901900 0 0 19220 19328 707 1004 1 18 81
0 1 0 31860 6256 5376 901364 0 2144 15508 17632 652 947 2 19 79
1 0 0 32244 5200 5380 902560 0 0 18064 17920 686 1115 2 22 76
1 0 0 32500 4676 5200 903268 0 0 16784 16896 645 1107 3 28 69
1 0 0 32500 4908 5444 902792 0 0 14244 14760 628 1121 2 26 72
1 0 0 32500 5400 5444 902316 0 0 18704 18688 740 1185 4 28 68
1 0 0 32500 5520 5444 902192 0 0 18068 17920 708 1138 4 26 70
0 1 0 32500 5640 5448 902076 0 0 19220 19200 739 1214 3 28 69
1 0 0 32500 5092 5272 902784 0 0 18192 18176 716 1196 3 30 67
1 0 0 32500 5888 5640 901612 0 0 17176 17920 747 1242 1 33 66
1 0 0 32500 5252 5644 902236 0 0 19092 18944 773 1566 2 30 68
1 0 0 32500 4764 5644 902708 0 0 18836 18944 778 1502 2 32 66
0 1 0 32500 5232 5644 902136 92 0 16108 15872 682 1329 7 20 73
0 2 0 32500 4360 5504 902896 248 0 16112 15872 675 1238 1 27 72
1 0 0 32500 4384 5644 902532 232 0 12556 12568 575 1046 2 19 79
1 0 0 32500 4672 5664 902224 0 0 16916 16896 709 1098 1 25 74
1 0 0 32500 4800 5660 902100 0 0 18964 18944 736 1140 2 17 81
1 0 0 32500 5016 5656 901888 0 0 20116 20096 763 1216 2 26 72
0 1 0 32500 4184 5248 903124 0 0 19224 19200 736 1270 3 31 66
1 0 0 32500 4464 5608 902476 0 0 17168 17816 788 1271 7 23 70
1 0 0 32500 4280 5604 902668 0 0 18832 18816 739 1082 3 28 69
1 0 0 32500 4240 5604 902708 0 0 19988 19968 766 1154 2 31 67
0 2 0 32500 5096 5612 901340 0 0 18720 18688 718 1090 1 30 69
1 2 0 32500 4688 5660 901940 0 0 3716 3584 325 798 3 7 90
0 2 1 32500 4708 5700 901716 0 0 3780 3584 323 671 2 6 92
1 2 0 32500 5016 5892 900936 28 0 2880 3160 360 917 5 2 93
1 0 0 32500 4844 5892 900540 0 0 17336 17152 692 1320 8 27 65
1 0 0 32500 4596 5892 900792 0 0 17424 17408 694 1098 2 23 75
0 1 0 33908 2572 5900 903636 32 484 14264 14820 636 983 7 23 70
1 0 0 33908 4736 5900 901484 0 444 13456 13884 560 843 2 21 77
1 0 0 34164 4636 6232 901260 0 280 11672 12500 581 946 1 17 82
0 1 0 34164 4660 5996 901480 0 0 19860 19840 754 929 1 23 76
0 1 0 34164 4472 5996 901676 0 0 19956 19968 759 917 2 31 67
1 0 0 34164 4600 5992 901544 0 0 20016 19968 757 1082 3 33 64
1 0 0 34164 4196 5992 901932 0 0 20116 20096 757 954 1 27 72
1 0 0 34164 3928 6532 901664 0 0 15248 16216 746 1094 1 21 78
1 0 0 34164 4760 6168 901212 0 0 20116 20096 789 1067 4 20 76
1 0 0 34164 3920 6172 902004 32 0 19764 19584 764 1089 3 27 70
0 1 0 34164 3840 6176 902132 0 0 18228 18432 774 1213 4 28 68
1 0 0 34164 3952 6172 902116 32 0 17936 17792 705 981 6 30 64
1 0 0 34164 4268 6456 901644 0 0 17680 18456 810 1375 2 25 73
1 0 0 34164 4060 6444 901856 0 0 19608 19584 761 1092 1 33 66
1 0 0 34164 4544 6440 901372 0 0 20500 20480 802 1260 3 30 67
0 1 0 34164 4516 6448 901392 0 0 19220 19200 798 1549 2 26 72
1 0 0 34164 4312 6468 901580 0 0 16552 16512 665 939 1 19 80
0 1 0 34164 4452 6300 901612 0 0 18228 18672 742 988 1 39 60
2 0 0 34164 3996 6300 902056 0 0 12908 12800 654 1532 11 24 65
1 0 0 34164 3904 6300 902148 0 0 19988 19968 837 1296 7 30 63
1 0 0 34164 4008 6296 902056 0 0 15504 15616 636 1026 1 22 77
1 0 0 34164 4060 5828 902484 0 0 19348 19200 820 1915 2 33 65
0 1 0 34164 3956 5972 902448 0 0 13460 13704 642 1411 4 16 80
2 0 0 34164 4120 5972 902316 0 0 18584 18560 753 1497 3 29 68
1 0 0 35188 5380 5968 902272 0 1600 16536 18240 764 1790 4 23 74
1 0 0 35444 5080 5640 903272 0 0 19988 19968 793 1593 2 29 69
1 0 0 35444 4380 5636 903964 0 0 20116 20096 821 1592 1 28 71
1 2 1 35444 5352 5684 902728 136 0 12356 12172 554 1293 1 20 79
1 0 0 35444 5640 5868 902116 88 0 14956 15204 653 1039 2 30 68
1 0 0 35444 5412 5872 902348 0 0 19352 19200 838 1532 9 35 56
2 0 0 35444 6068 5892 901672 0 0 15972 16000 755 1594 13 17 70
2 0 0 35444 6032 5340 902172 96 0 18288 18176 781 1381 5 25 70
1 0 0 35444 5212 5340 902984 0 0 19092 19072 743 1167 4 24 72
1 0 0 35444 6008 5572 901984 0 0 17168 17472 706 1006 2 39 59
1 0 0 35444 5460 5572 902524 0 0 20756 20864 792 1245 2 33 65
1 0 0 35444 4832 5572 903180 0 0 19988 19968 784 1444 2 37 61
1 1 0 35444 5292 5572 902104 588 0 11096 10496 515 1328 3 19 78
1 0 0 35444 4720 5148 902672 424 0 10544 10112 499 801 5 13 82
1 0 0 35444 4604 5408 902508 0 0 17556 18040 735 1222 2 23 75
1 0 0 35444 4556 5408 902568 0 0 19476 19456 759 1422 6 31 63
1 0 0 35444 4340 5416 902068 732 0 6892 6144 427 1876 15 14 71
1 2 0 35444 4660 5392 901284 392 0 12948 12544 610 1825 10 21 69
0 2 0 35444 5152 5388 900680 92 0 17900 17792 729 1691 6 24 70
1 1 0 35444 4736 5520 899916 1052 0 4128 3320 418 1546 3 3 94
0 2 0 35444 4856 5520 898900 852 0 4184 3328 356 1060 5 5 90
0 2 0 35444 4768 5328 898204 968 0 4428 3456 400 1961 5 4 91
0 2 0 35444 5076 5328 897092 812 0 4144 3200 398 1728 7 6 87
0 2 0 35444 5032 5324 896300 844 0 4428 3712 415 1250 1 4 95
0 2 0 35444 4408 5440 895924 888 0 3580 2912 378 1594 6 5 89
0 3 0 35444 4652 5440 894652 1016 0 4348 3328 357 734 7 4 89
0 1 0 35444 4944 5444 893616 740 0 7116 6272 402 640 14 10 76
1 0 0 35444 4884 5440 893680 0 0 19504 19456 834 1910 16 23 61
0 1 0 35444 4792 5444 893772 0 0 16968 17024 665 1017 2 27 71
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
2 0 0 35444 5004 5640 893368 0 0 16864 17272 725 1136 2 22 76
0 1 0 35444 4828 5496 893680 0 0 19732 19712 754 1117 5 35 60
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-15 8:38 janne
@ 2001-11-15 9:05 ` janne
2001-11-15 17:44 ` Mike Galbraith
0 siblings, 1 reply; 30+ messages in thread
From: janne @ 2001-11-15 9:05 UTC (permalink / raw)
To: linux-kernel
To follow up on my previous post: I don't know how the current code
works, but perhaps some logic should be added to check the
percentage of total memory used for cache before swapping out.
Like, if memory is full and less than 10% of total memory is used for
cache, then start swapping out; not if ~90% is already used for cache.. :)
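For illustration, the rule of thumb proposed above could be sketched like this (purely hypothetical; the function name and the fixed 10% threshold are my own, and Mike Galbraith's reply explains why a fixed cutoff like this is not workable in practice):

```c
#include <stdbool.h>

/* Hypothetical sketch of the heuristic suggested above: only reclaim
 * anonymous pages (swap out) once the page cache has already been
 * shrunk below some fraction of total memory.  The 10% figure is the
 * one from the mail, not from any real kernel. */
#define CACHE_MIN_PERCENT 10UL

bool should_swap_out(unsigned long cache_pages, unsigned long total_pages)
{
    /* Cache below 10% of total memory: evicting more cache would hurt,
     * so prefer swapping anonymous pages instead.  Multiply rather than
     * divide to avoid integer-division rounding. */
    return cache_pages * 100 < total_pages * CACHE_MIN_PERCENT;
}
```

With 90% of memory used for cache the sketch keeps evicting cache; only below the 10% mark does it start swapping.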
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-15 9:05 ` janne
@ 2001-11-15 17:44 ` Mike Galbraith
2001-11-16 0:14 ` janne
0 siblings, 1 reply; 30+ messages in thread
From: Mike Galbraith @ 2001-11-15 17:44 UTC (permalink / raw)
To: janne; +Cc: linux-kernel
On Thu, 15 Nov 2001, janne wrote:
> To follow up on my previous post: I don't know how the current code
> works, but perhaps some logic should be added to check the
> percentage of total memory used for cache before swapping out.
No.
> Like, if memory is full and less than 10% of total memory is used for
> cache, then start swapping out; not if ~90% is already used for cache.. :)
Numbers like this don't work. You may have a very large and very hot
cache.. you may also have a very large and hot gob of anonymous pages.
-Mike
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-15 17:44 ` Mike Galbraith
@ 2001-11-16 0:14 ` janne
0 siblings, 0 replies; 30+ messages in thread
From: janne @ 2001-11-16 0:14 UTC (permalink / raw)
To: linux-kernel; +Cc: Mike Galbraith
> > Like, if memory is full and less than 10% of total memory is used for
> > cache, then start swapping out; not if ~90% is already used for cache.. :)
>
> Numbers like this don't work. You may have a very large and very hot
> cache.. you may also have a very large and hot gob of anonymous pages.
Yes, of course; sorry if I was not clear. It wasn't meant to be an
implementation suggestion, since I know there's a lot more to consider,
and even then those figures might not be feasible. I was just trying to
highlight the particular problem I was having..
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-14 16:34 ` Linus Torvalds
@ 2001-11-19 18:01 ` Sebastian Dröge
2001-11-19 18:18 ` Simon Kirby
1 sibling, 0 replies; 30+ messages in thread
From: Sebastian Dröge @ 2001-11-19 18:01 UTC (permalink / raw)
To: linux-kernel, Linus Torvalds
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hi,
I couldn't answer earlier because I had some problems with my ISP.
The heavy swapping problem while burning a CD is solved in pre6aa1,
but if you want I can do some statistics tomorrow.
Thanks and bye
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org
iD8DBQE7+UjjvIHrJes3kVIRApxEAKCwoOhYcptcm/1Q2teIY2YkVwNZGwCeNsDR
pSi5RbK5o5qeYUWzHHYgAj0=
=1Nvc
-----END PGP SIGNATURE-----
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
[not found] <200111191801.fAJI1l922388@neosilicon.transmeta.com>
@ 2001-11-19 18:07 ` Linus Torvalds
2001-11-19 18:31 ` Ken Brownfield
2001-11-19 19:44 ` Slo Mo Snail
0 siblings, 2 replies; 30+ messages in thread
From: Linus Torvalds @ 2001-11-19 18:07 UTC (permalink / raw)
To: Sebastian Dröge; +Cc: linux-kernel
On Mon, 19 Nov 2001, Sebastian Dröge wrote:
> Hi,
> I couldn't answer earlier because I had some problems with my ISP.
> The heavy swapping problem while burning a CD is solved in pre6aa1,
> but if you want I can do some statistics tomorrow.
Well, pre6aa1 performs really badly exactly because it by default doesn't
swap enough even on _normal_ loads because Andrea is playing with some
tuning (and see the bad results of that tuning in the VM testing by
rwhron@earthlink.net).
So the pre6aa1 numbers are kind of suspect - lack of swapping may not be
due to fixing the problem, but due to bad tuning.
Does plain pre6 solve it? Plain pre6 has a fix where a locked shared
memory area would previously cause unnecessary swapping, and maybe the CD
burning buffer is using shmlock..
Linus
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-14 16:34 ` Linus Torvalds
2001-11-19 18:01 ` Sebastian Dröge
@ 2001-11-19 18:18 ` Simon Kirby
1 sibling, 0 replies; 30+ messages in thread
From: Simon Kirby @ 2001-11-19 18:18 UTC (permalink / raw)
To: linux-kernel
On Wed, Nov 14, 2001 at 08:34:12AM -0800, Linus Torvalds wrote:
> That's normal and usually good. It's supposed to swap stuff out if it
> really isn't needed, and that improves performance. Cache _is_ more
> important than swap if the cache is active.
We have to remember that swap can be much slower to read back in than
rereading data from files, though. I guess this is because files tend to
be more often read sequentially. A freshly-booted box loads up things it
hasn't seen before much faster than a heavily-swapped-out box swaps the
things it needs back in...window managers and X desktop backgrounds, for
example, are awfully slow. I would prefer if it never swapped them out.
This is an annoying situation, though, because I would like some of my
unused daemons to be swapped out. mlocking random stuff would be worse,
though.
> HOWEVER, there's probably something in your system that triggers this too
> easily. Heavy NFS usage will do that, for example - as mentioned in
> another thread on linux-kernel, the VM doesn't really understand
> writebacks and asynchronous reads from filesystems that don't use buffers,
> and so sometimes the heuristics get confused simply because NFS activity
> can _look_ like page mapping to the VM.
I've been copying about 40 GB of stuff back and forth over NFS over
switched 100Mbit Ethernet lately, so I can say I'm definitely seeing
this. :) It also seems to happen when I "pull" over NFS rather than
"push" (e.g., I ssh to a remote machine and "cp" with the source being an
NFS mount of the local machine)...the 2.4.15pre1 local machine tends to
swap out while this happens as well.
Simon-
[ Stormix Technologies Inc. ][ NetNation Communications Inc. ]
[ sim@stormix.com ][ sim@netnation.com ]
[ Opinions expressed are not necessarily those of my employers. ]
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-19 19:30 ` [VM] 2.4.14/15-pre4 too "swap-happy"? Ken Brownfield
@ 2001-11-19 18:26 ` Marcelo Tosatti
0 siblings, 0 replies; 30+ messages in thread
From: Marcelo Tosatti @ 2001-11-19 18:26 UTC (permalink / raw)
To: Ken Brownfield; +Cc: linux-kernel
On Mon, 19 Nov 2001, Ken Brownfield wrote:
> Actually, I spoke too soon. We developed a quick stress test that
> causes the problem immediately:
>
> 11:18am up 3 days, 1:36, 3 users, load average: 8.72, 7.18, 3.96
> 91 processes: 85 sleeping, 6 running, 0 zombie, 0 stopped
> CPU states: 0.1% user, 93.4% system, 0.0% nice, 6.4% idle
> Mem: 3343688K av, 3340784K used, 2904K free, 0K shrd, 308K buff
> Swap: 1004052K av, 567404K used, 436648K free 2994288K cached
>
> PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
> 12102 oracle 13 0 16320 15M 14868 R 5584 67.2 0.4 18:58 oracle
> 12365 oracle 18 5 39352 38M 37796 R N 30M 66.7 1.1 4:14 oracle
> 12353 oracle 18 5 39956 38M 38408 R N 31M 66.5 1.1 9:14 oracle
> 12191 root 13 0 892 852 672 R 0 66.4 0.0 6:09 top
> 12366 oracle 9 0 892 892 672 S 0 60.0 0.0 3:20 top
> 9 root 9 0 0 0 0 SW 0 49.0 0.0 9:27 kswapd
> 11 root 9 0 0 0 0 SW 0 38.3 0.0 3:58 kupdated
> 105 root 9 0 0 0 0 SW 0 28.8 0.0 4:56 kjournald
> 470 root 9 0 844 828 472 S 0 28.1 0.0 1:46 gamdrvd
> 12351 oracle 13 5 39956 38M 38408 S N 31M 25.6 1.1 3:08 oracle
> 669 oracle 9 0 4780 4780 4384 S 492 24.4 0.1 1:42 oracle
> 1 root 14 0 476 424 408 R 0 21.6 0.0 1:19 init
> 2 root 14 0 0 0 0 RW 0 20.8 0.0 1:29 keventd
> 615 oracle 9 0 8984 8984 8460 S 4380 16.3 0.2 2:41 oracle
> 388 root 9 0 732 728 592 S 0 11.5 0.0 0:17 syslogd
>
> kswapd bounces up and down from 99%.
Ken,
Could you please check _where_ kswapd is spending its time?
(You can use kernel profiling and the "readprofile" tool to report
which kernel functions are consuming the most CPU cycles.)
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-19 18:07 ` [VM] 2.4.14/15-pre4 too "swap-happy"? Linus Torvalds
@ 2001-11-19 18:31 ` Ken Brownfield
2001-11-19 19:23 ` Linus Torvalds
2001-11-19 19:30 ` [VM] 2.4.14/15-pre4 too "swap-happy"? Ken Brownfield
2001-11-19 19:44 ` Slo Mo Snail
1 sibling, 2 replies; 30+ messages in thread
From: Ken Brownfield @ 2001-11-19 18:31 UTC (permalink / raw)
To: linux-kernel
Linus, so far 2.4.15-pre4 with your patch does not reproduce the kswapd
issue with Oracle, but I do need to perform more deterministic tests
before I can fully sign off on that.
BTW, didn't your patch go into -pre5? Or is there an additional mod in
-pre6 that we should try?
--
Ken.
brownfld@irridia.com
On Mon, Nov 19, 2001 at 10:07:58AM -0800, Linus Torvalds wrote:
|
| On Mon, 19 Nov 2001, Sebastian Dröge wrote:
| > Hi,
| > I couldn't answer earlier because I had some problems with my ISP.
| > The heavy swapping problem while burning a CD is solved in pre6aa1,
| > but if you want I can do some statistics tomorrow.
|
| Well, pre6aa1 performs really badly exactly because it by default doesn't
| swap enough even on _normal_ loads because Andrea is playing with some
| tuning (and see the bad results of that tuning in the VM testing by
| rwhron@earthlink.net).
|
| So the pre6aa1 numbers are kind of suspect - lack of swapping may not be
| due to fixing the problem, but due to bad tuning.
|
| Does plain pre6 solve it? Plain pre6 has a fix where a locked shared
| memory area would previously cause unnecessary swapping, and maybe the CD
| burning buffer is using shmlock..
|
| Linus
|
| -
| To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
| the body of a message to majordomo@vger.kernel.org
| More majordomo info at http://vger.kernel.org/majordomo-info.html
| Please read the FAQ at http://www.tux.org/lkml/
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-19 18:31 ` Ken Brownfield
@ 2001-11-19 19:23 ` Linus Torvalds
2001-11-19 23:39 ` Ken Brownfield
2001-11-19 19:30 ` [VM] 2.4.14/15-pre4 too "swap-happy"? Ken Brownfield
1 sibling, 1 reply; 30+ messages in thread
From: Linus Torvalds @ 2001-11-19 19:23 UTC (permalink / raw)
To: linux-kernel
In article <20011119123125.B1439@asooo.flowerfire.com>,
Ken Brownfield <brownfld@irridia.com> wrote:
>Linus, so far 2.4.15-pre4 with your patch does not reproduce the kswapd
>issue with Oracle, but I do need to perform more deterministic tests
>before I can fully sign off on that.
>
>BTW, didn't your patch go into -pre5? Or is there an additional mod in
>-pre6 that we should try?
You're right, it's probably in pre5 already..
Anyway, it would be interesting to see if the patch by Andrea (I think
he called it "zone-watermarks") that changes the zone allocators to take
other zones into account makes a difference. See separate thread with
the subject line "15pre6aa1 (fixes google VM problem)".
(I think the patch is overly complex as-is, but I think the _ideas_ in
it are fine).
Linus
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-19 18:31 ` Ken Brownfield
2001-11-19 19:23 ` Linus Torvalds
@ 2001-11-19 19:30 ` Ken Brownfield
2001-11-19 18:26 ` Marcelo Tosatti
1 sibling, 1 reply; 30+ messages in thread
From: Ken Brownfield @ 2001-11-19 19:30 UTC (permalink / raw)
To: Ken Brownfield; +Cc: linux-kernel
Actually, I spoke too soon. We developed a quick stress test that
causes the problem immediately:
11:18am up 3 days, 1:36, 3 users, load average: 8.72, 7.18, 3.96
91 processes: 85 sleeping, 6 running, 0 zombie, 0 stopped
CPU states: 0.1% user, 93.4% system, 0.0% nice, 6.4% idle
Mem: 3343688K av, 3340784K used, 2904K free, 0K shrd, 308K buff
Swap: 1004052K av, 567404K used, 436648K free 2994288K cached
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
12102 oracle 13 0 16320 15M 14868 R 5584 67.2 0.4 18:58 oracle
12365 oracle 18 5 39352 38M 37796 R N 30M 66.7 1.1 4:14 oracle
12353 oracle 18 5 39956 38M 38408 R N 31M 66.5 1.1 9:14 oracle
12191 root 13 0 892 852 672 R 0 66.4 0.0 6:09 top
12366 oracle 9 0 892 892 672 S 0 60.0 0.0 3:20 top
9 root 9 0 0 0 0 SW 0 49.0 0.0 9:27 kswapd
11 root 9 0 0 0 0 SW 0 38.3 0.0 3:58 kupdated
105 root 9 0 0 0 0 SW 0 28.8 0.0 4:56 kjournald
470 root 9 0 844 828 472 S 0 28.1 0.0 1:46 gamdrvd
12351 oracle 13 5 39956 38M 38408 S N 31M 25.6 1.1 3:08 oracle
669 oracle 9 0 4780 4780 4384 S 492 24.4 0.1 1:42 oracle
1 root 14 0 476 424 408 R 0 21.6 0.0 1:19 init
2 root 14 0 0 0 0 RW 0 20.8 0.0 1:29 keventd
615 oracle 9 0 8984 8984 8460 S 4380 16.3 0.2 2:41 oracle
388 root 9 0 732 728 592 S 0 11.5 0.0 0:17 syslogd
kswapd bounces up and down from 99%.
Keys for me are the full system time, the fact that the %CPUs seem to
add up to more than 6xCPUs (6-way Xeon), and that processes that aren't
really active show up as "active".
ASAP, I'll try -pre6 and then -aa1 to compare behavior.
The Oracle stress query looks like:
select /*+ parallel(mt,5) cache(mt) */ count(*) from mtable_units ;
Thanks much,
--
Ken.
On Mon, Nov 19, 2001 at 12:31:25PM -0600, Ken Brownfield wrote:
| Linus, so far 2.4.15-pre4 with your patch does not reproduce the kswapd
| issue with Oracle, but I do need to perform more deterministic tests
| before I can fully sign off on that.
|
| BTW, didn't your patch go into -pre5? Or is there an additional mod in
| -pre6 that we should try?
| --
| Ken.
| brownfld@irridia.com
|
| On Mon, Nov 19, 2001 at 10:07:58AM -0800, Linus Torvalds wrote:
| |
| | On Mon, 19 Nov 2001, Sebastian Dröge wrote:
| | > Hi,
| | > I couldn't answer earlier because I had some problems with my ISP.
| | > The heavy swapping problem while burning a CD is solved in pre6aa1,
| | > but if you want I can do some statistics tomorrow.
| |
| | Well, pre6aa1 performs really badly exactly because it by default doesn't
| | swap enough even on _normal_ loads because Andrea is playing with some
| | tuning (and see the bad results of that tuning in the VM testing by
| | rwhron@earthlink.net).
| |
| | So the pre6aa1 numbers are kind of suspect - lack of swapping may not be
| | due to fixing the problem, but due to bad tuning.
| |
| | Does plain pre6 solve it? Plain pre6 has a fix where a locked shared
| | memory area would previously cause unnecessary swapping, and maybe the CD
| | burning buffer is using shmlock..
| |
| | Linus
| |
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-19 18:07 ` [VM] 2.4.14/15-pre4 too "swap-happy"? Linus Torvalds
2001-11-19 18:31 ` Ken Brownfield
@ 2001-11-19 19:44 ` Slo Mo Snail
1 sibling, 0 replies; 30+ messages in thread
From: Slo Mo Snail @ 2001-11-19 19:44 UTC (permalink / raw)
To: linux-kernel, Linus Torvalds
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Am Montag, 19. November 2001 19:07 schrieb Linus Torvalds:
> On Mon, 19 Nov 2001, Sebastian Dröge wrote:
> > Hi,
> > I couldn't answer earlier because I had some problems with my ISP.
> > The heavy swapping problem while burning a CD is solved in pre6aa1,
> > but if you want I can do some statistics tomorrow.
>
> Well, pre6aa1 performs really badly exactly because it by default doesn't
> swap enough even on _normal_ loads because Andrea is playing with some
> tuning (and see the bad results of that tuning in the VM testing by
> rwhron@earthlink.net).
>
> So the pre6aa1 numbers are kind of suspect - lack of swapping may not be
> due to fixing the problem, but due to bad tuning.
>
> Does plain pre6 solve it? Plain pre6 has a fix where a locked shared
> memory area would previously cause unnecessary swapping, and maybe the CD
> burning buffer is using shmlock..
Hi,
Yes, plain pre6 seems to solve it, too. I can't be sure right now because I
have recorded only 3 CDs while running pre6.
pre6 swaps more than aa1, but so far I have had no buffer underruns, and much
of the swap appears in SwapCached.
The interactive performance seems to be much better in pre6 than in aa1, so
I'll stay with pre6 ;)
Bye
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org
iD8DBQE7+WEovIHrJes3kVIRAg+nAJ4issDSimDEal2I08CQHEoXBpGFLQCeNQ1x
AathQZ75U5nhnEZwTkR4WnI=
=lb0O
-----END PGP SIGNATURE-----
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-19 19:23 ` Linus Torvalds
@ 2001-11-19 23:39 ` Ken Brownfield
2001-11-19 23:52 ` Linus Torvalds
0 siblings, 1 reply; 30+ messages in thread
From: Ken Brownfield @ 2001-11-19 23:39 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-kernel
I went straight to the aa patch, and it looks like it either fixes the
problem or (because of the side-effects Linus mentioned) otherwise
prevents the issue:
2:30pm up 11 min, 4 users, load average: 2.23, 2.18, 1.17
106 processes: 104 sleeping, 2 running, 0 zombie, 0 stopped
CPU states: 14.7% user, 10.3% system, 0.0% nice, 74.9% idle
Mem: 3342304K av, 3013888K used, 328416K free, 0K shrd, 1224K buff
Swap: 1004052K av, 276824K used, 727228K free 2862112K cached
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
722 oracle 12 0 13364 12M 11856 S 9.9M 29.5 0.3 2:24 oracle
731 oracle 17 0 13488 12M 11980 D 10M 28.7 0.3 2:27 oracle
728 oracle 12 0 13048 12M 11540 R 9816 20.8 0.3 2:22 oracle
718 oracle 12 0 154M 153M 152M S 150M 17.9 4.7 2:22 oracle
725 oracle 14 0 13472 12M 11964 S 10M 17.9 0.3 2:20 oracle
734 oracle 12 0 13936 13M 12432 S 10M 15.3 0.4 2:27 oracle
9 root 9 0 0 0 0 SW 0 4.3 0.0 0:27 kswapd
The machine went into swap immediately when the page cache stopped
growing and hovered at 100-400MB. Also, in my experience the page cache
will grow until there's only 5ishMB of free RAM, but with the aa patch
it looks like it stops at 320MB or maybe 10% of RAM. Was that the aa
patch, or part of -pre6?
It would be nice if that number were modifiable via /proc (writable
freepages again? 10% seems a tad high for many boxes), but I think it's
better to have a bit more purely free RAM available than 5MB.
kswapd isn't going nuts, but it still seems to be eating quite a bit of
CPU given plenty of RAM. And it seems to go pretty hard into swap -- I
would imagine that it's disadvantageous to do significant swapping
(based on age only?) in the presence of a massive page cache. I suspect
the performance hit of a 2GB vs. 3GB page cache would be less
egregious than the time and I/O kswapd is causing without memory
pressure.
The Oracle SGA is set to ~522MB, with nothing else running except a
couple of sshds, getty, etc. Now that I'm looking, 2.8GB page cache
plus 328MB free adds up to about 3.1GB of RAM -- where does the 512MB
shared memory segment fit? Is it being swapped out in deference to page
cache?
Just my USD$0.02. I'll try vanilla -pre6 with profiling soon and post
results. Thanks for the tip Marcelo.
Thanks,
--
Ken.
brownfld@irridia.com
On Mon, Nov 19, 2001 at 07:23:27PM +0000, Linus Torvalds wrote:
| In article <20011119123125.B1439@asooo.flowerfire.com>,
| Ken Brownfield <brownfld@irridia.com> wrote:
| >Linus, so far 2.4.15-pre4 with your patch does not reproduce the kswapd
| >issue with Oracle, but I do need to perform more deterministic tests
| >before I can fully sign off on that.
| >
| >BTW, didn't your patch go into -pre5? Or is there an additional mod in
| >-pre6 that we should try?
|
| You're right, it's probably in pre5 already..
|
| Anyway, it would be interesting to see if the patch by Andrea (I think
| he called it "zone-watermarks") that changes the zone allocators to take
| other zones into account makes a difference. See separate thread with
| the subject line "15pre6aa1 (fixes google VM problem)".
|
| (I think the patch is overly complex as-is, but I think the _ideas_ in
| it are fine).
|
| Linus
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-19 23:39 ` Ken Brownfield
@ 2001-11-19 23:52 ` Linus Torvalds
2001-11-20 0:18 ` M. Edward (Ed) Borasky
` (2 more replies)
0 siblings, 3 replies; 30+ messages in thread
From: Linus Torvalds @ 2001-11-19 23:52 UTC (permalink / raw)
To: Ken Brownfield; +Cc: linux-kernel, Andrea Arcangeli
[-- Attachment #1: Type: TEXT/PLAIN, Size: 1887 bytes --]
On Mon, 19 Nov 2001, Ken Brownfield wrote:
>
> I went straight to the aa patch, and it looks like it either fixes the
> problem or (because of the side-effects Linus mentioned) otherwise
> prevents the issue:
So is this pre6aa1, or pre6 + just the watermark patch?
> The machine went into swap immediately when the page cache stopped
> growing and hovered at 100-400MB. Also, in my experience the page cache
> will grow until there's only 5ishMB of free RAM, but with the aa patch
> it looks like it stops at 320MB or maybe 10% of RAM. Was that the aa
> patch, or part of -pre6?
That was the watermarking. The way Andrea did it, the page cache will
basically refuse to touch as much of the "normal" page zone, because it
would prefer to allocate more from highmem..
I think it's excessive to have 320MB free memory, though, that's just
an insane waste. I suspect that the real number should be somewhere
between the old behaviour and the new one. You can tweak the behaviour of
andrea's kernel by changing the "reserved" page numbers, but I'd like to
hear whether my simpler approach works too..
> The Oracle SGA is set to ~522MB, with nothing else running except a
> couple of sshds, getty, etc. Now that I'm looking, 2.8GB page cache
> plus 328MB free adds up to about 3.1GB of RAM -- where does the 512MB
> shared memory segment fit? Is it being swapped out in deference to page
> cache?
Shared memory actually uses the page cache too, so it will be accounted
for in the 2.8GB number.
Anyway, can you try plain vanilla pre6, with the appended patch? This is
my suggested simplified version of what Andrea tried to do, and it should
try to keep only a few extra megs of memory free in the low memory
regions, not 300+ MB.
(and the profiling would be interesting regardless, but I think Andrea did
find the real problem, his fix just seems a bit of an overkill ;)
Linus
[-- Attachment #2: Type: TEXT/PLAIN, Size: 1839 bytes --]
diff -u --recursive --new-file pre6/linux/mm/page_alloc.c linux/mm/page_alloc.c
--- pre6/linux/mm/page_alloc.c Sat Nov 17 19:07:43 2001
+++ linux/mm/page_alloc.c Mon Nov 19 15:13:36 2001
@@ -299,29 +299,26 @@
return page;
}
-static inline unsigned long zone_free_pages(zone_t * zone, unsigned int order)
-{
- long free = zone->free_pages - (1UL << order);
- return free >= 0 ? free : 0;
-}
-
/*
* This is the 'heart' of the zoned buddy allocator:
*/
struct page * __alloc_pages(unsigned int gfp_mask, unsigned int order, zonelist_t *zonelist)
{
+ unsigned long min;
zone_t **zone, * classzone;
struct page * page;
int freed;
zone = zonelist->zones;
classzone = *zone;
+ min = 1UL << order;
for (;;) {
zone_t *z = *(zone++);
if (!z)
break;
- if (zone_free_pages(z, order) > z->pages_low) {
+ min += z->pages_low;
+ if (z->free_pages > min) {
page = rmqueue(z, order);
if (page)
return page;
@@ -334,16 +331,18 @@
wake_up_interruptible(&kswapd_wait);
zone = zonelist->zones;
+ min = 1UL << order;
for (;;) {
- unsigned long min;
+ unsigned long local_min;
zone_t *z = *(zone++);
if (!z)
break;
- min = z->pages_min;
+ local_min = z->pages_min;
if (!(gfp_mask & __GFP_WAIT))
- min >>= 2;
- if (zone_free_pages(z, order) > min) {
+ local_min >>= 2;
+ min += local_min;
+ if (z->free_pages > min) {
page = rmqueue(z, order);
if (page)
return page;
@@ -376,12 +375,14 @@
return page;
zone = zonelist->zones;
+ min = 1UL << order;
for (;;) {
zone_t *z = *(zone++);
if (!z)
break;
- if (zone_free_pages(z, order) > z->pages_min) {
+ min += z->pages_min;
+ if (z->free_pages > min) {
page = rmqueue(z, order);
if (page)
return page;
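To see what the cumulative `min` in this patch does, here is a user-space toy model (a sketch only; `struct zone` and `pick_zone` are simplified stand-ins for the kernel's `zone_t` and the first loop of `__alloc_pages`): as the allocator walks the zonelist, each zone's `pages_low` is added to `min`, so a fallback zone is only used if it holds enough free pages to cover its own watermark plus those of the preferred zones before it.

```c
#include <stddef.h>

/* Simplified stand-in for the kernel's zone_t. */
struct zone {
    unsigned long free_pages;
    unsigned long pages_low;
};

/* Mirrors the first allocation loop in the patch above: min starts at
 * the request size (1 << order) and accumulates each zone's pages_low
 * as we fall back through the zonelist.  Returns the index of the
 * first zone whose free_pages clears the cumulative watermark, or -1
 * if none does (at which point the kernel would wake kswapd). */
int pick_zone(const struct zone *zones, int nzones, unsigned int order)
{
    unsigned long min = 1UL << order;
    int i;

    for (i = 0; i < nzones; i++) {
        min += zones[i].pages_low;
        if (zones[i].free_pages > min)
            return i;
    }
    return -1;
}
```

For example, with a preferred zone at {free=100, pages_low=200} and a fallback at {free=1000, pages_low=50}, the preferred zone fails its own watermark, and the fallback is accepted only because its 1000 free pages exceed the accumulated 200+50 threshold. This is how the patch keeps a few extra megabytes free in the low zones instead of Andrea's 300+ MB.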
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-19 23:52 ` Linus Torvalds
@ 2001-11-20 0:18 ` M. Edward (Ed) Borasky
2001-11-20 0:25 ` Ken Brownfield
2001-11-20 3:09 ` Ken Brownfield
2 siblings, 0 replies; 30+ messages in thread
From: M. Edward (Ed) Borasky @ 2001-11-20 0:18 UTC (permalink / raw)
To: linux-kernel
On a related note, the files "/usr/src/linux/Documentation/filesystems/proc.txt"
and "sysctl/vm.txt" refer to some variables I need to be able to set on a
system running 2.4.12. In particular, I need to be able to get to the values
in "/proc/sys/vm/freepages", "/proc/sys/vm/buffermem" and
"/proc/sys/vm/pagecache". However, despite their existence in the documentation
files, these files don't exist on a 2.4.12 system. How can I read and set these
values on a 2.4.12 system?
--
znmeb@aracnet.com (M. Edward Borasky) http://www.aracnet.com/~znmeb
Relax! Run Your Own Brain with Neuro-Semantics!
http://www.meta-trading-coach.com
"Outside of a dog, a book is a man's best friend. Inside a dog, it's
too dark to read." -- Marx
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-19 23:52 ` Linus Torvalds
2001-11-20 0:18 ` M. Edward (Ed) Borasky
@ 2001-11-20 0:25 ` Ken Brownfield
2001-11-20 0:31 ` Linus Torvalds
2001-11-20 3:09 ` Ken Brownfield
2 siblings, 1 reply; 30+ messages in thread
From: Ken Brownfield @ 2001-11-20 0:25 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-kernel, Andrea Arcangeli
On Mon, Nov 19, 2001 at 03:52:44PM -0800, Linus Torvalds wrote:
|
| On Mon, 19 Nov 2001, Ken Brownfield wrote:
| >
| > I went straight to the aa patch, and it looks like it either fixes the
| > problem or (because of the side-effects Linus mentioned) otherwise
| > prevents the issue:
|
| So is this pre6aa1, or pre6 + just the watermark patch?
I'm currently using -pre6 with his separately-posted zone-watermark-1
patch. Sorry, I should have been clearer.
| > The machine went into swap immediately when the page cache stopped
| > growing and hovered at 100-400MB. Also, in my experience the page cache
| > will grow until there's only 5ishMB of free RAM, but with the aa patch
| > it looks like it stops at 320MB or maybe 10% of RAM. Was that the aa
| > patch, or part of -pre6?
|
| That was the watermarking. The way Andrea did it, the page cache will
| basically refuse to touch as much of the "normal" page zone, because it
| would prefer to allocate more from highmem..
|
| I think it's excessive to have 320MB free memory, though, that's just
| an insane waste. I suspect that the real number should be somewhere
| between the old behaviour and the new one. You can tweak the behaviour of
| andrea's kernel by changing the "reserved" page numbers, but I'd like to
| hear whether my simpler approach works too..
Yeah, maybe a tiered default would be best, IMHO. 5MB on a 3GB box
does, on the other hand, seem anemic.
| > The Oracle SGA is set to ~522MB, with nothing else running except a
| > couple of sshds, getty, etc. Now that I'm looking, 2.8GB page cache
| > plus 328MB free adds up to about 3.1GB of RAM -- where does the 512MB
| > shared memory segment fit? Is it being swapped out in deference to page
| > cache?
|
| Shared memory actually uses the page cache too, so it will be accounted
| for in the 2.8GB number.
My bad, should have realized.
| Anyway, can you try plain vanilla pre6, with the appended patch? This is
| my suggested simplified version of what Andrea tried to do, and it should
| try to keep only a few extra megs of memory free in the low memory
| regions, not 300+ MB.
|
| (and the profiling would be interesting regardless, but I think Andrea did
| find the real problem, his fix just seems a bit of an overkill ;)
|
| Linus
I'll try this patch ASAP.
Thanks a LOT to all involved,
--
Ken.
brownfld@irridia.com
| diff -u --recursive --new-file pre6/linux/mm/page_alloc.c linux/mm/page_alloc.c
| --- pre6/linux/mm/page_alloc.c Sat Nov 17 19:07:43 2001
| +++ linux/mm/page_alloc.c Mon Nov 19 15:13:36 2001
| @@ -299,29 +299,26 @@
| return page;
| }
|
| -static inline unsigned long zone_free_pages(zone_t * zone, unsigned int order)
| -{
| - long free = zone->free_pages - (1UL << order);
| - return free >= 0 ? free : 0;
| -}
| -
| /*
| * This is the 'heart' of the zoned buddy allocator:
| */
| struct page * __alloc_pages(unsigned int gfp_mask, unsigned int order, zonelist_t *zonelist)
| {
| + unsigned long min;
| zone_t **zone, * classzone;
| struct page * page;
| int freed;
|
| zone = zonelist->zones;
| classzone = *zone;
| + min = 1UL << order;
| for (;;) {
| zone_t *z = *(zone++);
| if (!z)
| break;
|
| - if (zone_free_pages(z, order) > z->pages_low) {
| + min += z->pages_low;
| + if (z->free_pages > min) {
| page = rmqueue(z, order);
| if (page)
| return page;
| @@ -334,16 +331,18 @@
| wake_up_interruptible(&kswapd_wait);
|
| zone = zonelist->zones;
| + min = 1UL << order;
| for (;;) {
| - unsigned long min;
| + unsigned long local_min;
| zone_t *z = *(zone++);
| if (!z)
| break;
|
| - min = z->pages_min;
| + local_min = z->pages_min;
| if (!(gfp_mask & __GFP_WAIT))
| - min >>= 2;
| - if (zone_free_pages(z, order) > min) {
| + local_min >>= 2;
| + min += local_min;
| + if (z->free_pages > min) {
| page = rmqueue(z, order);
| if (page)
| return page;
| @@ -376,12 +375,14 @@
| return page;
|
| zone = zonelist->zones;
| + min = 1UL << order;
| for (;;) {
| zone_t *z = *(zone++);
| if (!z)
| break;
|
| - if (zone_free_pages(z, order) > z->pages_min) {
| + min += z->pages_min;
| + if (z->free_pages > min) {
| page = rmqueue(z, order);
| if (page)
| return page;
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-20 0:25 ` Ken Brownfield
@ 2001-11-20 0:31 ` Linus Torvalds
0 siblings, 0 replies; 30+ messages in thread
From: Linus Torvalds @ 2001-11-20 0:31 UTC (permalink / raw)
To: Ken Brownfield; +Cc: linux-kernel, Andrea Arcangeli
On Mon, 19 Nov 2001, Ken Brownfield wrote:
> |
> | So is this pre6aa1, or pre6 + just the watermark patch?
>
> I'm currently using -pre6 with his separately-posted zone-watermark-1
> patch. Sorry, I should have been clearer.
Good. That removes the other variables from the equation, ie it's not an
effect of some of the other tweaking in the -aa patches.
> Yeah, maybe a tiered default would be best, IMHO. 5MB on a 3GB box
> does, on the other hand, seem anemic.
Yeah, the 5MB _is_ anemic. It comes from the fact that we decide to never
bother having more than zone_balance_max[] pages free, even if we have
tons of memory. And zone_balance_max[] is fairly small, it limits us to
255 free pages per zone (for pages_min - with "pages_low" being twice that).
So you get 3 zones, with 255*2 pages free max each, except the DMA zone
has much less just because it's smaller. Thus 5MB.
There's no real reason for having zone_balance_max[] at all - without it
we'd just always try to keep about 1/128th of memory free, which would be
about 24MB on a 3GB box. Which is probably not a bad idea.
With my "simplified-Andrea" patch, you should see slightly more than 5MB
free, but not a lot more. A HIGHMEM allocation now wants to leave an
"extra" 510 pages in NORMAL, and even more in the DMA zone, so you should
see something like maybe 12-15 MB free instead of 300MB.
(Wild hand-waving number, I'm too lazy to actually do the math, and I
haven't even tested that the simple patch works at all - I think I forgot
to mention that small detail ;)
Linus
^ permalink raw reply [flat|nested] 30+ messages in thread
* RE: [VM] 2.4.14/15-pre4 too "swap-happy"?
@ 2001-11-20 0:48 Yan, Noah
0 siblings, 0 replies; 30+ messages in thread
From: Yan, Noah @ 2001-11-20 0:48 UTC (permalink / raw)
To: 'Linus Torvalds', Ken Brownfield; +Cc: linux-kernel, Andrea Arcangeli
Hi, all
Just wanted to know: is there any research/development work going on in the Linux kernel for IA-64, such as Intel Itanium?
Best Regards,
Noah Yan
SC/Automation Group
Shanghai Site Manufacturing Computing/IT
Intel Technology (China) Ltd.
IDD: (86 21) 50481818 - 31579
Fax: (86 21) 50481212
Email: noah.yan@intel.com
-----Original Message-----
From: Linus Torvalds [mailto:torvalds@transmeta.com]
Sent: 2001-11-20 8:31
To: Ken Brownfield
Cc: linux-kernel@vger.kernel.org; Andrea Arcangeli
Subject: Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
On Mon, 19 Nov 2001, Ken Brownfield wrote:
> |
> | So is this pre6aa1, or pre6 + just the watermark patch?
>
> I'm currently using -pre6 with his separately-posted zone-watermark-1
> patch. Sorry, I should have been clearer.
Good. That removes the other variables from the equation, ie it's not an
effect of some of the other tweaking in the -aa patches.
> Yeah, maybe a tiered default would be best, IMHO. 5MB on a 3GB box
> does, on the other hand, seem anemic.
Yeah, the 5MB _is_ anemic. It comes from the fact that we decide to never
bother having more than zone_balance_max[] pages free, even if we have
tons of memory. And zone_balance_max[] is fairly small, it limits us to
255 free pages per zone (for pages_min - with "pages_low" being twice that).
So you get 3 zones, with 255*2 pages free max each, except the DMA zone
has much less just because it's smaller. Thus 5MB.
There's no real reason for having zone_balance_max[] at all - without it
we'd just always try to keep about 1/128th of memory free, which would be
about 24MB on a 3GB box. Which is probably not a bad idea.
With my "simplified-Andrea" patch, you should see slightly more than 5MB
free, but not a lot more. A HIGHMEM allocation now wants to leave an
"extra" 510 pages in NORMAL, and even more in the DMA zone, so you should
see something like maybe 12-15 MB free instead of 300MB.
(Wild hand-waving number, I'm too lazy to actually do the math, and I
haven't even tested that the simple patch works at all - I think I forgot
to mention that small detail ;)
Linus
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-19 23:52 ` Linus Torvalds
2001-11-20 0:18 ` M. Edward (Ed) Borasky
2001-11-20 0:25 ` Ken Brownfield
@ 2001-11-20 3:09 ` Ken Brownfield
2001-11-20 3:30 ` Linus Torvalds
2001-11-20 3:32 ` Andrea Arcangeli
2 siblings, 2 replies; 30+ messages in thread
From: Ken Brownfield @ 2001-11-20 3:09 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-kernel, Andrea Arcangeli
Well, I think you'll be pleased to hear that your untested patch
compiled, booted, _and_ fixed the problem. :)
The minimum free RAM was about 9.8-11MB (matching your guesstimate) and
kswapd seemed to behave the same as the watermark patch. The results of
top were basically the same, so I'm omitting it.
However, I do have some profiling numbers, thanks to Marcelo. Attached
are numbers from "readprofile | sort -nr +2 | head -20". I think the
pre4 numbers point to shrink_cache, prune_icache, and statm_pgd_range.
The other two might have significance for wizards, but statistically
don't stand out to me, except maybe statm_pgd_range.
I reset the counters just before starting Oracle and the stress test. I
think a -pre7 with a blessed patch would be good, since my testing was
very narrow.
I'll test new kernels as I hear new info.
Thanks much!
--
Ken.
brownfld@irridia.com
2.4.15-pre4 with your original patch:
(shorter time period since the machine went to hell fast)
(matches vanilla behaviour)
164536 default_idle 3164.1538
101562 shrink_cache 113.8587
3683 prune_icache 13.5404
3034 file_read_actor 12.2339
914 DAC960_BA_InterruptHandler 5.5732
1128 statm_pgd_range 2.9072
40 page_cache_release 0.8333
31 add_page_to_hash_queue 0.5167
89 page_cache_read 0.4363
25 remove_inode_page 0.4167
26 unlock_page 0.3095
509 __make_request 0.3008
66 smp_call_function 0.2946
21 set_bh_page 0.2917
9 __brelse 0.2812
90 try_to_free_buffers 0.2778
13 mark_page_accessed 0.2708
8 __free_pages 0.2500
43 get_hash_table 0.2443
42 activate_page 0.2234
2.4.15-pre6 with watermark patch:
1617446 default_idle 31104.7308
27599 DAC960_BA_InterruptHandler 168.2866
38918 file_read_actor 156.9274
528 page_cache_release 11.0000
554 add_page_to_hash_queue 9.2333
15487 __make_request 9.1531
3453 statm_pgd_range 8.8995
514 remove_inode_page 8.5667
1453 blk_init_free_list 7.2650
377 set_bh_page 5.2361
898 page_cache_read 4.4020
590 add_to_page_cache_unique 4.3382
136 __brelse 4.2500
1120 kmem_cache_alloc 3.8356
628 kunmap_high 3.7381
1189 try_to_free_buffers 3.6698
625 get_hash_table 3.5511
439 lru_cache_add 3.4297
1715 rmqueue 3.0194
105 remove_wait_queue 2.9167
2.4.15-pre6 with Linus patch:
1249875 default_idle 24036.0577
65324 file_read_actor 263.4032
36979 DAC960_BA_InterruptHandler 225.4817
9809 statm_pgd_range 25.2809
1039 page_cache_release 21.6458
994 add_page_to_hash_queue 16.5667
922 remove_inode_page 15.3667
2409 blk_init_free_list 12.0450
20159 __make_request 11.9143
1198 lru_cache_add 9.3594
1628 page_cache_read 7.9804
987 add_to_page_cache_unique 7.2574
2202 try_to_free_buffers 6.7963
1038 get_unused_buffer_head 6.6538
484 unlock_page 5.7619
3182 rmqueue 5.6021
874 kunmap_high 5.2024
164 __brelse 5.1250
900 get_hash_table 5.1136
357 set_bh_page 4.9583
On Mon, Nov 19, 2001 at 03:52:44PM -0800, Linus Torvalds wrote:
|
| On Mon, 19 Nov 2001, Ken Brownfield wrote:
| >
| > I went straight to the aa patch, and it looks like it either fixes the
| > problem or (because of the side-effects Linus mentioned) otherwise
| > prevents the issue:
|
| So is this pre6aa1, or pre6 + just the watermark patch?
|
| > The machine went into swap immediately when the page cache stopped
| > growing and hovered at 100-400MB. Also, in my experience the page cache
| > will grow until there's only 5ishMB of free RAM, but with the aa patch
| > it looks like it stops at 320MB or maybe 10% of RAM. Was that the aa
| > patch, or part of -pre6?
|
| That was the watermarking. The way Andrea did it, the page cache will
| basically refuse to touch as much of the "normal" page zone, because it
| would prefer to allocate more from highmem..
|
| I think it's excessive to have 320MB free memory, though, that's just
| an insane waste. I suspect that the real number should be somewhere
| between the old behaviour and the new one. You can tweak the behaviour of
| andrea's kernel by changing the "reserved" page numbers, but I'd like to
| hear whether my simpler approach works too..
|
| > The Oracle SGA is set to ~522MB, with nothing else running except a
| > couple of sshds, getty, etc. Now that I'm looking, 2.8GB page cache
| > plus 328MB free adds up to about 3.1GB of RAM -- where does the 512MB
| > shared memory segment fit? Is it being swapped out in deference to page
| > cache?
|
| Shared memory actually uses the page cache too, so it will be accounted
| for in the 2.8GB number.
|
| Anyway, can you try plain vanilla pre6, with the appended patch? This is
| my suggested simplified version of what Andrea tried to do, and it should
| try to keep only a few extra megs of memory free in the low memory
| regions, not 300+ MB.
|
| (and the profiling would be interesting regardless, but I think Andrea did
| find the real problem, his fix just seems a bit of an overkill ;)
|
| Linus
| diff -u --recursive --new-file pre6/linux/mm/page_alloc.c linux/mm/page_alloc.c
| --- pre6/linux/mm/page_alloc.c Sat Nov 17 19:07:43 2001
| +++ linux/mm/page_alloc.c Mon Nov 19 15:13:36 2001
| @@ -299,29 +299,26 @@
| return page;
| }
|
| -static inline unsigned long zone_free_pages(zone_t * zone, unsigned int order)
| -{
| - long free = zone->free_pages - (1UL << order);
| - return free >= 0 ? free : 0;
| -}
| -
| /*
| * This is the 'heart' of the zoned buddy allocator:
| */
| struct page * __alloc_pages(unsigned int gfp_mask, unsigned int order, zonelist_t *zonelist)
| {
| + unsigned long min;
| zone_t **zone, * classzone;
| struct page * page;
| int freed;
|
| zone = zonelist->zones;
| classzone = *zone;
| + min = 1UL << order;
| for (;;) {
| zone_t *z = *(zone++);
| if (!z)
| break;
|
| - if (zone_free_pages(z, order) > z->pages_low) {
| + min += z->pages_low;
| + if (z->free_pages > min) {
| page = rmqueue(z, order);
| if (page)
| return page;
| @@ -334,16 +331,18 @@
| wake_up_interruptible(&kswapd_wait);
|
| zone = zonelist->zones;
| + min = 1UL << order;
| for (;;) {
| - unsigned long min;
| + unsigned long local_min;
| zone_t *z = *(zone++);
| if (!z)
| break;
|
| - min = z->pages_min;
| + local_min = z->pages_min;
| if (!(gfp_mask & __GFP_WAIT))
| - min >>= 2;
| - if (zone_free_pages(z, order) > min) {
| + local_min >>= 2;
| + min += local_min;
| + if (z->free_pages > min) {
| page = rmqueue(z, order);
| if (page)
| return page;
| @@ -376,12 +375,14 @@
| return page;
|
| zone = zonelist->zones;
| + min = 1UL << order;
| for (;;) {
| zone_t *z = *(zone++);
| if (!z)
| break;
|
| - if (zone_free_pages(z, order) > z->pages_min) {
| + min += z->pages_min;
| + if (z->free_pages > min) {
| page = rmqueue(z, order);
| if (page)
| return page;
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-20 3:09 ` Ken Brownfield
@ 2001-11-20 3:30 ` Linus Torvalds
2001-11-20 3:32 ` Andrea Arcangeli
1 sibling, 0 replies; 30+ messages in thread
From: Linus Torvalds @ 2001-11-20 3:30 UTC (permalink / raw)
To: Ken Brownfield; +Cc: linux-kernel, Andrea Arcangeli
On Mon, 19 Nov 2001, Ken Brownfield wrote:
>
> Well, I think you'll be pleased to hear that your untested patch
> compiled, booted, _and_ fixed the problem. :)
Good. The patch itself was fairly simple, and the problem was
straightforward, the real credit for the fix goes to Andrea for thinking
about what was wrong with the old code..
> The minimum free RAM was about 9.8-11MB (matching your guestimate) and
> kswapd seemed to behave the same as the watermark patch. The results of
> top were basically the same, so I'm omitting it.
All right. I think 10MB free for a 3GB machine is good - and we can easily
tweak the zone_balance_max[] numbers if somebody comes to the conclusion
that it's better to have more free. It's about .3% of RAM, so it's small
enough that it's certainly not too much, and yet at the same time it's
probably enough to give reasonable behaviour in a temporary memory crunch.
> However, I do have some profiling numbers, thanks to Marcelo. Attached
> are numbers from "readprofile | sort -nr +2 | head -20". I think the
> pre4 numbers point to shrink_cache, prune_icache, and statm_pgd_range.
> The other two might have significance for wizards, but statistically
> don't stand out to me, except maybe statm_pgd_range.
I'd say that this clearly shows that yes, 2.4.14 did the wrong thing, and
wasted time in shrink_cache() without making any real progress. The two
other profiles look reasonable to me - nothing stands out that shouldn't.
(yeah, we spend _much_ too much time doing VM statistics with "top", and
the only way to get rid of that would be to add a per-vma "rss" field.
Which might not be a bad idea, but it's not a high priority for me).
> I reset the counters just before starting Oracle and the stress test. I
> think a -pre7 with a blessed patch would be good, since my testing was
> very narrow.
Sure, I'll do a pre7. This closes my last behaviour issue with the VM,
although I'm sure we'll end up spending tons of time chasing bugs still
(both VM and not).
Linus
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-20 3:09 ` Ken Brownfield
2001-11-20 3:30 ` Linus Torvalds
@ 2001-11-20 3:32 ` Andrea Arcangeli
2001-11-20 5:54 ` Ken Brownfield
1 sibling, 1 reply; 30+ messages in thread
From: Andrea Arcangeli @ 2001-11-20 3:32 UTC (permalink / raw)
To: Ken Brownfield; +Cc: Linus Torvalds, linux-kernel
On Mon, Nov 19, 2001 at 09:09:41PM -0600, Ken Brownfield wrote:
> Well, I think you'll be pleased to hear that your untested patch
> compiled, booted, _and_ fixed the problem. :)
Can you try to run an updatedb constantly in background?
Andrea
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-20 3:32 ` Andrea Arcangeli
@ 2001-11-20 5:54 ` Ken Brownfield
2001-11-20 6:50 ` Linus Torvalds
0 siblings, 1 reply; 30+ messages in thread
From: Ken Brownfield @ 2001-11-20 5:54 UTC (permalink / raw)
To: Andrea Arcangeli; +Cc: Linus Torvalds, linux-kernel
kswapd goes up to 5-10% CPU (vs 3-6) but it finishes without issue or
apparent interactivity problems. I'm keeping it in while( 1 ), but it's
been predictable so far.
3-10 is a lot better than 99, but is kswapd really going to eat that
much CPU in an essentially allocation-less state?
But certainly you found the right thing.
Thx all!
--
Ken.
brownfld@irridia.com
On Tue, Nov 20, 2001 at 04:32:23AM +0100, Andrea Arcangeli wrote:
| On Mon, Nov 19, 2001 at 09:09:41PM -0600, Ken Brownfield wrote:
| > Well, I think you'll be pleased to hear that your untested patch
| > compiled, booted, _and_ fixed the problem. :)
|
| Can you try to run an updatedb constantly in background?
|
| Andrea
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [VM] 2.4.14/15-pre4 too "swap-happy"?
2001-11-20 5:54 ` Ken Brownfield
@ 2001-11-20 6:50 ` Linus Torvalds
2001-12-01 13:15 ` Slight Return (was Re: [VM] 2.4.14/15-pre4 too "swap-happy"?) Ken Brownfield
0 siblings, 1 reply; 30+ messages in thread
From: Linus Torvalds @ 2001-11-20 6:50 UTC (permalink / raw)
To: linux-kernel
In article <20011119235422.F10597@asooo.flowerfire.com>,
Ken Brownfield <brownfld@irridia.com> wrote:
>kswapd goes up to 5-10% CPU (vs 3-6) but it finishes without issue or
>apparent interactivity problems. I'm keeping it in while( 1 ), but it's
>been predictable so far.
>
>3-10 is a lot better than 99, but is kswapd really going to eat that
>much CPU in an essentially allocation-less state?
Well, it's obviously not allocation-less: updatedb will really hit on
the dcache and icache (which are both in the NORMAL zone only, which is
why Andrea asked for it), and obviously your Oracle load itself seems to
be happily paging stuff around, which causes a lot of allocations for
page-ins.
It only _looks_ static, because once you find the proper "balance", the
VM numbers themselves shouldn't change under a constant load.
We could make kswapd use less CPU time, of course, simply by making the
actual working processes do more of the work to free memory. The total
work ends up being the same, though, and the advantage of kswapd is that
it tends to make the freeing slightly more asynchronous, which helps
throughput.
The _disadvantage_ of kswapd is that if it goes crazy and uses up all
CPU time, you get bad results ;)
But it doesn't sound crazy in your load. I'd be happier if the VM took
less CPU, of course, but for now we seem to be doing ok.
Linus
^ permalink raw reply [flat|nested] 30+ messages in thread
* Slight Return (was Re: [VM] 2.4.14/15-pre4 too "swap-happy"?)
2001-11-20 6:50 ` Linus Torvalds
@ 2001-12-01 13:15 ` Ken Brownfield
2001-12-08 13:12 ` Ken Brownfield
0 siblings, 1 reply; 30+ messages in thread
From: Ken Brownfield @ 2001-12-01 13:15 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-kernel
When updatedb kicked off on my 2.4.16 6-way Xeon 4GB box this morning, I
had an unfortunate flashback:
5:02am up 2 days, 1 min, 59 users, load average: 5.66, 4.86, 3.60
741 processes: 723 sleeping, 4 running, 0 zombie, 14 stopped
CPU states: 0.2% user, 77.3% system, 0.0% nice, 22.3% idle
Mem: 3351664K av, 3346504K used, 5160K free, 0K shrd, 498048K buff
Swap: 1052248K av, 282608K used, 769640K free 2531892K cached
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
2117 root 15 5 580 580 408 R N 0 99.9 0.0 17:19 updatedb
2635 kb 12 0 1696 1556 1216 R 0 99.9 0.0 4:16 smbd
2672 root 17 10 4212 4212 492 D N 0 94.7 0.1 1:39 rsync
2609 root 2 -20 1284 1284 672 R < 0 81.2 0.0 4:02 top
9 root 9 0 0 0 0 SW 0 80.7 0.0 42:50 kswapd
22879 kb 9 0 11548 6316 1684 S 0 11.8 0.1 7:33 smbd
Under varied load I'm not seeing the kswapd issue, but it looks like
updatedb combined with one or two samba transfers does still reproduce
the problem easily, and adding rsync or NFS transfers to the mix makes
kswapd peg at 99%.
I noticed because I was trying to do kernel patches and compiles using a
partition NFS-mounted from this machine. I guess it sometimes pays to
be up at 5am...
Unfortunately it's difficult for me to reboot this machine to update the
kernel (59 users) but I will try to reproduce the problem on a separate
machine this weekend or early next week. And I don't have profiling on,
so that will have to wait as well. :-(
Andrea, do you have a patch vs. 2.4.16 of your original solution to this
problem that I could test out? I'd rather just change one thing at a
time rather than switching completely to an -aa kernel.
Grrrr!
Thanks much,
--
Ken.
brownfld@irridia.com
On Tue, Nov 20, 2001 at 06:50:50AM +0000, Linus Torvalds wrote:
| In article <20011119235422.F10597@asooo.flowerfire.com>,
| Ken Brownfield <brownfld@irridia.com> wrote:
| >kswapd goes up to 5-10% CPU (vs 3-6) but it finishes without issue or
| >apparent interactivity problems. I'm keeping it in while( 1 ), but it's
| >been predictable so far.
| >
| >3-10 is a lot better than 99, but is kswapd really going to eat that
| >much CPU in an essentially allocation-less state?
|
| Well, it's obviously not allocation-less: updatedb will really hit on
| the dcache and icache (which are both in the NORMAL zone only, which is
| why Andrea asked for it), and obviously your Oracle load itself seems to
| be happily paging stuff around, which causes a lot of allocations for
| page-ins.
|
| It only _looks_ static, because once you find the proper "balance", the
| VM numbers themselves shouldn't change under a constant load.
|
| We could make kswapd use less CPU time, of course, simply by making the
| actual working processes do more of the work to free memory. The total
| work ends up being the same, though, and the advantage of kswapd is that
| it tends to make the freeing slightly more asynchronous, which helps
| throughput.
|
| The _disadvantage_ of kswapd is that if it goes crazy and uses up all
| CPU time, you get bad results ;)
|
| But it doesn't sound crazy in your load. I'd be happier if the VM took
| less CPU, of course, but for now we seem to be doing ok.
|
| Linus
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Slight Return (was Re: [VM] 2.4.14/15-pre4 too "swap-happy"?)
2001-12-01 13:15 ` Slight Return (was Re: [VM] 2.4.14/15-pre4 too "swap-happy"?) Ken Brownfield
@ 2001-12-08 13:12 ` Ken Brownfield
2001-12-09 18:51 ` Marcelo Tosatti
0 siblings, 1 reply; 30+ messages in thread
From: Ken Brownfield @ 2001-12-08 13:12 UTC (permalink / raw)
To: linux-kernel
Just a quick followup to this, which is still a near show-stopper issue
for me.
This is easy to reproduce for me if I run updatedb locally, and then run
updatedb on a remote machine that's scanning an NFS-mounted filesystem
from the original local machine. Instant kswapd saturation, especially
on large filesystems.
Doing updatedb on NFS-mounted filesystems also seems to cause kswapd to
peg on the NFS-client side as well.
I recently realized that slocate (at least on RH6.2 w/ 2.4 kernels) does
not seem to properly detect NFS when provided "-f nfs"... Urgh.
Also something I noticed in slab_info (other info below):
inode_cache 369188 1027256 480 59716 128407 1 : 124 62
dentry_cache 256380 705510 128 14946 23517 1 : 252 126
buffer_head 46961 47800 96 1195 1195 1 : 252 126
That seems like a TON of {dentry,inode}_cache on a 1GB (HIGHMEM) machine.
I'd try 10_vm-19 but it doesn't apply cleanly for me.
Thanks for any input or ports of 10_vm-19 to 2.4.17-pre6. ;)
--
Ken.
brownfld@irridia.com
total: used: free: shared: buffers: cached:
Mem: 1054011392 900526080 153485312 0 67829760 174866432
Swap: 2149548032 581632 2148966400
MemTotal: 1029308 kB
MemFree: 149888 kB
MemShared: 0 kB
Buffers: 66240 kB
Cached: 170376 kB
SwapCached: 392 kB
Active: 202008 kB
Inactive: 40380 kB
HighTotal: 131008 kB
HighFree: 30604 kB
LowTotal: 898300 kB
LowFree: 119284 kB
SwapTotal: 2099168 kB
SwapFree: 2098600 kB
Mem: 1029308K av, 886144K used, 143164K free, 0K shrd, 66240K buff
Swap: 2099168K av, 568K used, 2098600K free 170872K cached
On Sat, Dec 01, 2001 at 07:15:02AM -0600, Ken Brownfield wrote:
| When updatedb kicked off on my 2.4.16 6-way Xeon 4GB box this morning, I
| had an unfortunate flashback:
|
| 5:02am up 2 days, 1 min, 59 users, load average: 5.66, 4.86, 3.60
| 741 processes: 723 sleeping, 4 running, 0 zombie, 14 stopped
| CPU states: 0.2% user, 77.3% system, 0.0% nice, 22.3% idle
| Mem: 3351664K av, 3346504K used, 5160K free, 0K shrd, 498048K buff
| Swap: 1052248K av, 282608K used, 769640K free 2531892K cached
|
| PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
| 2117 root 15 5 580 580 408 R N 0 99.9 0.0 17:19 updatedb
| 2635 kb 12 0 1696 1556 1216 R 0 99.9 0.0 4:16 smbd
| 2672 root 17 10 4212 4212 492 D N 0 94.7 0.1 1:39 rsync
| 2609 root 2 -20 1284 1284 672 R < 0 81.2 0.0 4:02 top
| 9 root 9 0 0 0 0 SW 0 80.7 0.0 42:50 kswapd
| 22879 kb 9 0 11548 6316 1684 S 0 11.8 0.1 7:33 smbd
|
| Under varied load I'm not seeing the kswapd issue, but it looks like
| updatedb combined with one or two samba transfers does still reproduce
| the problem easily, and adding rsync or NFS transfers to the mix makes
| kswapd peg at 99%.
|
| I noticed because I was trying to do kernel patches and compiles using a
| partition NFS-mounted from this machine. I guess it sometimes pays to
| be up at 5am...
|
| Unfortunately it's difficult for me to reboot this machine to update the
| kernel (59 users) but I will try to reproduce the problem on a separate
| machine this weekend or early next week. And I don't have profiling on,
| so that will have to wait as well. :-(
|
| Andrea, do you have a patch vs. 2.4.16 of your original solution to this
| problem that I could test out? I'd rather just change one thing at a
| time rather than switching completely to an -aa kernel.
|
| Grrrr!
|
| Thanks much,
| --
| Ken.
| brownfld@irridia.com
|
|
| On Tue, Nov 20, 2001 at 06:50:50AM +0000, Linus Torvalds wrote:
| | In article <20011119235422.F10597@asooo.flowerfire.com>,
| | Ken Brownfield <brownfld@irridia.com> wrote:
| | >kswapd goes up to 5-10% CPU (vs 3-6) but it finishes without issue or
| | >apparent interactivity problems. I'm keeping it in while( 1 ), but it's
| | >been predictable so far.
| | >
| | >3-10 is a lot better than 99, but is kswapd really going to eat that
| | >much CPU in an essentially allocation-less state?
| |
| | Well, it's obviously not allocation-less: updatedb will really hit on
| | the dcache and icache (which are both in the NORMAL zone only, which is
| | why Andrea asked for it), and obviously your Oracle load itself seems to
| | be happily paging stuff around, which causes a lot of allocations for
| | page-ins.
| |
| | It only _looks_ static, because once you find the proper "balance", the
| | VM numbers themselves shouldn't change under a constant load.
| |
| | We could make kswapd use less CPU time, of course, simply by making the
| | actual working processes do more of the work to free memory. The total
| | work ends up being the same, though, and the advantage of kswapd is that
| | it tends to make the freeing slightly more asynchronous, which helps
| | throughput.
| |
| | The _disadvantage_ of kswapd is that if it goes crazy and uses up all
| | CPU time, you get bad results ;)
| |
| | But it doesn't sound crazy in your load. I'd be happier if the VM took
| | less CPU, of course, but for now we seem to be doing ok.
| |
| | Linus
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Slight Return (was Re: [VM] 2.4.14/15-pre4 too "swap-happy"?)
2001-12-08 13:12 ` Ken Brownfield
@ 2001-12-09 18:51 ` Marcelo Tosatti
2001-12-10 6:56 ` Ken Brownfield
0 siblings, 1 reply; 30+ messages in thread
From: Marcelo Tosatti @ 2001-12-09 18:51 UTC (permalink / raw)
To: Ken Brownfield; +Cc: linux-kernel
On Sat, 8 Dec 2001, Ken Brownfield wrote:
> Just a quick followup to this, which is still a near show-stopper issue
> for me.
>
> This is easy to reproduce for me if I run updatedb locally, and then run
> updatedb on a remote machine that's scanning an NFS-mounted filesystem
> from the original local machine. Instant kswapd saturation, especially
> on large filesystems.
>
> Doing updatedb on NFS-mounted filesystems also seems to cause kswapd to
> peg on the NFS-client side as well.
Can you reproduce the problem without the over NFS updatedb?
Thanks
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Slight Return (was Re: [VM] 2.4.14/15-pre4 too "swap-happy"?)
2001-12-09 18:51 ` Marcelo Tosatti
@ 2001-12-10 6:56 ` Ken Brownfield
0 siblings, 0 replies; 30+ messages in thread
From: Ken Brownfield @ 2001-12-10 6:56 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: linux-kernel
Yes, any kind of fairly heavy, spread-out I/O combined with updatedb
will do the trick, like samba. NFS isn't required, it just seems to be
a particularly good trigger.
It seems to be anything that hits the inode/dentry caches hard, actually,
and it doesn't always happen when freepages (or its 2.4.x equivalent) has
been reached. I had a little applet that malloc'ed and memcpy'ed 1GB of RAM
and then exited, which doesn't really help like it did before 2.4.15-pre[56].
It also happens for me a lot more with my 4GB machines, though I have
seen it on my 1GB HIGHMEM boxes as well. If the problem is related to
scanning the cache, perhaps more RAM simply makes it worse.
I'm planning on trying Andrew Morton's patches as soon as I'm able.
Thanks,
--
Ken.
brownfld@irridia.com
^ permalink raw reply [flat|nested] 30+ messages in thread
end of thread, other threads:[~2001-12-10 6:57 UTC | newest]
Thread overview: 30+ messages
[not found] <200111191801.fAJI1l922388@neosilicon.transmeta.com>
2001-11-19 18:07 ` [VM] 2.4.14/15-pre4 too "swap-happy"? Linus Torvalds
2001-11-19 18:31 ` Ken Brownfield
2001-11-19 19:23 ` Linus Torvalds
2001-11-19 23:39 ` Ken Brownfield
2001-11-19 23:52 ` Linus Torvalds
2001-11-20 0:18 ` M. Edward (Ed) Borasky
2001-11-20 0:25 ` Ken Brownfield
2001-11-20 0:31 ` Linus Torvalds
2001-11-20 3:09 ` Ken Brownfield
2001-11-20 3:30 ` Linus Torvalds
2001-11-20 3:32 ` Andrea Arcangeli
2001-11-20 5:54 ` Ken Brownfield
2001-11-20 6:50 ` Linus Torvalds
2001-12-01 13:15 ` Slight Return (was Re: [VM] 2.4.14/15-pre4 too "swap-happy"?) Ken Brownfield
2001-12-08 13:12 ` Ken Brownfield
2001-12-09 18:51 ` Marcelo Tosatti
2001-12-10 6:56 ` Ken Brownfield
2001-11-19 19:30 ` [VM] 2.4.14/15-pre4 too "swap-happy"? Ken Brownfield
2001-11-19 18:26 ` Marcelo Tosatti
2001-11-19 19:44 ` Slo Mo Snail
2001-11-20 0:48 Yan, Noah
-- strict thread matches above, loose matches on Subject: below --
2001-11-15 8:38 janne
2001-11-15 9:05 ` janne
2001-11-15 17:44 ` Mike Galbraith
2001-11-16 0:14 ` janne
[not found] <200111141243.fAEChS915731@neosilicon.transmeta.com>
2001-11-14 16:34 ` Linus Torvalds
2001-11-19 18:01 ` Sebastian Dröge
2001-11-19 18:18 ` Simon Kirby
2001-11-14 12:44 Sebastian Dröge
2001-11-14 15:00 ` Rik van Riel