public inbox for linux-kernel@vger.kernel.org
* 2.4.9-ac15 painfully sluggish
@ 2001-09-25 15:36 Pau Aliagas
  2001-09-25 16:16 ` Rik van Riel
  0 siblings, 1 reply; 15+ messages in thread
From: Pau Aliagas @ 2001-09-25 15:36 UTC (permalink / raw)
  To: lkml; +Cc: Alan Cox


My impressions of Alan's latest kernel are very bad, compared to
2.4.9-ac10, which I'm happily running on a Pentium III laptop with 128MB
RAM and 400MB swap.

It hardly touches swap (at least not according to top), but the IDE disk
never stops. If I run setiathome it's absolutely impossible to do
anything at all. Large applications that used to take seconds to start
now take minutes!!

Just my 2c
Pau


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 15:36 2.4.9-ac15 painfully sluggish Pau Aliagas
@ 2001-09-25 16:16 ` Rik van Riel
  2001-09-25 16:25   ` Pau Aliagas
  0 siblings, 1 reply; 15+ messages in thread
From: Rik van Riel @ 2001-09-25 16:16 UTC (permalink / raw)
  To: Pau Aliagas; +Cc: lkml, Alan Cox

On Tue, 25 Sep 2001, Pau Aliagas wrote:

> My impressions of Alan's latest kernel are very bad, compared to
> 2.4.9-ac10, which I'm happily running on a Pentium III laptop with 128MB
> RAM and 400MB swap.
>
> It hardly touches swap (at least not according to top), but the IDE disk
> never stops. If I run setiathome it's absolutely impossible to do
> anything at all. Large applications that used to take seconds to start
> now take minutes!!

Interesting; the VM changes done in -ac15 seem to have
improved performance in all the reports I've received
so far.

Could you give me some info on how much memory is being
used by the various caches (first lines of top) and maybe
a few lines of vmstat output ?

Let's try to fix this problem...

regards,

Rik
--
IA64: a worthy successor to the i860.

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/



* Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 16:16 ` Rik van Riel
@ 2001-09-25 16:25   ` Pau Aliagas
  2001-09-25 16:37     ` Rik van Riel
  2001-09-25 19:32     ` [PATCH] " Rik van Riel
  0 siblings, 2 replies; 15+ messages in thread
From: Pau Aliagas @ 2001-09-25 16:25 UTC (permalink / raw)
  To: Rik van Riel; +Cc: lkml, Alan Cox

On Tue, 25 Sep 2001, Rik van Riel wrote:

> Could you give me some info on how much memory is being
> used by the various caches (first lines of top) and maybe
> a few lines of vmstat output ?

Only setiathome was really running, while the rest of the processes were
just in D state; the IDE disk never seemed to stop.

Once I stopped the CPU hog (that's seti), kapm-idled gained the CPU, but
again swap was barely being used, only a few KB (about 2MB maximum).

The problem seems to be related to pages not being moved to swap but
being discarded somehow and reread later on... just a guess.

If you need any debugging just tell me what and I'll give it a try.

Pau



* Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 16:25   ` Pau Aliagas
@ 2001-09-25 16:37     ` Rik van Riel
  2001-09-25 17:04       ` Pau Aliagas
  2001-09-25 19:32     ` [PATCH] " Rik van Riel
  1 sibling, 1 reply; 15+ messages in thread
From: Rik van Riel @ 2001-09-25 16:37 UTC (permalink / raw)
  To: Pau Aliagas; +Cc: lkml, Alan Cox

On Tue, 25 Sep 2001, Pau Aliagas wrote:
> On Tue, 25 Sep 2001, Rik van Riel wrote:
>
> > Could you give me some info on how much memory is being
> > used by the various caches (first lines of top) and maybe
> > a few lines of vmstat output ?

> If you need any debugging just tell me what and I'll give it a try.

Could you send me one screen's worth of output from top
and 5 lines from 'vmstat -a 5' ?

thanks,

Rik
--
IA64: a worthy successor to the i860.

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/



* Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 16:37     ` Rik van Riel
@ 2001-09-25 17:04       ` Pau Aliagas
  2001-09-25 17:31         ` Rik van Riel
  0 siblings, 1 reply; 15+ messages in thread
From: Pau Aliagas @ 2001-09-25 17:04 UTC (permalink / raw)
  To: Rik van Riel; +Cc: lkml, Alan Cox

On Tue, 25 Sep 2001, Rik van Riel wrote:

> Could you send me one screen's worth of output from top
> and 5 lines from 'vmstat -a 5' ?

This is what happens just trying to start evolution for the first time,
nothing else. With ac10 it works perfectly. With ac15 it takes minutes,
at least 4.

It was even worse the first time I ran the ac15 kernel, when updatedb
ran find over all the directories.

Now I can hardly write this email with pine.

[pau@pau pau]$ vmstat -n 5
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 2  6  1   2392   3040    476   9644   5   7   585    17  170   479  85   6   9
 2 12  0   2520   2800    408   9028  25  27  1425    35  208   184  92   6   2
 2  4  0   2596   2812    408   9500  15  22  1174    24  204   304  79   6  16
 4  5  1   2316   2800    432   9044  47   5  1150     5  227   420  85   7   8
 2  4  0   2344   2848    412   8988  30  18  1210    26  238   364  77  11  12
 3  3  0   2480   2800    416   8916  15  27  1050    30  188   223  93   6   2
 1  3  0   2520   3832    436   8688  22  24  1110    26  195   351  85   5   9
 2  9  0   2628   2800    420   9808   4  31  1342    32  191   322  90   8   3
 1 10  0   2616   2800    412  10032  33  26  1459    26  241   244  94   6   0
 3  7  1   2584   2812    420  10476  10  10  1494    10  206   221  53   6  41
 1  7  0   2440   2780    468   9944  49   0  1102     0  187   228  95   5   0

  6:58pm  up 14 min,  5 users,  load average: 7,68, 6,21, 3,22
112 processes: 110 sleeping, 2 running, 0 zombie, 0 stopped
CPU states:  2,7% user,  5,2% system, 92,0% nice,  0,0% idle
Mem:   127072K av,  124272K used,    2800K free,     564K shrd,     440K buff
Swap:  401536K av,    2132K used,  399404K free                    9368K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
 1008 setiatho  19  19 14796  14M   152 R N  92,4 11,6  11:37 setiathome
    4 root       9   0     0    0     0 SW    3,3  0,0   0:20 kswapd
 1130 pau        9   0  1916 1916   512 S     1,2  1,5   0:02 sawfish
 1516 root      12   0   568  568   324 R     1,2  0,4   0:05 top
 1045 root       9   0 25712  16M   616 S     0,9 13,5   0:21 X
 1275 pau        9   0  1376 1376   540 D     0,3  1,0   0:13 deskguide_apple
 1403 pau        9   0  9512 9512   340 S     0,3  7,4   0:04 galeon-bin
    1 root       8   0   100  100    28 S     0,0  0,0   0:04 init
    2 root       9   0     0    0     0 SW    0,0  0,0   0:00 keventd
    3 root      19  19     0    0     0 SWN   0,0  0,0   0:00 ksoftirqd_CPU0
    5 root       9   0     0    0     0 SW    0,0  0,0   0:00 kreclaimd
    6 root       9   0     0    0     0 SW    0,0  0,0   0:00 bdflush
    7 root       9   0     0    0     0 SW    0,0  0,0   0:00 kupdated
   89 root       9   0     0    0     0 SW    0,0  0,0   0:00 khubd
  171 root      -1 -20     0    0     0 SW<   0,0  0,0   0:00 mdrecoveryd
  536 root       9   0   100  100     0 S     0,0  0,0   0:00 syslogd
  541 root       9   0   616  616     0 S     0,0  0,4   0:00 klogd
  553 rpc        9   0    92   92     0 S     0,0  0,0   0:00 portmap
  633 root       9   0  1968 1968  1712 S     0,0  1,5   0:00 ntpd
  681 root       9   0   132  132    28 S     0,0  0,1   0:00 automount
  693 daemon     9   0    76   76     0 S     0,0  0,0   0:00 atd
  708 named      9   0  1756 1756    92 S     0,0  1,3   0:00 named
  712 named      9   0  1756 1756    92 S     0,0  1,3   0:00 named
  713 named      9   0  1756 1756    92 S     0,0  1,3   0:00 named
  714 named      9   0  1756 1756    92 S     0,0  1,3   0:00 named
  715 named      9   0  1756 1756    92 S     0,0  1,3   0:00 named
  724 root       9   0   200  200     0 S     0,0  0,1   0:00 sshd
  738 root       9   0   216  200     0 S     0,0  0,1   0:00 xinetd
  760 lp         9   0   152  152     0 S     0,0  0,1   0:00 lpd
  775 root       9   0   180  180     0 S     0,0  0,1   0:00 safe_mysqld
  803 root       9   0   140  140    56 S     0,0  0,1   0:00 noflushd
  821 mysql      9   0  2700 2700    16 S     0,0  2,1   0:00 mysqld
  827 mysql      9   0  2700 2700    16 S     0,0  2,1   0:00 mysqld
  828 mysql      9   0  2700 2700    16 S     0,0  2,1   0:00 mysqld
  836 root       9   0   580  580    40 S     0,0  0,4   0:00 sendmail
  847 mysql      9   0  2700 2700    16 S     0,0  2,1   0:00 mysqld
  879 junkbust   8   0   524  524     0 S     0,0  0,4   0:00 junkbuster
  892 root       9   0    84   84    12 S     0,0  0,0   0:00 gpm
  904 root       8   0   152  152    44 S     0,0  0,1   0:00 crond
  944 xfs        9   0  3224 3224     0 S     0,0  2,5   0:00 xfs
  969 root       9   0    80   80     0 S     0,0  0,0   0:00 rhnsd
 1019 root       9   0     0    0     0 SW    0,0  0,0   0:00 kapm-idled
 1031 root       9   0    64   64     0 S     0,0  0,0   0:00 mingetty
 1032 root       9   0    64   64     0 S     0,0  0,0   0:00 mingetty
 1033 root       9   0    64   64     0 S     0,0  0,0   0:00 mingetty
 1034 root       9   0    64   64     0 S     0,0  0,0   0:00 mingetty
 1035 root       9   0    64   64     0 S     0,0  0,0   0:00 mingetty
 1036 root       9   0    64   64     0 S     0,0  0,0   0:00 mingetty
 1037 root       9   0   268  268   140 S     0,0  0,2   0:00 gdm


Pau



* Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 17:04       ` Pau Aliagas
@ 2001-09-25 17:31         ` Rik van Riel
  2001-09-25 18:01           ` Pau Aliagas
  0 siblings, 1 reply; 15+ messages in thread
From: Rik van Riel @ 2001-09-25 17:31 UTC (permalink / raw)
  To: Pau Aliagas; +Cc: lkml, Alan Cox

On Tue, 25 Sep 2001, Pau Aliagas wrote:
> On Tue, 25 Sep 2001, Rik van Riel wrote:
>
> > Could you send me one screen's worth of output from top
> > and 5 lines from 'vmstat -a 5' ?

> [pau@pau pau]$ vmstat -n 5
>    procs                      memory    swap          io     system         cpu
>  r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
>  2  6  1   2392   3040    476   9644   5   7   585    17  170   479  85   6   9
>  2 12  0   2520   2800    408   9028  25  27  1425    35  208   184  92   6   2
>  2  4  0   2596   2812    408   9500  15  22  1174    24  204   304  79   6  16

Wow, these are a LOT of reads and a very small cache.

I guess the old page_launder() might be misbehaving in
this case; I've just finished the patch for a new one
(which seems to work nicely in the limited tests I've
thrown at it).
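
As a sanity check on that read estimate, the bi (blocks in) column of the
quoted vmstat capture can be averaged with a short awk sketch; field 10 is
bi in this vmstat layout, and skipping two header lines matches vmstat's
output format:

```shell
# average the bi (blocks read in) column of a saved vmstat capture;
# NR > 2 skips the two header lines, field 10 is bi in this layout
avg_bi() {
    awk 'NR > 2 { sum += $10; n++ } END { if (n) printf "%d\n", sum / n }'
}
```

Fed the eleven sample lines Pau posted, this reports roughly 1190 blocks
read in per 5-second interval, against a page cache of under 10 MB.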

>   6:58pm  up 14 min,  5 users,  load average: 7,68, 6,21, 3,22
> 112 processes: 110 sleeping, 2 running, 0 zombie, 0 stopped
> CPU states:  2,7% user,  5,2% system, 92,0% nice,  0,0% idle
> Mem:   127072K av,  124272K used,    2800K free,     564K shrd,     440K buff
> Swap:  401536K av,    2132K used,  399404K free                    9368K cached

Here things get "interesting" ...

total memory:                               127 MB
The memory taken by all your processes:      55 MB
buffer + cache:                              10 MB
-----------------------------------------------------
missing:                                     62 MB
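
The accounting above is easy to reproduce; a sketch, where the 55 MB
process figure is Rik's own estimate of the summed RSS column rather than
a number printed by top:

```shell
# reproduce the memory accounting above (all figures in MB, taken from
# the quoted top header; 55 MB is Rik's estimate for the summed RSS)
total=127
processes=55
buffer_cache=10
missing=$((total - processes - buffer_cache))
echo "missing: ${missing} MB"
```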

It would be interesting to get a listing of /proc/slabinfo,
in particular those lines which have a large number in the
4th or 5th column...
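
Assuming the columns Rik means are the slab counts (awk fields 5 and 6 in
a slabinfo 1.1 listing, counting the cache name as field 1), a filter
might look like this; the threshold of 100 slabs is an arbitrary choice:

```shell
# flag slab caches holding many slabs; in slabinfo version 1.1 the
# fields are: name active-objs num-objs obj-size active-slabs
# num-slabs pages-per-slab, so $5/$6 are the slab counts
big_slabs() {
    awk 'NR > 1 && ($5 > 100 || $6 > 100) { print $1, $5, $6 }'
}
```

Something like `big_slabs < /proc/slabinfo` would then print the
offenders (this column layout is specific to 2.4-era kernels).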


regards,

Rik
--
IA64: a worthy successor to the i860.

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/



* Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 17:31         ` Rik van Riel
@ 2001-09-25 18:01           ` Pau Aliagas
  0 siblings, 0 replies; 15+ messages in thread
From: Pau Aliagas @ 2001-09-25 18:01 UTC (permalink / raw)
  To: Rik van Riel; +Cc: lkml, Alan Cox

On Tue, 25 Sep 2001, Rik van Riel wrote:

> > [pau@pau pau]$ vmstat -n 5
> >    procs                      memory    swap          io     system         cpu
> >  r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
> >  2  6  1   2392   3040    476   9644   5   7   585    17  170   479  85   6   9
> >  2 12  0   2520   2800    408   9028  25  27  1425    35  208   184  92   6   2
> >  2  4  0   2596   2812    408   9500  15  22  1174    24  204   304  79   6  16
>
> Wow, these are a LOT of reads and a very small cache.

> Here things get "interesting" ...
>
> total memory:                               127 MB
> The memory taken by all your processes:      55 MB
> buffer + cache:                              10 MB
> -----------------------------------------------------
> missing:                                     62 MB
>
> It would be interesting to get a listing of /proc/slabinfo,
> in particular those lines which have a large number in the
> 4th or 5th column...

1. Before starting evolution:
slabinfo - version: 1.1
kmem_cache            59     68    112    2    2    1
ip_conntrack          11     33    352    3    3    1
ip_fib_hash           12    113     32    1    1    1
uhci_urb_priv          1     67     56    1    1    1
ip_mrt_cache           0      0     96    0    0    1
tcp_tw_bucket          1     40     96    1    1    1
tcp_bind_bucket       12    113     32    1    1    1
tcp_open_request       0      0     64    0    0    1
inet_peer_cache        1     59     64    1    1    1
ip_dst_cache          41    100    192    5    5    1
arp_cache              2     30    128    1    1    1
blkdev_requests      512    520     96   13   13    1
dnotify cache          0      0     20    0    0    1
file lock cache        4     42     92    1    1    1
fasync cache           1    202     16    1    1    1
uid_cache              7    113     32    1    1    1
skbuff_head_cache    129    240    160   10   10    1
sock                 300    333    832   37   37    2
sigqueue            1024   1044    132   36   36    1
kiobuf                 0      0   8768    0    0    4
cdev_cache            23    118     64    2    2    1
bdev_cache             6     59     64    1    1    1
mnt_cache             16     59     64    1    1    1
inode_cache         1036   2178    416  242  242    1
dentry_cache        1019   2730    128   91   91    1
dquot                  0      0    128    0    0    1
filp                3116   3120     96   78   78    1
names_cache            0      1   4096    0    1    1
buffer_head         3816   5240     96  131  131    1
mm_struct             83     96    160    4    4    1
vm_area_struct      4536   4897     64   83   83    1
fs_cache              82    118     64    2    2    1
files_cache           82     90    416   10   10    1
signal_act            87     90   1312   30   30    1
size-131072(DMA)       0      0 131072    0    0   32
size-131072            0      0 131072    0    0   32
size-65536(DMA)        0      0  65536    0    0   16
size-65536             0      0  65536    0    0   16
size-32768(DMA)        0      0  32768    0    0    8
size-32768             1      1  32768    1    1    8
size-16384(DMA)        0      0  16384    0    0    4
size-16384             0      0  16384    0    0    4
size-8192(DMA)         0      0   8192    0    0    2
size-8192              6      6   8192    6    6    2
size-4096(DMA)         0      0   4096    0    0    1
size-4096             33     33   4096   33   33    1
size-2048(DMA)         0      0   2048    0    0    1
size-2048              7      8   2048    4    4    1
size-1024(DMA)         0      0   1024    0    0    1
size-1024             57     60   1024   15   15    1
size-512(DMA)          0      0    512    0    0    1
size-512              16     24    512    2    3    1
size-256(DMA)          0      0    256    0    0    1
size-256              21     30    256    2    2    1
size-128(DMA)          2     30    128    1    1    1
size-128             462    510    128   17   17    1
size-64(DMA)           0      0     64    0    0    1
size-64              242    295     64    5    5    1
size-32(DMA)           2    113     32    1    1    1
size-32              429   1017     32    9    9    1

2. While starting evolution (evolution-pine-importer looks through MANY
pine mboxes to see if it can import them, more or less like updatedb,
which checks all the files on the disk, so it looks like something
related to reading lots of files):

slabinfo - version: 1.1
kmem_cache            59     68    112    2    2    1
ip_conntrack           7     33    352    3    3    1
ip_fib_hash           12    113     32    1    1    1
uhci_urb_priv          1     67     56    1    1    1
ip_mrt_cache           0      0     96    0    0    1
tcp_tw_bucket          1     40     96    1    1    1
tcp_bind_bucket       12    113     32    1    1    1
tcp_open_request       0      0     64    0    0    1
inet_peer_cache        1     59     64    1    1    1
ip_dst_cache          55    100    192    5    5    1
arp_cache              2     30    128    1    1    1
blkdev_requests      512    520     96   13   13    1
dnotify cache          0      0     20    0    0    1
file lock cache        4     42     92    1    1    1
fasync cache           1    202     16    1    1    1
uid_cache              7    113     32    1    1    1
skbuff_head_cache    161    240    160   10   10    1
sock                 324    324    832   36   36    2
sigqueue            1024   1044    132   36   36    1
kiobuf                 0      0   8768    0    0    4
cdev_cache            21    118     64    2    2    1
bdev_cache             6     59     64    1    1    1
mnt_cache             16     59     64    1    1    1
inode_cache          994   2178    416  242  242    1
dentry_cache         984   2730    128   91   91    1
dquot                  0      0    128    0    0    1
filp                3116   3120     96   78   78    1
names_cache            1      4   4096    1    4    1
buffer_head         2229   3440     96   86   86    1
mm_struct             86     96    160    4    4    1
vm_area_struct      4929   4956     64   84   84    1
fs_cache              85    118     64    2    2    1
files_cache           85     90    416   10   10    1
signal_act            90     93   1312   31   31    1
size-131072(DMA)       0      0 131072    0    0   32
size-131072            0      0 131072    0    0   32
size-65536(DMA)        0      0  65536    0    0   16
size-65536             0      0  65536    0    0   16
size-32768(DMA)        0      0  32768    0    0    8
size-32768             1      2  32768    1    2    8
size-16384(DMA)        0      0  16384    0    0    4
size-16384             0      0  16384    0    0    4
size-8192(DMA)         0      0   8192    0    0    2
size-8192              6      6   8192    6    6    2
size-4096(DMA)         0      0   4096    0    0    1
size-4096             37     38   4096   37   38    1
size-2048(DMA)         0      0   2048    0    0    1
size-2048              9     12   2048    5    6    1
size-1024(DMA)         0      0   1024    0    0    1
size-1024             57     60   1024   15   15    1
size-512(DMA)          0      0    512    0    0    1
size-512              20     32    512    4    4    1
size-256(DMA)          0      0    256    0    0    1
size-256              39     60    256    4    4    1
size-128(DMA)          2     30    128    1    1    1
size-128             503    570    128   19   19    1
size-64(DMA)           0      0     64    0    0    1
size-64              248    295     64    5    5    1
size-32(DMA)           2    113     32    1    1    1
size-32              442   1017     32    9    9    1

BTW, it has taken me more than 10 minutes to write this email, and
evolution still hasn't started.

3. Last slabinfo (the disk still "plays"):

slabinfo - version: 1.1
kmem_cache            59     68    112    2    2    1
ip_conntrack          10     22    352    2    2    1
ip_fib_hash           12    113     32    1    1    1
uhci_urb_priv          1     67     56    1    1    1
ip_mrt_cache           0      0     96    0    0    1
tcp_tw_bucket          1     40     96    1    1    1
tcp_bind_bucket       11    113     32    1    1    1
tcp_open_request       0      0     64    0    0    1
inet_peer_cache        2     59     64    1    1    1
ip_dst_cache          45    100    192    5    5    1
arp_cache              2     30    128    1    1    1
blkdev_requests      512    520     96   13   13    1
dnotify cache          0      0     20    0    0    1
file lock cache        9     42     92    1    1    1
fasync cache           1    202     16    1    1    1
uid_cache              7    113     32    1    1    1
skbuff_head_cache    129    264    160   11   11    1
sock                 344    351    832   39   39    2
sigqueue             433    435    132   15   15    1
kiobuf                 0      0   8768    0    0    4
cdev_cache            21    118     64    2    2    1
bdev_cache             6     59     64    1    1    1
mnt_cache             16     59     64    1    1    1
inode_cache         1023   2178    416  242  242    1
dentry_cache        1012   2730    128   91   91    1
dquot                  0      0    128    0    0    1
filp                3116   3120     96   78   78    1
names_cache            0      3   4096    0    3    1
buffer_head         2072   2520     96   63   63    1
mm_struct             84     96    160    4    4    1
vm_area_struct      5156   5251     64   89   89    1
fs_cache              83    118     64    2    2    1
files_cache           83     90    416   10   10    1
signal_act            88     93   1312   31   31    1
size-131072(DMA)       0      0 131072    0    0   32
size-131072            0      0 131072    0    0   32
size-65536(DMA)        0      0  65536    0    0   16
size-65536             0      0  65536    0    0   16
size-32768(DMA)        0      0  32768    0    0    8
size-32768             1      1  32768    1    1    8
size-16384(DMA)        0      0  16384    0    0    4
size-16384             0      0  16384    0    0    4
size-8192(DMA)         0      0   8192    0    0    2
size-8192              6      6   8192    6    6    2
size-4096(DMA)         0      0   4096    0    0    1
size-4096             33     34   4096   33   34    1
size-2048(DMA)         0      0   2048    0    0    1
size-2048              7     10   2048    4    5    1
size-1024(DMA)         0      0   1024    0    0    1
size-1024             53     56   1024   14   14    1
size-512(DMA)          0      0    512    0    0    1
size-512              16     32    512    2    4    1
size-256(DMA)          0      0    256    0    0    1
size-256              21     30    256    2    2    1
size-128(DMA)          2     30    128    1    1    1
size-128             461    510    128   17   17    1
size-64(DMA)           0      0     64    0    0    1
size-64              249    295     64    5    5    1
size-32(DMA)           2    113     32    1    1    1
size-32              440   1017     32    9    9    1

While the disk is reading:
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 3 11  0   2492   2800    436   8300   9   8   610    14  160   308  89   6   6
 2  7  0   2412   3568    408   8232  20   2  1538     4  222   264  76   9  15
 1 10  0   2664   2800    432   9128   3  30  1388    40  212   201  89   8   3
 2  7  0   2616   3020    492   9360  26  25  1457    25  204   198  84   6  10
 4 10  0   2648   2800    420   8780   8   0  1377     6  201   233  88   7   4
 1 10  0   2528   2800    440   8736   9   0  1248     0  220   280  82   6  12
 1  6  1   2520   2800    448   8688   1   3  1350    30  212   230  75  12  13



Pau



* [PATCH]  Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 16:25   ` Pau Aliagas
  2001-09-25 16:37     ` Rik van Riel
@ 2001-09-25 19:32     ` Rik van Riel
  2001-09-25 20:42       ` Jasper Spaans
                         ` (3 more replies)
  1 sibling, 4 replies; 15+ messages in thread
From: Rik van Riel @ 2001-09-25 19:32 UTC (permalink / raw)
  To: Pau Aliagas; +Cc: lkml, Alan Cox, Roberto Orenstein, Francois Romieu

On Tue, 25 Sep 2001, Pau Aliagas wrote:

> The problem seems to be related to pages not being moved to swap but
> being discarded somehow and reread later on... just a guess.

I've made a small patch against 2.4.9-ac15 which should make
page_launder() smoother; it also makes some (very minor) tweaks
to page aging and updates various comments in vmscan.c.

It's below this email and at:

http://www.surriel.com/patches/2.4/2.4.9-ac15-age+launder

Since I failed to break 2.4.9-ac15 with this patch, even following
the instructions given to me by others, it would be nice to know
whether it can still break on your machines.
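
For anyone testing, a hypothetical helper to test-apply the diff before
really applying it (the filename and tree path in the usage line are
assumptions, not from the mail):

```shell
# hypothetical helper: test-apply a diff before really applying it;
# the --dry-run pass leaves the tree untouched if any hunk would fail
try_patch() {   # usage: try_patch <diff-file> <source-dir>
    ( cd "$2" && patch -p1 -s --dry-run < "$1" ) &&
    ( cd "$2" && patch -p1 -s < "$1" )
}
```

Against the real tree this would be something like
`try_patch 2.4.9-ac15-age+launder /usr/src/linux`.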

Please test,

Rik
--
IA64: a worthy successor to the i860.

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/




--- linux-2.4.9-ac15/mm/vmscan.c.orig	Mon Sep 24 17:30:48 2001
+++ linux-2.4.9-ac15/mm/vmscan.c	Tue Sep 25 13:33:16 2001
@@ -28,19 +28,12 @@

 static inline void age_page_up(struct page *page)
 {
-	unsigned long age = page->age;
-	age += PAGE_AGE_ADV;
-	if (age > PAGE_AGE_MAX)
-		age = PAGE_AGE_MAX;
-	page->age = age;
+	page->age = min((int) (page->age + PAGE_AGE_ADV), PAGE_AGE_MAX);
 }

 static inline void age_page_down(struct page *page)
 {
-	unsigned long age = page->age;
-	if (age > 0)
-		age -= PAGE_AGE_DECL;
-	page->age = age;
+	page->age -= min(PAGE_AGE_DECL, (int)page->age);
 }

 /*
@@ -108,6 +101,23 @@
 	pte_t pte;
 	swp_entry_t entry;

+	/* Don't look at this pte if it's been accessed recently. */
+	if (ptep_test_and_clear_young(page_table)) {
+		age_page_up(page);
+		return;
+	}
+
+	/*
+	 * If the page is on the active list, page aging is done in
+	 * refill_inactive_scan(), anonymous pages are aged here.
+	 * This is done so heavily shared pages (think libc.so)
+	 * don't get punished heavily while they are still in use.
+	 * The alternative would be to put anonymous pages on the
+	 * active list too, but that increases complexity (for now).
+	 */
+	if (!PageActive(page))
+		age_page_down(page);
+
 	/*
 	 * If we have plenty inactive pages on this
 	 * zone, skip it.
@@ -133,23 +143,6 @@
 	pte = ptep_get_and_clear(page_table);
 	flush_tlb_page(vma, address);

-	/* Don't look at this pte if it's been accessed recently. */
-	if (ptep_test_and_clear_young(page_table)) {
-		age_page_up(page);
-		return;
-	}
-
-	/*
-	 * If the page is on the active list, page aging is done in
-	 * refill_inactive_scan(), anonymous pages are aged here.
-	 * This is done so heavily shared pages (think libc.so)
-	 * don't get punished heavily while they are still in use.
-	 * The alternative would be to put anonymous pages on the
-	 * active list too, but that increases complexity (for now).
-	 */
-	if (!PageActive(page))
-		age_page_down(page);
-
 	/*
 	 * Is the page already in the swap cache? If so, then
 	 * we can just drop our reference to it without doing
@@ -447,6 +440,7 @@
 				page_count(page) > (1 + !!page->buffers)) {
 			del_page_from_inactive_clean_list(page);
 			add_page_to_active_list(page);
+			page->age = max((int)page->age, PAGE_AGE_START);
 			continue;
 		}

@@ -536,34 +530,56 @@
  * @gfp_mask: what operations we are allowed to do
  * @sync: are we allowed to do synchronous IO in emergencies ?
  *
- * When this function is called, we are most likely low on free +
- * inactive_clean pages. Since we want to refill those pages as
- * soon as possible, we'll make two loops over the inactive list,
- * one to move the already cleaned pages to the inactive_clean lists
- * and one to (often asynchronously) clean the dirty inactive pages.
+ * This function is called when we are low on free / inactive_clean
+ * pages, its purpose is to refill the free/clean list as efficiently
+ * as possible.
  *
- * In situations where kswapd cannot keep up, user processes will
- * end up calling this function. Since the user process needs to
- * have a page before it can continue with its allocation, we'll
- * do synchronous page flushing in that case.
+ * This means we do writes asynchronously as long as possible and will
+ * only sleep on IO when we don't have another option. Since writeouts
+ * cause disk seeks and make read IO slower, we skip writes altogether
+ * when the amount of dirty pages is small.
  *
  * This code is heavily inspired by the FreeBSD source code. Thanks
  * go out to Matthew Dillon.
  */
-#define MAX_LAUNDER 		(4 * (1 << page_cluster))
-#define CAN_DO_FS		(gfp_mask & __GFP_FS)
-#define CAN_DO_IO		(gfp_mask & __GFP_IO)
+#define	CAN_DO_FS	((gfp_mask & __GFP_FS) && should_write)
+#define	WRITE_LOW_WATER		5
+#define	WRITE_HIGH_WATER	10
 int page_launder(int gfp_mask, int sync)
 {
-	int launder_loop, maxscan, cleaned_pages, maxlaunder;
+	int maxscan, cleaned_pages, failed_pages, total;
 	struct list_head * page_lru;
 	struct page * page;

-	launder_loop = 0;
-	maxlaunder = 0;
+	/*
+	 * This flag determines if we should do writeouts of dirty data
+	 * or not.  When more than WRITE_HIGH_WATER percentage of the
+	 * pages we process would need to be written out we set this flag
+	 * and will do writeout, the flag is cleared once we go below
+	 * WRITE_LOW_WATER.  Note that only pages we actually process
+	 * get counted, ie. pages where we make it beyond the TryLock.
+	 *
+	 * XXX: These flags still need tuning.
+	 */
+	static int should_write = 0;
+
 	cleaned_pages = 0;
+	failed_pages = 0;
+
+	/*
+	 * The gfp_mask tells try_to_free_buffers() below if it should
+	 * do IO or may be allowed to wait on IO synchronously.
+	 *
+	 * Note that synchronous IO only happens when a page has not been
+	 * written out yet when we see it for a second time, this is done
+	 * through magic in try_to_free_buffers().
+	 */
+	if (!should_write)
+		gfp_mask &= ~(__GFP_WAIT | __GFP_IO);
+	else if (!sync)
+		gfp_mask &= ~__GFP_WAIT;

-dirty_page_rescan:
+	/* The main launder loop. */
 	spin_lock(&pagemap_lru_lock);
 	maxscan = nr_inactive_dirty_pages;
 	while ((page_lru = inactive_dirty_list.prev) != &inactive_dirty_list &&
@@ -580,26 +596,36 @@
 		}

 		/*
-		 * If the page is or was in use, we move it to the active
-		 * list. Note that pages in use by processes can easily
-		 * end up here to be unmapped later, but we just don't
-		 * want them clogging up the inactive list.
+		 * The page is in active use or really unfreeable. Move to
+		 * the active list and adjust the page age if needed.
 		 */
-		if ((PageReferenced(page) || page->age > 0 ||
-				page_count(page) > (1 + !!page->buffers) ||
-				page_ramdisk(page)) && !PageLocked(page)) {
+		if (PageReferenced(page) || page->age || page_ramdisk(page)) {
 			del_page_from_inactive_dirty_list(page);
 			add_page_to_active_list(page);
+			page->age = max((int)page->age, PAGE_AGE_START);
 			continue;
 		}

 		/*
-		 * If we have plenty free pages on a zone:
+		 * The page is still in the page tables of some process,
+		 * move it to the active list but leave page age at 0;
+		 * either swap_out() will make it freeable soon or it is
+		 * mlock()ed...
 		 *
-		 * 1) we avoid a writeout for that page if its dirty.
-		 * 2) if its a buffercache page, and not a pagecache
-		 * one, we skip it since we cannot move it to the
-		 * inactive clean list --- we have to free it.
+		 * The !PageLocked() test is to protect us from ourselves,
+		 * see the code around the writepage() call.
+		 */
+		if ((page_count(page) > (1 + !!page->buffers)) &&
+						!PageLocked(page)) {
+			del_page_from_inactive_dirty_list(page);
+			add_page_to_active_list(page);
+			continue;
+		}
+
+		/*
+		 * If this zone has plenty of pages free, don't spend time
+		 * on cleaning it but only move clean pages out of the way
+		 * so we won't have to scan those again.
 		 */
 		if (zone_free_plenty(page->zone)) {
 			if (!page->mapping || page_dirty(page)) {
@@ -633,11 +659,12 @@
 			if (!writepage)
 				goto page_active;

-			/* First time through? Move it to the back of the list */
-			if (!launder_loop || !CAN_DO_FS) {
+			/* Can't do it? Move it to the back of the list. */
+			if (!CAN_DO_FS) {
 				list_del(page_lru);
 				list_add(page_lru, &inactive_dirty_list);
 				UnlockPage(page);
+				failed_pages++;
 				continue;
 			}

@@ -664,9 +691,7 @@
 		 * buffer pages
 		 */
 		if (page->buffers) {
-			unsigned int buffer_mask;
 			int clearedbuf;
-			int freed_page = 0;
 			/*
 			 * Since we might be doing disk IO, we have to
 			 * drop the spinlock and take an extra reference
@@ -676,16 +701,8 @@
 			page_cache_get(page);
 			spin_unlock(&pagemap_lru_lock);

-			/* Will we do (asynchronous) IO? */
-			if (launder_loop && maxlaunder == 0 && sync)
-				buffer_mask = gfp_mask;				/* Do as much as we can */
-			else if (launder_loop && maxlaunder-- > 0)
-				buffer_mask = gfp_mask & ~__GFP_WAIT;			/* Don't wait, async write-out */
-			else
-				buffer_mask = gfp_mask & ~(__GFP_WAIT | __GFP_IO);	/* Don't even start IO */
-
-			/* Try to free metadata underlying the page. */
-			clearedbuf = try_to_release_page(page, buffer_mask);
+			/* Try to free the page buffers. */
+			clearedbuf = try_to_release_page(page, gfp_mask);

 			/*
 			 * Re-take the spinlock. Note that we cannot
@@ -697,11 +714,11 @@
 			/* The buffers were not freed. */
 			if (!clearedbuf) {
 				add_page_to_inactive_dirty_list(page);
+				failed_pages++;

 			/* The page was only in the buffer cache. */
 			} else if (!page->mapping) {
 				atomic_dec(&buffermem_pages);
-				freed_page = 1;
 				cleaned_pages++;

 			/* The page has more users besides the cache and us. */
@@ -722,13 +739,6 @@
 			UnlockPage(page);
 			page_cache_release(page);

-			/*
-			 * If we're freeing buffer cache pages, stop when
-			 * we've got enough free memory.
-			 */
-			if (freed_page && !free_shortage())
-				break;
-
 			continue;
 		} else if (page->mapping && !PageDirty(page)) {
 			/*
@@ -750,33 +760,22 @@
 			 */
 			del_page_from_inactive_dirty_list(page);
 			add_page_to_active_list(page);
+			page->age = max((int)page->age, PAGE_AGE_START);
 			UnlockPage(page);
 		}
 	}
 	spin_unlock(&pagemap_lru_lock);

 	/*
-	 * If we don't have enough free pages, we loop back once
-	 * to queue the dirty pages for writeout. When we were called
-	 * by a user process (that /needs/ a free page) and we didn't
-	 * free anything yet, we wait synchronously on the writeout of
-	 * MAX_SYNC_LAUNDER pages.
-	 *
-	 * We also wake up bdflush, since bdflush should, under most
-	 * loads, flush out the dirty pages before we have to wait on
-	 * IO.
+	 * Set the should_write flag for the next callers of page_launder().
+	 * If we go below the low watermark we stop the writeout of dirty
+	 * pages, writeout is started when we get above the high watermark.
 	 */
-	if (CAN_DO_IO && !launder_loop && free_shortage()) {
-		launder_loop = 1;
-		/* If we cleaned pages, never do synchronous IO. */
-		if (cleaned_pages)
-			sync = 0;
-		/* We only do a few "out of order" flushes. */
-		maxlaunder = MAX_LAUNDER;
-		/* Kflushd takes care of the rest. */
-		wakeup_bdflush(0);
-		goto dirty_page_rescan;
-	}
+	total = failed_pages + cleaned_pages;
+	if (should_write && failed_pages * 100 < WRITE_LOW_WATER * total)
+		should_write = 0;
+	else if (!should_write && failed_pages * 100 > WRITE_HIGH_WATER * total)
+		should_write = 1;

 	/* Return the number of pages moved to the inactive_clean list. */
 	return cleaned_pages;
@@ -998,27 +997,26 @@
 	return progress;
 }

-
-
+/*
+ * Worker function for kswapd and try_to_free_pages(); we get
+ * called whenever there is a shortage of free/inactive_clean
+ * pages.
+ *
+ * This function will also move pages to the inactive list,
+ * if needed.
+ */
 static int do_try_to_free_pages(unsigned int gfp_mask, int user)
 {
 	int ret = 0;

 	/*
-	 * If we're low on free pages, move pages from the
-	 * inactive_dirty list to the inactive_clean list.
-	 *
-	 * Usually bdflush will have pre-cleaned the pages
-	 * before we get around to moving them to the other
-	 * list, so this is a relatively cheap operation.
+	 * Eat memory from filesystem page cache, buffer cache,
+	 * dentry, inode and filesystem quota caches.
 	 */
-
-	if (free_shortage()) {
-		ret += page_launder(gfp_mask, user);
-		shrink_dcache_memory(0, gfp_mask);
-		shrink_icache_memory(0, gfp_mask);
-		shrink_dqcache_memory(DEF_PRIORITY, gfp_mask);
-	}
+	ret += page_launder(gfp_mask, user);
+	shrink_dcache_memory(0, gfp_mask);
+	shrink_icache_memory(0, gfp_mask);
+	shrink_dqcache_memory(DEF_PRIORITY, gfp_mask);

 	/*
 	 * If needed, we move pages from the active list
@@ -1028,7 +1026,7 @@
 		ret += refill_inactive(gfp_mask);

 	/*
-	 * Reclaim unused slab cache if memory is low.
+	 * Reclaim unused slab cache memory.
 	 */
 	kmem_cache_reap(gfp_mask);

@@ -1080,7 +1078,7 @@
 		static long recalc = 0;

 		/* If needed, try to free some memory. */
-		if (inactive_shortage() || free_shortage())
+		if (free_shortage())
 			do_try_to_free_pages(GFP_KSWAPD, 0);

 		/* Once a second ... */
@@ -1090,8 +1088,6 @@
 			/* Do background page aging. */
 			refill_inactive_scan(DEF_PRIORITY);
 		}
-
-		run_task_queue(&tq_disk);

 		/*
 		 * We go to sleep if either the free page shortage


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH]  Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 19:32     ` [PATCH] " Rik van Riel
@ 2001-09-25 20:42       ` Jasper Spaans
  2001-09-25 20:52         ` Rik van Riel
  2001-09-26  7:36       ` Pau Aliagas
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: Jasper Spaans @ 2001-09-25 20:42 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Pau Aliagas, lkml, Alan Cox, Roberto Orenstein, Francois Romieu

On Tue, Sep 25, 2001 at 04:32:13PM -0300, Rik van Riel wrote:

> > The problem seems to be related to pages not moved to swap but
> > being discarded somehow and reread later on.... just a guess.
> 
> I've made a small patch to 2.4.9-ac15 which should make
> page_launder() smoother, make some (very minor) tweaks
> to page aging and updates various comments in vmscan.c
> 
> It's below this email and at:
[snip]

With this -painfully applied- patch, all seems a lot better. However, it
still seems more sluggish than 2.4.9-ac12.

Haven't bothered testing -ac13 and -ac14.

Swapping still seems to be a problem though, as this kernel happily claims
several tens of MBs of swap, and then, out of the blue, starts writing pages
out in huge blocks.

The vmstat info just disappeared from my screen while I was stressing the
VM by doing a large parallel build; however, it showed no paging happening
for long stretches (20-30 secs) and then suddenly 30 MB written out in
one go.

Doesn't seem very smooth to me, where can I get my refund? :)

(if you'd like more details, please ask, I can do some testing)

Regards,
-- 
  Q_.           Jasper Spaans <jasper@spaans.ds9a.nl>
 `~\            http://jsp.ds9a.nl/
Mr /\           Tel/Fax: +31-84-8749842
Zap             Move '.sig' for great justice!


* Re: [PATCH]  Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 20:42       ` Jasper Spaans
@ 2001-09-25 20:52         ` Rik van Riel
  0 siblings, 0 replies; 15+ messages in thread
From: Rik van Riel @ 2001-09-25 20:52 UTC (permalink / raw)
  To: Jasper Spaans
  Cc: Pau Aliagas, lkml, Alan Cox, Roberto Orenstein, Francois Romieu

On Tue, 25 Sep 2001, Jasper Spaans wrote:

> (if you'd like more details, please ask, I can do some testing)

Some 'vmstat -a' output would be helpful, as well as a
screen of top.

Rik
--
IA64: a worthy successor to the i860.

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/



* Re: [PATCH]  Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 19:32     ` [PATCH] " Rik van Riel
  2001-09-25 20:42       ` Jasper Spaans
@ 2001-09-26  7:36       ` Pau Aliagas
  2001-09-26 13:54       ` [PATCH] " Roberto Orenstein
  2001-09-26 18:01       ` [PATCH] " Kent Borg
  3 siblings, 0 replies; 15+ messages in thread
From: Pau Aliagas @ 2001-09-26  7:36 UTC (permalink / raw)
  To: Rik van Riel; +Cc: lkml, Alan Cox, Roberto Orenstein, Francois Romieu

On Tue, 25 Sep 2001, Rik van Riel wrote:

> On Tue, 25 Sep 2001, Pau Aliagas wrote:
>
> I've made a small patch to 2.4.9-ac15 which should make
> page_launder() smoother, make some (very minor) tweaks
> to page aging and updates various comments in vmscan.c

Ok, things improve vastly and the system is usable again. I cannot
"notice" any difference with ac10 in terms of speed.

I attach the top and vmstat outputs while running updatedb (reads all
the files in the filesystems) and starting evolution for the first time
(touches all the mail folders, that's hundreds of files). It starts
smoothly in seconds.

  9:30am  up 52 min,  5 users,  load average: 2,83, 2,35, 1,87
125 processes: 121 sleeping, 4 running, 0 zombie, 0 stopped
CPU states: 16,2% user,  7,9% system, 75,8% nice,  0,0% idle
Mem:   127072K av,  123336K used,    3736K free,    2360K shrd,    7988K buff
Swap:  401536K av,   54092K used,  347444K free                   33952K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
 1008 setiatho  20  19 15348  14M   708 R N  76,2 11,4  38:28 setiathome
 1045 root       9   0 29740  12M  4076 S     7,3  9,6   9:05 X
 2238 pau        9   0  8876 8876  6764 D     4,1  6,9   0:00 evolution-mail
 1278 pau        9   0  4276 3776  3444 S     2,5  2,9   0:45 deskguide_apple
 2192 pau        9   0 16272  15M  6064 S     1,9 12,8   0:01 evolution
 1176 pau        9   0  1784 1456  1352 R     1,5  1,1   0:01 xscreensaver
    4 root       9   0     0    0     0 SW    0,9  0,0   0:02 kswapd
 2189 pau       13   0  1096 1096   840 R     0,9  0,8   0:00 top
 2258 pau        9   0  6024 6024  4632 S     0,9  4,7   0:00 evolution-pine-
 2198 pau        9   0  5616 5616  4360 D     0,7  4,4   0:00 wombat
 1127 pau        9   0  3424 3208  2032 S     0,3  2,5   0:03 sawfish
 1188 pau        9   0  5428 4972  4172 S     0,3  3,9   0:02 panel
 2265 pau        9   0  5372 5372  4216 S     0,3  4,2   0:00 evolution-gnome
  944 xfs        9   0  4748 2160  1768 D     0,1  1,6   0:01 xfs
 1198 pau        9   0  5232 3992  3472 S     0,1  3,1   0:02 gnome-terminal
 1944 root      19  19   716  676   496 R N   0,1  0,5   0:01 updatedb
 2093 pau        9   0   524  496   444 S     0,1  0,3   0:00 vmstat

   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 4  1  0  57552   3620  14280  42892   9  18    78    28  124   881  95   3   2
 1 11  0  54756   8220  14452  40896 386 115   614   163  244   560  96   4   0
 3  0  0  54660   7208  14720  41432 234   0   394     0  180   358  97   3   0
 5  0  0  54660   4952  15196  42508  74   0   381     0  162   295  98   2   0
 3  2  0  53856   5680  13244  42476 197   2   428    14  205   443  95   5   0
 2  4  0  53856   3668  13668  43212 170   0   395     6  180   399  97   3   0
 1  3  0  54004   5852  10324  42724 114   0   368     1  176   841  93   6   1
 4  0  0  54000   2800  10640  41908  19   0   314     2  151   764  95   5   0
 3  0  0  54024   6788   7568  39048   1   0   350     6  172   497  95   5   0
 1  2  0  54024   4656   7780  40192  78   0   347    11  163   524  96   4   0
 4  0  0  53972   2936   7944  38844 230   0   420     4  193  1370  94   6   0
 3  0  0  53972  12944   8420  39136  63   0   285    11  198  1262  96   4   0
 4  0  0  53972  11816   8836  39644  19   0   242    14  164   426  97   3   0
 4  0  0  53972  10532   9216  40252  31   0   222    17  215   834  96   4   0
 3  1  0  53972   8628   9504  40944  42   0   235     4  212   514  96   4   0
 2  1  0  53972   4040   9732  41636   0   0   173     2  210   798  95   5   0
 4  2  0  54160   3096   7188  40800  32   0   207    32  134  1502  87  13   0
 3  0  0  54332   3732   6412  35452  10   0   242    16  178   773  96   4   0
 4  0  0  54332   2800   7060  33908   0   0   232    16  204   506  96   4   0
 4  0  0  54072   4616   7096  34748   0 572   211   679  207   436  97   3   0
 3  2  0  54024   2800   7280  35364   0  12   244    18  216   726  97   3   0
 7  0  0  54020   3104   7644  34116  41   0   229     0  175   954  93   7   0
 3  1  0  54000   3744   7888  33852  62   0   374     0  193   722  97   3   0
 2  4  0  54092   4952   7988  33856   8   0   118   180  170  2934  91   9   0
 4  1  0  54036   2992   8024  34312  38   0   102   444  237  1684  92   8   0
 2  0  0  54036   3580   8216  35092   0   0   208     0  173   356  97   3   0
 3  0  0  55016   3820   8444  34912   3  67   293   136  165   279  90   2   8
 3  0  0  55016   2800   8724  34820   0   0   202    15  177   385  98   2   0
 4  0  0  55016   2948   9008  34704   0   0   258    14  175   372  98   2   0


Pau



* Re: [PATCH]  2.4.9-ac15 painfully sluggish
  2001-09-25 19:32     ` [PATCH] " Rik van Riel
  2001-09-25 20:42       ` Jasper Spaans
  2001-09-26  7:36       ` Pau Aliagas
@ 2001-09-26 13:54       ` Roberto Orenstein
  2001-09-26 18:01       ` [PATCH] " Kent Borg
  3 siblings, 0 replies; 15+ messages in thread
From: Roberto Orenstein @ 2001-09-26 13:54 UTC (permalink / raw)
  To: linux-kernel

Rik van Riel wrote:

> On Tue, 25 Sep 2001, Pau Aliagas wrote:
> 
> 
>> The problem seems to be related to pages not moved to swap but
>> being discarded somehow and reread later on.... just a guess.
> 
> 
> I've made a small patch to 2.4.9-ac15 which should make
> page_launder() smoother, make some (very minor) tweaks
> to page aging and updates various comments in vmscan.c
> 
> It's below this email and at:
> 
> http://www.surriel.com/patches/2.4/2.4.9-ac15-age+launder
> 
> Since I failed to break 2.4.9-ac15 with this patch following
> the instructions given to me by others, it would be nice to
> know if the thing can still break on your machines.
> 
> Please test,
> 

Test done, and it seems just fine.
The problem vanished away and didn't trigger anymore on my machine.

thanx

Roberto



* Re: [PATCH]  Re: 2.4.9-ac15 painfully sluggish
  2001-09-25 19:32     ` [PATCH] " Rik van Riel
                         ` (2 preceding siblings ...)
  2001-09-26 13:54       ` [PATCH] " Roberto Orenstein
@ 2001-09-26 18:01       ` Kent Borg
  3 siblings, 0 replies; 15+ messages in thread
From: Kent Borg @ 2001-09-26 18:01 UTC (permalink / raw)
  To: Rik van Riel; +Cc: lkml

On Tue, Sep 25, 2001 at 04:32:13PM -0300, Rik van Riel wrote:
> I've made a small patch to 2.4.9-ac15 which should make
> page_launder() smoother, make some (very minor) tweaks
> to page aging and updates various comments in vmscan.c

Works for me.  I have the preemption patch turned on.

On a 192 MB PIII laptop running at 500 MHz I have two X sessions
running, and have opened a zillion application windows on each,
including over a dozen Netscape windows, some Mozilla's, a couple
emacs's, a kernel compile, Staroffice.  About every app I can
immediately think of.  I haven't tried malloc-ing a ton of memory, but
this contrived real-world test works.

Swap is at 238 MB.  Lower than I would have expected, but that doesn't
mean I know anything.

Switching between X sessions just took nearly 10 seconds going, and only
about 5 seconds coming back.  Switching in Staroffice is painful, but
generally the responsiveness feels quite nice.

I like it.  Thanks,

-kb, the Kent who will be staying with 2.4.9-ac15, plus preemption
patch, plus this patch--once he figures out how to close all those
windows.


* Re: [PATCH]  Re: 2.4.9-ac15 painfully sluggish
@ 2001-09-27  4:27 Thomas Hood
  2001-09-27 12:01 ` Alan Cox
  0 siblings, 1 reply; 15+ messages in thread
From: Thomas Hood @ 2001-09-27  4:27 UTC (permalink / raw)
  To: linux-kernel

I know next to nothing about these VM issues, but here's
another data point:

2.4.9-ac15 was painfully sluggish on my ThinkPad 600,
Pentium II, 120 MB RAM system just running galeon
or compiling a kernel.  The disk would begin thrashing
and continue doing so for many minutes.  Swap usage was
reported as zero the whole time.

I applied the "2.4.9-ac15-age+launder" patch and things
improved dramatically.

--

Thomas


* Re: [PATCH]  Re: 2.4.9-ac15 painfully sluggish
  2001-09-27  4:27 Thomas Hood
@ 2001-09-27 12:01 ` Alan Cox
  0 siblings, 0 replies; 15+ messages in thread
From: Alan Cox @ 2001-09-27 12:01 UTC (permalink / raw)
  To: Thomas Hood; +Cc: linux-kernel

> or compiling a kernel.  The disk would begin thrashing
> and continue doing so for many minutes.  Swap usage was
> reported as zero the whole time.

ac15 had a merge error.


end of thread

Thread overview: 15+ messages
2001-09-25 15:36 2.4.9-ac15 painfully sluggish Pau Aliagas
2001-09-25 16:16 ` Rik van Riel
2001-09-25 16:25   ` Pau Aliagas
2001-09-25 16:37     ` Rik van Riel
2001-09-25 17:04       ` Pau Aliagas
2001-09-25 17:31         ` Rik van Riel
2001-09-25 18:01           ` Pau Aliagas
2001-09-25 19:32     ` [PATCH] " Rik van Riel
2001-09-25 20:42       ` Jasper Spaans
2001-09-25 20:52         ` Rik van Riel
2001-09-26  7:36       ` Pau Aliagas
2001-09-26 13:54       ` [PATCH] " Roberto Orenstein
2001-09-26 18:01       ` [PATCH] " Kent Borg
  -- strict thread matches above, loose matches on Subject: below --
2001-09-27  4:27 Thomas Hood
2001-09-27 12:01 ` Alan Cox
