public inbox for linux-kernel@vger.kernel.org
* [lart] /bin/ps output
@ 2002-10-12  3:36 Dave Hansen
  2002-10-12  3:38 ` David S. Miller
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Dave Hansen @ 2002-10-12  3:36 UTC (permalink / raw)
  To: lkml

Man, this looks ugly.  I'm just waiting for Bill Irwin, or Anton to 
trump me, though.

   PID TTY      STAT   TIME COMMAND
     1 ?        S      0:10 init
     2 ?        SW     0:00 [migration/0]
     3 ?        SWN    0:00 [ksoftirqd/0]
     4 ?        SW     0:00 [migration/1]
     5 ?        SWN    0:00 [ksoftirqd/1]
     6 ?        SW     0:00 [migration/2]
     7 ?        RWN    0:00 [ksoftirqd/2]
     8 ?        SW     0:00 [migration/3]
     9 ?        SWN    0:00 [ksoftirqd/3]
    10 ?        SW     0:00 [migration/4]
    11 ?        SWN    0:00 [ksoftirqd/4]
    12 ?        SW     0:00 [migration/5]
    13 ?        SWN    0:00 [ksoftirqd/5]
    14 ?        SW     0:00 [migration/6]
    15 ?        RWN    0:00 [ksoftirqd/6]
    16 ?        SW     0:00 [migration/7]
    17 ?        SWN    0:00 [ksoftirqd/7]
    18 ?        SW     0:00 [events/0]
    19 ?        SW     0:00 [events/1]
    20 ?        SW     0:00 [events/2]
    21 ?        SW     0:00 [events/3]
    22 ?        SW     0:00 [events/4]
    23 ?        SW     0:00 [events/5]
    24 ?        SW     0:00 [events/6]
    25 ?        SW     0:00 [events/7]
    26 ?        SW     0:00 [kswapd0]
    28 ?        SW     0:00 [pdflush]
    27 ?        SW     0:00 [pdflush]
    29 ?        SW     0:00 [aio/0]
    30 ?        SW     0:00 [aio/1]
    31 ?        SW     0:00 [aio/2]
    32 ?        SW     0:00 [aio/3]
    33 ?        SW     0:00 [aio/4]
    34 ?        SW     0:00 [aio/5]
    35 ?        SW     0:00 [aio/6]
    36 ?        SW     0:00 [aio/7]
    37 ?        SW     0:00 [scsi_eh_0]
    38 ?        SW     0:00 [scsi_eh_1]
    39 ?        SW     0:00 [scsi_eh_2]
    40 ?        SW     0:00 [kseriod]
    41 ?        DW     0:00 [kjournald]
   115 ?        SW     0:00 [kjournald]
   116 ?        DW     0:00 [kjournald]
   117 ?        SW     0:00 [kjournald]

-- 
Dave Hansen
haveblue@us.ibm.com


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [lart] /bin/ps output
  2002-10-12  3:36 [lart] /bin/ps output Dave Hansen
@ 2002-10-12  3:38 ` David S. Miller
  2002-10-12  3:55   ` Rik van Riel
  2002-10-12  3:43 ` William Lee Irwin III
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: David S. Miller @ 2002-10-12  3:38 UTC (permalink / raw)
  To: haveblue; +Cc: linux-kernel

   From: Dave Hansen <haveblue@us.ibm.com>
   Date: Fri, 11 Oct 2002 20:36:22 -0700

   Man, this looks ugly.  I'm just waiting for Bill Irwin, or Anton to 
   trump me, though.
   
We could make them threads of process 0 :-)


* Re: [lart] /bin/ps output
  2002-10-12  3:36 [lart] /bin/ps output Dave Hansen
  2002-10-12  3:38 ` David S. Miller
@ 2002-10-12  3:43 ` William Lee Irwin III
  2002-10-12  3:51 ` Anton Blanchard
  2002-11-16  9:24 ` William Lee Irwin III
  3 siblings, 0 replies; 19+ messages in thread
From: William Lee Irwin III @ 2002-10-12  3:43 UTC (permalink / raw)
  To: Dave Hansen; +Cc: lkml

On Fri, Oct 11, 2002 at 08:36:22PM -0700, Dave Hansen wrote:
> Man, this looks ugly.  I'm just waiting for Bill Irwin, or Anton to 
> trump me, though.

UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Oct09 ?        00:00:08 init
root         2     1  0 Oct09 ?        00:00:00 [migration_CPU0]
root         3     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU0]
root         4     1  0 Oct09 ?        00:00:00 [migration_CPU1]
root         5     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU1]
root         6     1  0 Oct09 ?        00:00:00 [migration_CPU2]
root         7     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU2]
root         8     1  0 Oct09 ?        00:00:00 [migration_CPU3]
root         9     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU3]
root        10     1  0 Oct09 ?        00:00:00 [migration_CPU4]
root        11     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU4]
root        12     1  0 Oct09 ?        00:00:00 [migration_CPU5]
root        13     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU5]
root        14     1  0 Oct09 ?        00:00:00 [migration_CPU6]
root        15     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU6]
root        16     1  0 Oct09 ?        00:00:00 [migration_CPU7]
root        17     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU7]
root        18     1  0 Oct09 ?        00:00:00 [migration_CPU8]
root        19     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU8]
root        20     1  0 Oct09 ?        00:00:00 [migration_CPU9]
root        21     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU9]
root        22     1  0 Oct09 ?        00:00:00 [migration_CPU10]
root        23     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU10]
root        24     1  0 Oct09 ?        00:00:00 [migration_CPU11]
root        25     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU11]
root        26     1  0 Oct09 ?        00:00:00 [migration_CPU12]
root        27     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU12]
root        28     1  0 Oct09 ?        00:00:00 [migration_CPU13]
root        29     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU13]
root        30     1  0 Oct09 ?        00:00:00 [migration_CPU14]
root        31     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU14]
root        32     1  0 Oct09 ?        00:00:00 [migration_CPU15]
root        33     1  0 Oct09 ?        00:00:00 [ksoftirqd_CPU15]
root        34     1  0 Oct09 ?        00:00:00 [events/0]
root        35     1  0 Oct09 ?        00:00:00 [events/1]
root        36     1  0 Oct09 ?        00:00:00 [events/2]
root        37     1  0 Oct09 ?        00:00:00 [events/3]
root        38     1  0 Oct09 ?        00:00:00 [events/4]
root        39     1  0 Oct09 ?        00:00:00 [events/5]
root        40     1  0 Oct09 ?        00:00:00 [events/6]
root        41     1  0 Oct09 ?        00:00:00 [events/7]
root        42     1  0 Oct09 ?        00:00:00 [events/8]
root        43     1  0 Oct09 ?        00:00:00 [events/9]
root        44     1  0 Oct09 ?        00:00:00 [events/10]
root        45     1  0 Oct09 ?        00:00:00 [events/11]
root        46     1  0 Oct09 ?        00:00:00 [events/12]
root        47     1  0 Oct09 ?        00:00:00 [events/13]
root        48     1  0 Oct09 ?        00:00:00 [events/14]
root        49     1  0 Oct09 ?        00:00:00 [events/15]
root        71     1  0 Oct09 ?        00:00:00 [pdflush]
root        70     1  0 Oct09 ?        00:02:24 [pdflush]
root        69     1  0 Oct09 ?        00:00:00 [kswapd0]
root        68     1  0 Oct09 ?        00:00:00 [kswapd1]
root        67     1  0 Oct09 ?        00:00:00 [kswapd2]
root        66     1  0 Oct09 ?        00:00:00 [kswapd3]
root        72     1  0 Oct09 ?        00:00:00 [aio/0]
root        73     1  0 Oct09 ?        00:00:00 [aio/1]
root        74     1  0 Oct09 ?        00:00:00 [aio/2]
root        75     1  0 Oct09 ?        00:00:00 [aio/3]
root        76     1  0 Oct09 ?        00:00:00 [aio/4]
root        77     1  0 Oct09 ?        00:00:00 [aio/5]
root        78     1  0 Oct09 ?        00:00:00 [aio/6]
root        79     1  0 Oct09 ?        00:00:00 [aio/7]
root        80     1  0 Oct09 ?        00:00:00 [aio/8]
root        81     1  0 Oct09 ?        00:00:00 [aio/9]
root        82     1  0 Oct09 ?        00:00:00 [aio/10]
root        83     1  0 Oct09 ?        00:00:00 [aio/11]
root        84     1  0 Oct09 ?        00:00:00 [aio/12]
root        85     1  0 Oct09 ?        00:00:00 [aio/13]
root        86     1  0 Oct09 ?        00:00:00 [aio/14]
root        87     1  0 Oct09 ?        00:00:00 [aio/15]
root        88     1  0 Oct09 ?        00:00:00 [scsi_eh_0]
root       264     1  0 Oct09 ?        00:00:00 /sbin/syslogd
root       267     1  0 Oct09 ?        00:00:01 /sbin/klogd
root       274     1  0 Oct09 ?        00:00:00 /usr/sbin/inetd
root       279     1  0 Oct09 ?        00:00:00 /usr/sbin/netserver
root       285     1  0 Oct09 ?        00:00:00 /usr/sbin/sshd
root       288     1  0 Oct09 ?        00:00:07 /usr/sbin/ntpd
root       291     1  0 Oct09 tty1     00:00:00 /sbin/getty 38400 tty1
root       292     1  0 Oct09 tty2     00:00:00 /sbin/getty 38400 tty2
root       293     1  0 Oct09 tty3     00:00:00 /sbin/getty 38400 tty3
root       294     1  0 Oct09 tty4     00:00:00 /sbin/getty 38400 tty4
root       295     1  0 Oct09 tty5     00:00:00 /sbin/getty 38400 tty5
root       296     1  0 Oct09 tty6     00:00:00 /sbin/getty 38400 tty6
root       297     1  0 Oct09 ttyS0    00:00:00 -bash
root       303     1  0 Oct09 ?        00:00:00 sshd -p 1976
root      9582   285  0 Oct10 ?        00:00:00 /usr/sbin/sshd
wli       9584  9582  0 Oct10 ?        00:00:01 /usr/sbin/sshd
wli       9585  9584  0 Oct10 pts/0    00:00:00 -zsh
root      9589   285  0 Oct10 ?        00:00:00 /usr/sbin/sshd
wli       9591  9589  0 Oct10 ?        00:00:00 /usr/sbin/sshd
wli       9592  9591  0 Oct10 pts/1    00:00:00 -zsh
root      5901   285  0 20:44 ?        00:00:00 /usr/sbin/sshd
wli       5903  5901  0 20:44 ?        00:00:00 /usr/sbin/sshd
wli       5904  5903  1 20:44 pts/2    00:00:00 -zsh
wli       5907  5904  0 20:44 pts/2    00:00:00 ps -fade
wli       5908  5904  0 20:44 pts/2    00:00:00 less


* Re: [lart] /bin/ps output
  2002-10-12  3:36 [lart] /bin/ps output Dave Hansen
  2002-10-12  3:38 ` David S. Miller
  2002-10-12  3:43 ` William Lee Irwin III
@ 2002-10-12  3:51 ` Anton Blanchard
  2002-10-12  3:59   ` William Lee Irwin III
  2002-11-16  9:24 ` William Lee Irwin III
  3 siblings, 1 reply; 19+ messages in thread
From: Anton Blanchard @ 2002-10-12  3:51 UTC (permalink / raw)
  To: Dave Hansen; +Cc: lkml


> Man, this looks ugly.  I'm just waiting for Bill Irwin, or Anton to 
> trump me, though.

I'd like to oblige but I hear the list has a 100kB limit :)

Anton


* Re: [lart] /bin/ps output
  2002-10-12  3:38 ` David S. Miller
@ 2002-10-12  3:55   ` Rik van Riel
  2002-10-12 11:49     ` Alan Cox
  0 siblings, 1 reply; 19+ messages in thread
From: Rik van Riel @ 2002-10-12  3:55 UTC (permalink / raw)
  To: David S. Miller; +Cc: haveblue, linux-kernel

On Fri, 11 Oct 2002, David S. Miller wrote:

	[ 8 gazillion kernel threads ]

> We could make them threads of process 0 :-)

That was my first thought too, but on second thought I think we've
got an excessive amount of kernel threads and should do something
about that...

regards,

Rik
-- 
Bravely reimplemented by the knights who say "NIH".
http://www.surriel.com/		http://distro.conectiva.com/
Current spamtrap:  october@surriel.com



* Re: [lart] /bin/ps output
  2002-10-12  3:51 ` Anton Blanchard
@ 2002-10-12  3:59   ` William Lee Irwin III
  2002-10-12  4:09     ` Anton Blanchard
  0 siblings, 1 reply; 19+ messages in thread
From: William Lee Irwin III @ 2002-10-12  3:59 UTC (permalink / raw)
  To: Anton Blanchard; +Cc: Dave Hansen, lkml

On Sat, Oct 12, 2002 at 01:51:41PM +1000, Anton Blanchard wrote:
> I'd like to oblige but I hear the list has a 100kB limit :)
> Anton

Bah! I'm at a competitive disadvantage because I've got a lesser
BITS_PER_LONG. No matter, NR_CPUS > BITS_PER_LONG shall be conquered
and the explosion of kernel threads will be quite visible (though
unfortunately probably post-freeze).



Cheers,
Bill


* Re: [lart] /bin/ps output
  2002-10-12  3:59   ` William Lee Irwin III
@ 2002-10-12  4:09     ` Anton Blanchard
  2002-10-12  6:53       ` David S. Miller
  2002-10-13 19:25       ` Andrew Morton
  0 siblings, 2 replies; 19+ messages in thread
From: Anton Blanchard @ 2002-10-12  4:09 UTC (permalink / raw)
  To: William Lee Irwin III, Dave Hansen, lkml


> Bah! I'm at a competitive disadvantage because I've got a lesser
> BITS_PER_LONG. No matter, NR_CPUS > BITS_PER_LONG shall be conquered
> and the explosion of kernel threads will be quite visible (though
> unfortunately probably post-freeze).

Speaking of which, the recent CONFIG_NR_CPUS addition shows just how
bloated all our [NR_CPU] structures are. We need to get serious about
using the per cpu data stuff. Going from 32 to 64 was over 500kB on my
ppc64 build.

Anton


* Re: [lart] /bin/ps output
  2002-10-12  4:09     ` Anton Blanchard
@ 2002-10-12  6:53       ` David S. Miller
  2002-10-12 20:15         ` Richard Henderson
  2002-10-13 19:25       ` Andrew Morton
  1 sibling, 1 reply; 19+ messages in thread
From: David S. Miller @ 2002-10-12  6:53 UTC (permalink / raw)
  To: anton; +Cc: wli, haveblue, linux-kernel

   From: Anton Blanchard <anton@samba.org>
   Date: Sat, 12 Oct 2002 14:09:59 +1000

   Speaking of which, the recent CONFIG_NR_CPUS addition shows just how
   bloated all our [NR_CPU] structures are. We need to get serious about
   using the per cpu data stuff. Going from 32 to 64 was over 500kB on my
   ppc64 build.

In fact, thinking about this some more, we should make the ".per_cpu"
bits emit a table entry instead of some dummy object which takes up
space.  The table entry would be in the special .per_cpu
section still but be just a size value.

We should do this on both SMP and non-SMP and it will shrink the
kernel image size in both cases.

I don't have the time to implement this so I'll shut up now :-)


* Re: [lart] /bin/ps output
  2002-10-12  3:55   ` Rik van Riel
@ 2002-10-12 11:49     ` Alan Cox
  0 siblings, 0 replies; 19+ messages in thread
From: Alan Cox @ 2002-10-12 11:49 UTC (permalink / raw)
  To: Rik van Riel; +Cc: David S. Miller, haveblue, Linux Kernel Mailing List

On Sat, 2002-10-12 at 04:55, Rik van Riel wrote:
> On Fri, 11 Oct 2002, David S. Miller wrote:
> 
> 	[ 8 gazillion kernel threads ]
> 
> > We could make them threads of process 0 :-)
> 
> That was my first thought too, but on second thought I think we've
> got an excessive amount of kernel threads and should do something
> about that...

Migration workqueues ?



* Re: [lart] /bin/ps output
  2002-10-12  6:53       ` David S. Miller
@ 2002-10-12 20:15         ` Richard Henderson
  2002-10-13  6:27           ` David S. Miller
  0 siblings, 1 reply; 19+ messages in thread
From: Richard Henderson @ 2002-10-12 20:15 UTC (permalink / raw)
  To: David S. Miller; +Cc: anton, wli, haveblue, linux-kernel

On Fri, Oct 11, 2002 at 11:53:29PM -0700, David S. Miller wrote:
> In fact, thinking about this some more, we should make the ".per_cpu"
> bits emit a table entry instead of some dummy object which takes up
> space.  The table entry would be in the special .per_cpu
> section still but be just a size value.

That's more complicated.  Using the linker to help out with
layout is definitely helpful.  If you want to omit the per-cpu
area from the kernel image, then arrange for it to be .bss.


r~


* Re: [lart] /bin/ps output
  2002-10-12 20:15         ` Richard Henderson
@ 2002-10-13  6:27           ` David S. Miller
  2002-10-13  6:42             ` Anton Blanchard
  2002-10-13 18:25             ` Richard Henderson
  0 siblings, 2 replies; 19+ messages in thread
From: David S. Miller @ 2002-10-13  6:27 UTC (permalink / raw)
  To: rth; +Cc: anton, wli, haveblue, linux-kernel

   From: Richard Henderson <rth@twiddle.net>
   Date: Sat, 12 Oct 2002 13:15:01 -0700

   On Fri, Oct 11, 2002 at 11:53:29PM -0700, David S. Miller wrote:
   > In fact, thinking about this some more, we should make the ".per_cpu"
   > bits emit a table entry instead of some dummy object which takes up
   > space.  The table entry would be in the special .per_cpu
   > section still but be just a size value.
   
   That's more complicated.

Hmm, we put arbitrary tables of information into separate ELF sections
already.  Consider the exception fixup mechanism, for example.  That's
the kind of thing I was thinking about.

Oh I see, you're saying that then getting at the things symbolically
will be painful.  Yes it needs more thought.

   If you want to omit the per-cpu area from the kernel image, then
   arrange for it to be .bss.
   
Good idea, well on SMP it can be marked throw-away (ie. __init_data).


* Re: [lart] /bin/ps output
  2002-10-13  6:27           ` David S. Miller
@ 2002-10-13  6:42             ` Anton Blanchard
  2002-10-13 18:25             ` Richard Henderson
  1 sibling, 0 replies; 19+ messages in thread
From: Anton Blanchard @ 2002-10-13  6:42 UTC (permalink / raw)
  To: David S. Miller; +Cc: rth, wli, haveblue, linux-kernel


> Good idea, well on SMP it can be marked throw-away (ie. __init_data).

We could also only create per cpu data areas when cpu_possible() is
true, instead of NR_CPUS worth. That might be a little dangerous however.

Anton


* Re: [lart] /bin/ps output
  2002-10-13  6:27           ` David S. Miller
  2002-10-13  6:42             ` Anton Blanchard
@ 2002-10-13 18:25             ` Richard Henderson
  1 sibling, 0 replies; 19+ messages in thread
From: Richard Henderson @ 2002-10-13 18:25 UTC (permalink / raw)
  To: David S. Miller; +Cc: anton, wli, haveblue, linux-kernel

On Sat, Oct 12, 2002 at 11:27:44PM -0700, David S. Miller wrote:
> Good idea, well on SMP it can be marked throw-away (ie. __init_data).

Or you could use it for cpu0.


r~


* Re: [lart] /bin/ps output
  2002-10-12  4:09     ` Anton Blanchard
  2002-10-12  6:53       ` David S. Miller
@ 2002-10-13 19:25       ` Andrew Morton
  2002-10-19 12:22         ` Anton Blanchard
  1 sibling, 1 reply; 19+ messages in thread
From: Andrew Morton @ 2002-10-13 19:25 UTC (permalink / raw)
  To: Anton Blanchard; +Cc: William Lee Irwin III, Dave Hansen, lkml

Anton Blanchard wrote:
> 
> > Bah! I'm at a competitive disadvantage because I've got a lesser
> > BITS_PER_LONG. No matter, NR_CPUS > BITS_PER_LONG shall be conquered
> > and the explosion of kernel threads will be quite visible (though
> > unfortunately probably post-freeze).
> 
> Speaking of which, the recent CONFIG_NR_CPUS addition shows just how
> bloated all our [NR_CPU] structures are. We need to get serious about
> using the per cpu data stuff. Going from 32 to 64 was over 500kB on my
> ppc64 build.
> 

Half of which is in timer.c.

mnm:/usr/src/25> size kernel/timer.o
   text    data     bss     dec     hex filename
   4960     100  167648  172708   2a2a4 kernel/timer.o

That's with NR_CPUS=32.  Show me yours.

Using the percpu stuff will not significantly reduce this.  Some
new data structure might be needed.


* Re: [lart] /bin/ps output
  2002-10-13 19:25       ` Andrew Morton
@ 2002-10-19 12:22         ` Anton Blanchard
  0 siblings, 0 replies; 19+ messages in thread
From: Anton Blanchard @ 2002-10-19 12:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: William Lee Irwin III, Dave Hansen, lkml


> Half of which is in timer.c.
> 
> mnm:/usr/src/25> size kernel/timer.o
>    text    data     bss     dec     hex filename
>    4960     100  167648  172708   2a2a4 kernel/timer.o
> 
> That's with NR_CPUS=32.  Show me yours.

It's not like me to miss show and tell.

CONFIG_NR_CPUS=32:
text	   data	    bss	    dec	    hex	filename
8488	 267544	  71392	 347424	  54d20	kernel/timer.o

CONFIG_NR_CPUS=64:
text	   data	    bss	    dec	    hex	filename
8488	 533784	 137568	 679840	  a5fa0	kernel/timer.o

Ouch.

Anton


* Re: [lart] /bin/ps output
  2002-10-12  3:36 [lart] /bin/ps output Dave Hansen
                   ` (2 preceding siblings ...)
  2002-10-12  3:51 ` Anton Blanchard
@ 2002-11-16  9:24 ` William Lee Irwin III
  2002-11-17  0:11   ` Alan Cox
  3 siblings, 1 reply; 19+ messages in thread
From: William Lee Irwin III @ 2002-11-16  9:24 UTC (permalink / raw)
  To: Dave Hansen; +Cc: lkml

On Fri, Oct 11, 2002 at 08:36:22PM -0700, Dave Hansen wrote:
> Man, this looks ugly.  I'm just waiting for Bill Irwin, or Anton to 
> trump me, though.

If I may rehash, since it is O(cpus), hence 1MB of ZONE_NORMAL stacks...

UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  4 18:56 ?        00:00:09 init
root         2     1  0 18:56 ?        00:00:00 [migration/0]
root         3     1  0 18:56 ?        00:00:00 [ksoftirqd/0]
root         4     1  0 18:56 ?        00:00:00 [migration/1]
root         5     1  0 18:56 ?        00:00:00 [ksoftirqd/1]
root         6     1  0 18:56 ?        00:00:00 [migration/2]
root         7     1  0 18:56 ?        00:00:00 [ksoftirqd/2]
root         8     1  0 18:56 ?        00:00:00 [migration/3]
root         9     1  0 18:56 ?        00:00:00 [ksoftirqd/3]
root        10     1  0 18:56 ?        00:00:00 [migration/4]
root        11     1  0 18:56 ?        00:00:00 [ksoftirqd/4]
root        12     1  0 18:56 ?        00:00:00 [migration/5]
root        13     1  0 18:56 ?        00:00:00 [ksoftirqd/5]
root        14     1  0 18:56 ?        00:00:00 [migration/6]
root        15     1  0 18:56 ?        00:00:00 [ksoftirqd/6]
root        16     1  0 18:56 ?        00:00:00 [migration/7]
root        17     1  0 18:56 ?        00:00:00 [ksoftirqd/7]
root        18     1  0 18:56 ?        00:00:00 [migration/8]
root        19     1  0 18:56 ?        00:00:00 [ksoftirqd/8]
root        20     1  0 18:56 ?        00:00:00 [migration/9]
root        21     1  0 18:56 ?        00:00:00 [ksoftirqd/9]
root        22     1  0 18:56 ?        00:00:00 [migration/10]
root        23     1  0 18:56 ?        00:00:00 [ksoftirqd/10]
root        24     1  0 18:56 ?        00:00:00 [migration/11]
root        25     1  0 18:56 ?        00:00:00 [ksoftirqd/11]
root        26     1  0 18:56 ?        00:00:00 [migration/12]
root        27     1  0 18:56 ?        00:00:00 [ksoftirqd/12]
root        28     1  0 18:56 ?        00:00:00 [migration/13]
root        29     1  0 18:56 ?        00:00:00 [ksoftirqd/13]
root        30     1  0 18:56 ?        00:00:00 [migration/14]
root        31     1  0 18:56 ?        00:00:00 [ksoftirqd/14]
root        32     1  0 18:56 ?        00:00:00 [migration/15]
root        33     1  0 18:56 ?        00:00:00 [ksoftirqd/15]
root        34     1  0 18:56 ?        00:00:00 [migration/16]
root        35     1  0 18:56 ?        00:00:00 [ksoftirqd/16]
root        36     1  0 18:56 ?        00:00:00 [migration/17]
root        37     1  0 18:56 ?        00:00:00 [ksoftirqd/17]
root        38     1  0 18:56 ?        00:00:00 [migration/18]
root        39     1  0 18:56 ?        00:00:00 [ksoftirqd/18]
root        40     1  0 18:56 ?        00:00:00 [migration/19]
root        41     1  0 18:56 ?        00:00:00 [ksoftirqd/19]
root        42     1  0 18:56 ?        00:00:00 [migration/20]
root        43     1  0 18:56 ?        00:00:00 [ksoftirqd/20]
root        44     1  0 18:56 ?        00:00:00 [migration/21]
root        45     1  0 18:56 ?        00:00:00 [ksoftirqd/21]
root        46     1  0 18:56 ?        00:00:00 [migration/22]
root        47     1  0 18:56 ?        00:00:00 [ksoftirqd/22]
root        48     1  0 18:56 ?        00:00:00 [migration/23]
root        49     1  0 18:56 ?        00:00:00 [ksoftirqd/23]
root        50     1  0 18:56 ?        00:00:00 [migration/24]
root        51     1  0 18:56 ?        00:00:00 [ksoftirqd/24]
root        52     1  0 18:56 ?        00:00:00 [migration/25]
root        53     1  0 18:56 ?        00:00:00 [ksoftirqd/25]
root        54     1  0 18:56 ?        00:00:00 [migration/26]
root        55     1  0 18:56 ?        00:00:00 [ksoftirqd/26]
root        56     1  0 18:56 ?        00:00:00 [migration/27]
root        57     1  0 18:56 ?        00:00:00 [ksoftirqd/27]
root        58     1  0 18:56 ?        00:00:00 [migration/28]
root        59     1  0 18:56 ?        00:00:00 [ksoftirqd/28]
root        60     1  0 18:56 ?        00:00:00 [migration/29]
root        61     1  0 18:56 ?        00:00:00 [ksoftirqd/29]
root        62     1  0 18:56 ?        00:00:00 [migration/30]
root        63     1  0 18:56 ?        00:00:00 [ksoftirqd/30]
root        64     1  0 18:56 ?        00:00:00 [migration/31]
root        65     1  0 18:56 ?        00:00:00 [ksoftirqd/31]
root        66     1  0 18:56 ?        00:00:00 [events/0]
root        67     1  0 18:56 ?        00:00:00 [events/1]
root        68     1  0 18:56 ?        00:00:00 [events/2]
root        69     1  0 18:56 ?        00:00:00 [events/3]
root        70     1  0 18:56 ?        00:00:00 [events/4]
root        71     1  0 18:56 ?        00:00:00 [events/5]
root        72     1  0 18:56 ?        00:00:00 [events/6]
root        73     1  0 18:56 ?        00:00:00 [events/7]
root        74     1  0 18:56 ?        00:00:00 [events/8]
root        75     1  0 18:56 ?        00:00:00 [events/9]
root        76     1  0 18:56 ?        00:00:00 [events/10]
root        77     1  0 18:56 ?        00:00:00 [events/11]
root        78     1  0 18:56 ?        00:00:00 [events/12]
root        79     1  0 18:56 ?        00:00:00 [events/13]
root        80     1  0 18:56 ?        00:00:00 [events/14]
root        81     1  0 18:56 ?        00:00:00 [events/15]
root        82     1  0 18:56 ?        00:00:00 [events/16]
root        83     1  0 18:56 ?        00:00:00 [events/17]
root        84     1  0 18:56 ?        00:00:00 [events/18]
root        85     1  0 18:56 ?        00:00:00 [events/19]
root        86     1  0 18:56 ?        00:00:00 [events/20]
root        87     1  0 18:56 ?        00:00:00 [events/21]
root        88     1  0 18:56 ?        00:00:00 [events/22]
root        89     1  0 18:56 ?        00:00:00 [events/23]
root        90     1  0 18:56 ?        00:00:00 [events/24]
root        91     1  0 18:56 ?        00:00:00 [events/25]
root        92     1  0 18:56 ?        00:00:00 [events/26]
root        93     1  0 18:56 ?        00:00:00 [events/27]
root        94     1  0 18:56 ?        00:00:00 [events/28]
root        95     1  0 18:56 ?        00:00:00 [events/29]
root        96     1  0 18:56 ?        00:00:00 [events/30]
root        97     1  0 18:56 ?        00:00:00 [events/31]
root       105     1  0 18:56 ?        00:00:00 [kswapd0]
root       104     1  0 18:56 ?        00:00:00 [kswapd1]
root       103     1  0 18:56 ?        00:00:00 [kswapd2]
root       102     1  0 18:56 ?        00:00:00 [kswapd3]
root       100     1  0 18:56 ?        00:00:00 [kswapd5]
root       101     1  0 18:56 ?        00:00:00 [kswapd4]
root        99     1  0 18:56 ?        00:00:00 [kswapd6]
root        98     1  0 18:56 ?        00:00:00 [kswapd7]
root       106     1  1 18:56 ?        00:00:03 [pdflush]
root       107     1  1 18:56 ?        00:00:03 [pdflush]
root       108     1  0 18:56 ?        00:00:00 [aio/0]
root       109     1  0 18:56 ?        00:00:00 [aio/1]
root       110     1  0 18:56 ?        00:00:00 [aio/2]
root       111     1  0 18:56 ?        00:00:00 [aio/3]
root       112     1  0 18:56 ?        00:00:00 [aio/4]
root       113     1  0 18:56 ?        00:00:00 [aio/5]
root       114     1  0 18:56 ?        00:00:00 [aio/6]
root       115     1  0 18:56 ?        00:00:00 [aio/7]
root       116     1  0 18:56 ?        00:00:00 [aio/8]
root       117     1  0 18:56 ?        00:00:00 [aio/9]
root       118     1  0 18:56 ?        00:00:00 [aio/10]
root       119     1  0 18:56 ?        00:00:00 [aio/11]
root       120     1  0 18:56 ?        00:00:00 [aio/12]
root       121     1  0 18:56 ?        00:00:00 [aio/13]
root       122     1  0 18:56 ?        00:00:00 [aio/14]
root       123     1  0 18:56 ?        00:00:00 [aio/15]
root       124     1  0 18:56 ?        00:00:00 [aio/16]
root       125     1  0 18:56 ?        00:00:00 [aio/17]
root       126     1  0 18:56 ?        00:00:00 [aio/18]
root       127     1  0 18:56 ?        00:00:00 [aio/19]
root       128     1  0 18:56 ?        00:00:00 [aio/20]
root       129     1  0 18:56 ?        00:00:00 [aio/21]
root       130     1  0 18:56 ?        00:00:00 [aio/22]
root       131     1  0 18:56 ?        00:00:00 [aio/23]
root       132     1  0 18:56 ?        00:00:00 [aio/24]
root       133     1  0 18:56 ?        00:00:00 [aio/25]
root       134     1  0 18:56 ?        00:00:00 [aio/26]
root       135     1  0 18:56 ?        00:00:00 [aio/27]
root       136     1  0 18:56 ?        00:00:00 [aio/28]
root       137     1  0 18:56 ?        00:00:00 [aio/29]
root       138     1  0 18:56 ?        00:00:00 [aio/30]
root       139     1  0 18:56 ?        00:00:00 [aio/31]
root       140     1  0 18:56 ?        00:00:00 [scsi_eh_0]
root       320     1  0 18:56 ?        00:00:00 /sbin/syslogd
root       323     1  0 18:56 ?        00:00:00 /sbin/klogd
root       330     1  0 18:56 ?        00:00:00 /usr/sbin/inetd
root       334     1  0 18:56 ?        00:00:00 /usr/sbin/netserver
root       340     1  0 18:56 ?        00:00:00 /usr/sbin/sshd
root       343     1  0 18:56 ?        00:00:00 /usr/sbin/ntpd
root       346     1  0 18:56 tty1     00:00:00 /sbin/getty 38400 tty1
root       347     1  0 18:56 tty2     00:00:00 /sbin/getty 38400 tty2
root       348     1  0 18:56 tty3     00:00:00 /sbin/getty 38400 tty3
root       349     1  0 18:56 tty4     00:00:00 /sbin/getty 38400 tty4
root       350     1  0 18:56 tty5     00:00:00 /sbin/getty 38400 tty5
root       351     1  0 18:56 tty6     00:00:00 /sbin/getty 38400 tty6
root       352     1  0 18:56 ttyS0    00:00:00 -bash
root       354   340  0 18:57 ?        00:00:00 /usr/sbin/sshd
wli        356   354  0 18:57 ?        00:00:00 /usr/sbin/sshd
wli        357   356  0 18:57 pts/0    00:00:00 -zsh
root       360   340  0 18:57 ?        00:00:00 /usr/sbin/sshd
wli        362   360  0 18:57 ?        00:00:00 /usr/sbin/sshd
wli        363   362  0 18:57 pts/1    00:00:00 -zsh
root       366   340  0 18:57 ?        00:00:00 /usr/sbin/sshd
wli        368   366  0 18:57 ?        00:00:00 /usr/sbin/sshd
wli        369   368  0 18:57 pts/2    00:00:00 -zsh
wli       4435   363  0 18:59 pts/1    00:00:00 ps -fade
wli       4436   363  0 18:59 pts/1    00:00:00 less


Hmm, is there any hope left for state machines?


Bill

P.S.:	I'd also love to see make -j64 bzImage take 26s like on 16x
	instead of 48s as it appears to be doing on 32x. Doubling
	num_cpus_online() shouldn't double kernel compile time.


* Re: [lart] /bin/ps output
  2002-11-17  0:11   ` Alan Cox
@ 2002-11-16 23:52     ` Chris Wedgwood
  2002-11-17  0:11     ` William Lee Irwin III
  1 sibling, 0 replies; 19+ messages in thread
From: Chris Wedgwood @ 2002-11-16 23:52 UTC (permalink / raw)
  To: Alan Cox; +Cc: William Lee Irwin III, Dave Hansen, lkml

On Sun, Nov 17, 2002 at 12:11:35AM +0000, Alan Cox wrote:

> Bill - so what happens if you trim down the aio, event and ksoftirqd
> threads to a sane size (you might also want to do something about
> the fact 2.5 still runs ksoftirq too easily). Intuitively I'd go for
> a square root of the number of processors + 1 sort of function but
> what do the benchmarks say ?

IMO having various threads per-CPU is getting silly for (say) 4+
CPUs.  Even for two CPUs it means quite a good number of kernel
threads.

Does anyone really know for certain that this is necessary versus
having few per-CPU threads calling into state-machine functions?



  --cw


* Re: [lart] /bin/ps output
  2002-11-17  0:11   ` Alan Cox
  2002-11-16 23:52     ` Chris Wedgwood
@ 2002-11-17  0:11     ` William Lee Irwin III
  1 sibling, 0 replies; 19+ messages in thread
From: William Lee Irwin III @ 2002-11-17  0:11 UTC (permalink / raw)
  To: Alan Cox; +Cc: Dave Hansen, lkml

On Sun, Nov 17, 2002 at 12:11:35AM +0000, Alan Cox wrote:
> Bill - so what happens if you trim down the aio, event and ksoftirqd
> threads to a sane size (you might also want to do something about the
> fact 2.5 still runs ksoftirq too easily). Intuitively I'd go for a
> square root of the number of processors + 1 sort of function but what do
> the benchmarks say ?


Both reorganizing the per-cpu thread pools as state machines and
inserting new locking look like work-intensive projects...

It's not become explosively bad yet (1MB of overhead is eyebrow-raising
but not particularly damaging) so there's no rush to trim this down,
but I'm at least thinking about doing this later. One of the major
obstacles for the state machine approach is that the migration threads
run at RT priority while the rest do not, and of course the greater
than per-cpu granularity approach suffers from additional locking.


Bill


* Re: [lart] /bin/ps output
  2002-11-16  9:24 ` William Lee Irwin III
@ 2002-11-17  0:11   ` Alan Cox
  2002-11-16 23:52     ` Chris Wedgwood
  2002-11-17  0:11     ` William Lee Irwin III
  0 siblings, 2 replies; 19+ messages in thread
From: Alan Cox @ 2002-11-17  0:11 UTC (permalink / raw)
  To: William Lee Irwin III; +Cc: Dave Hansen, lkml

Bill - so what happens if you trim down the aio, event and ksoftirqd
threads to a sane size (you might also want to do something about the
fact 2.5 still runs ksoftirq too easily). Intuitively I'd go for a
square root of the number of processors + 1 sort of function but what do
the benchmarks say ?





Thread overview: 19+ messages
2002-10-12  3:36 [lart] /bin/ps output Dave Hansen
2002-10-12  3:38 ` David S. Miller
2002-10-12  3:55   ` Rik van Riel
2002-10-12 11:49     ` Alan Cox
2002-10-12  3:43 ` William Lee Irwin III
2002-10-12  3:51 ` Anton Blanchard
2002-10-12  3:59   ` William Lee Irwin III
2002-10-12  4:09     ` Anton Blanchard
2002-10-12  6:53       ` David S. Miller
2002-10-12 20:15         ` Richard Henderson
2002-10-13  6:27           ` David S. Miller
2002-10-13  6:42             ` Anton Blanchard
2002-10-13 18:25             ` Richard Henderson
2002-10-13 19:25       ` Andrew Morton
2002-10-19 12:22         ` Anton Blanchard
2002-11-16  9:24 ` William Lee Irwin III
2002-11-17  0:11   ` Alan Cox
2002-11-16 23:52     ` Chris Wedgwood
2002-11-17  0:11     ` William Lee Irwin III
