public inbox for linux-kernel@vger.kernel.org
From: "Vincent Sweeney" <v.sweeney@barrysworld.com>
To: "Alan Cox" <alan@lxorguk.ukuu.org.uk>
Cc: <linux-kernel@vger.kernel.org>
Subject: Re: PROBLEM: high system usage / poor SMP network performance
Date: Mon, 28 Jan 2002 19:34:35 -0000	[thread overview]
Message-ID: <002801c1a832$d38933e0$0201010a@frodo> (raw)
In-Reply-To: <E16UyCO-0002zE-00@the-village.bc.nu>

----- Original Message -----
From: "Alan Cox" <alan@lxorguk.ukuu.org.uk>
To: "Vincent Sweeney" <v.sweeney@barrysworld.com>
Cc: <linux-kernel@vger.kernel.org>
Sent: Sunday, January 27, 2002 10:54 PM
Subject: Re: PROBLEM: high system usage / poor SMP network performance


> >     CPU0 states: 27.2% user, 62.4% system,  0.0% nice,  9.2% idle
> >     CPU1 states: 28.4% user, 62.3% system,  0.0% nice,  8.1% idle
>
> The important bit here is     ^^^^^^^^ that one. Something is causing
> horrendous lock contention it appears. Is the e100 driver optimised for
> SMP yet ? Do you get better numbers if you use the eepro100 driver ?


I've switched a server over to the default eepro100 driver as supplied in
2.4.17 (compiled as a module). This is tonight's snapshot, with about 10%
more users than above (2200 connections per ircd):

  7:25pm  up  5:44,  2 users,  load average: 0.85, 1.01, 1.09
38 processes: 33 sleeping, 5 running, 0 zombie, 0 stopped
CPU0 states: 27.3% user, 69.3% system,  0.0% nice,  2.2% idle
CPU1 states: 26.1% user, 71.2% system,  0.0% nice,  2.0% idle
Mem:   385096K av,  232960K used,  152136K free,       0K shrd,    4724K buff
Swap:  379416K av,       0K used,  379416K free                   21780K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
  659 ircd      15   0 74976  73M   660 R    96.7 19.4 263:21 ircd
  666 ircd      14   0 75004  73M   656 R    95.5 19.4 253:10 ircd

So as you can see the numbers are almost the same, though at lower user
counts eepro100 was actually worse than e100 (~45% system per CPU at 1000
users per ircd with eepro100, ~30% with e100).

I will try the profiling tomorrow with the eepro100 driver compiled into the
kernel. I was unable to do the same for the Intel e100 driver today, as I
discovered that the Intel driver can currently only be compiled as a module.
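
For reference, the profiling I have in mind is the standard 2.4-era
readprofile approach. A rough sketch follows; the System.map path and the
profile shift value are examples and may differ on your setup:

```shell
# Sketch: kernel profiling on a 2.4 kernel with readprofile(1) from
# util-linux. Assumes a System.map that matches the running kernel.
#
# 1. Boot with the in-kernel profiler enabled by adding this to the
#    kernel command line (e.g. in lilo.conf):
#      profile=2      # shift value: each bucket covers 2^2 = 4 bytes of text
#
# 2. Reset the profiling counters just before the measurement window:
readprofile -r

# 3. After letting the ircd load run for a while, dump the kernel
#    functions with the most ticks (lock contention should show up here):
readprofile -m /boot/System.map | sort -nr | head -20
```

With heavy lock contention one would expect spinlock helpers and the
network stack to dominate the output.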

Vince.




Thread overview: 15+ messages
2002-01-27 22:23 PROBLEM: high system usage / poor SMP network performance Vincent Sweeney
2002-01-27 22:42 ` Andrew Morton
2002-01-27 22:54 ` Alan Cox
2002-01-27 22:52   ` arjan
2002-01-27 23:08   ` Vincent Sweeney
2002-01-28 19:34   ` Vincent Sweeney [this message]
2002-01-28 19:40     ` Rik van Riel
2002-01-29 16:32       ` Vincent Sweeney
  -- strict thread matches above, loose matches on Subject: below --
2002-01-29 18:00 Dan Kegel
2002-01-29 20:09 ` Vincent Sweeney
2002-01-31  5:24   ` Dan Kegel
     [not found]     ` <001d01c1aa8e$2e067e60$0201010a@frodo>
2002-02-03  8:03       ` Dan Kegel
2002-02-03  8:36         ` Andrew Morton
2002-02-12 18:48           ` Vincent Sweeney
2002-02-03 19:22         ` Kev
