public inbox for linux-kernel@vger.kernel.org
From: "J.A. Magallon" <jamagallon@able.es>
To: Steinar Hauan <hauan@cmu.edu>
Cc: linux-kernel@vger.kernel.org
Subject: Re: smp cputime issues (patch request ?)
Date: Thu, 3 Jan 2002 02:27:05 +0100	[thread overview]
Message-ID: <20020103022705.A3163@werewolf.able.es> (raw)
In-Reply-To: <Pine.GSO.4.33L-022.0201011959360.7513-300000@unix13.andrew.cmu.edu>; from hauan@cmu.edu on Wed, Jan 02, 2002 at 02:00:43 +0100


On 20020102 Steinar Hauan wrote:
>hello,
>
>  we are encountering some weird timing behaviour on our linux cluster.
>
>  specifically: when running 2 copies of selected programs on a
>  dual-cpu system, the cputime reported for each process is up to 25%
>  higher than when the processes are run on their own. however, if running
>  two different jobs on the same machine, both complete with a cputime
>  equal to when run individually. sample timing output attached.
>

Cache pollution problems ? 

As I understand it, your job does not use much memory and does no I/O,
just linear algebra (i.e., matrix-times-vector or vector-plus-vector
operations). That implies sequential access to matrix rows and vectors.

I will try to guess...

The problem with the Linux scheduler is that processes are bounced from
one CPU to the other; they are not tied to one, nor do they try to stay
on the one they started on, even if there is no need for the CPU to do
any other job. On a UP box, the cache is useful to speed up your
matrix-vector ops. One process on a 2-way box just bounces from one CPU
to the other, and both caches end up filled with the same data. Put two
processes on two CPUs, and every time they 'swap' between CPUs each one
trashes the cache contents left by the other job, so when a job returns
to a CPU it has no data cached.

Solutions:
- cpu affinity patch: manually tie processes to cpus
- new scheduler: a patch for the scheduler that tries to
  keep processes on the cpu they started on was talked about on the list.

I would prefer the second option. I think it is named something like the
'multiqueue scheduler', and its 'father' could be (AFAIR) Davide Libenzi.
Look for that on the list archives. Problem: I think the patch only
exists for 2.5.

Request: a version for 2.4.17+ ?? (plz)

Disclaimer: of course, all the previous discussion can be crap.

Good luck. I am also interested in this problem.

-- 
J.A. Magallon                           #  Let the source be with you...        
mailto:jamagallon@able.es
Mandrake Linux release 8.2 (Cooker) for i586
Linux werewolf 2.4.18-pre1-beo #3 SMP Thu Dec 27 10:15:27 CET 2001 i686


Thread overview: 7+ messages
2002-01-02  1:00 smp cputime issues Steinar Hauan
2002-01-02  1:31 ` M. Edward Borasky
2002-01-02 13:54   ` Steinar Hauan
2002-01-03  1:27 ` J.A. Magallon [this message]
2002-01-03  2:39   ` smp cputime issues (patch request ?) Davide Libenzi
2002-01-05  5:21   ` Steinar Hauan
2002-01-05  5:51     ` John Alvord
