From: Hubertus Franke <frankeh@watson.ibm.com>
To: Davide Libenzi <davidel@xmailserver.org>
Cc: Mike Kravetz <kravetz@us.ibm.com>,
lkml <linux-kernel@vger.kernel.org>,
lse-tech@lists.sourceforge.net
Subject: Re: [Lse-tech] Re: [PATCH][RFC] Proposal For A More Scalable Scheduler ...
Date: Fri, 2 Nov 2001 07:20:36 -0500 [thread overview]
Message-ID: <20011102072036.D17792@watson.ibm.com> (raw)
In-Reply-To: <20011031151243.E1105@w-mikek2.des.beaverton.ibm.com> <Pine.LNX.4.40.0110311544330.1484-100000@blue1.dev.mcafeelabs.com>
In-Reply-To: <Pine.LNX.4.40.0110311544330.1484-100000@blue1.dev.mcafeelabs.com>; from Davide Libenzi on Wed, Oct 31, 2001 at 03:53:39PM -0800
* Davide Libenzi <davidel@xmailserver.org> [20011031 18:53]:
> On Wed, 31 Oct 2001, Mike Kravetz wrote:
>
> > I'm going to try and merge your 'cache warmth' replacement for
> > PROC_CHANGE_PENALTY into the LSE MQ scheduler, as well as enable
> > the code to prevent task stealing during IPI delivery. This
> > should still be significantly different than your design because
> > MQ will still attempt to make global decisions. Results should
> > be interesting.
>
> I'm currently evaluating different weights for that.
> Right now I'm using :
>
> 	if (p->cpu_jtime > jiffies)
> 		weight += p->cpu_jtime - jiffies;
>
> that might be too much.
> Solutions :
>
> 1)
> 	if (p->cpu_jtime > jiffies)
> 		weight += (p->cpu_jtime - jiffies) >> 1;
>
> 2)
> int wtable[];
>
> 	if (p->cpu_jtime > jiffies)
> 		weight += wtable[p->cpu_jtime - jiffies];
>
> Speed will favor 1).
> Another optimization concerns jiffies, which is volatile and therefore
> forces gcc to reload it on every access.
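For reference, the two weighting variants above can be sketched in plain
userspace C. Only the names `cpu_jtime` and `wtable` come from the thread;
the table contents and size here are illustrative, not from any patch:

```c
#include <assert.h>

/* Option 1: halve the remaining cache-warmth window (a cheap shift). */
static int warmth_shift(unsigned long cpu_jtime, unsigned long jiffies_now)
{
	if (cpu_jtime > jiffies_now)
		return (int)((cpu_jtime - jiffies_now) >> 1);
	return 0;
}

/* Option 2: map the remaining window through a tunable weight table.
 * These values are made up for illustration; a real table would be
 * sized to the maximum warmth window and filled with a decay curve. */
static const int wtable[] = { 0, 1, 2, 4, 6, 8, 10, 12 };
#define WTABLE_MAX ((int)(sizeof(wtable) / sizeof(wtable[0])) - 1)

static int warmth_table(unsigned long cpu_jtime, unsigned long jiffies_now)
{
	if (cpu_jtime > jiffies_now) {
		unsigned long d = cpu_jtime - jiffies_now;
		if (d > (unsigned long)WTABLE_MAX)
			d = WTABLE_MAX;	/* clamp to the table bounds */
		return wtable[d];
	}
	return 0;
}
```

The shift variant needs no memory load, which is why it wins on speed; the
table variant buys an arbitrary decay shape at the cost of a cache access.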
>
> static inline int goodness(struct task_struct * p, struct mm_struct
> *this_mm, unsigned long jiff)
>
> might be better, with jiffies taken out of the goodness loop.
> Mike, I suggest you use the LatSched patch to 1) see how the scheduler is
> really performing and 2) understand whether certain tests give certain
> results due to weird distributions.
>
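The hoisting idea above can be sketched in userspace C. The `task` struct
and goodness body are simplified stand-ins, not the kernel's: the point is
that the volatile `jiffies` is read once before the scan and the snapshot
is passed down, so the compiler can keep it in a register:

```c
#include <assert.h>

/* Stand-in for the kernel's volatile jiffies counter. */
static volatile unsigned long jiffies = 1000;

struct task {
	unsigned long cpu_jtime;	/* jiffy until which the cache is warm */
	int base_goodness;		/* stand-in for the usual goodness terms */
};

/* jiffies is snapshotted by the caller and passed in as 'jiff',
 * so this function never touches the volatile itself. */
static int goodness(const struct task *p, unsigned long jiff)
{
	int weight = p->base_goodness;
	if (p->cpu_jtime > jiff)
		weight += (int)(p->cpu_jtime - jiff);
	return weight;
}

static int best_goodness(const struct task *tasks, int n)
{
	unsigned long jiff = jiffies;	/* single volatile read */
	int best = -1, i;

	for (i = 0; i < n; i++) {
		int w = goodness(&tasks[i], jiff);
		if (w > best)
			best = w;
	}
	return best;
}
```

Without the parameter, each loop iteration would be obliged to reload
`jiffies` from memory, since volatile forbids caching it in a register.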
One more observation. Throughout our MQ evaluation it was also true that
overall performance, particularly at large thread counts, was very
sensitive to the goodness function; that is why na_goodness_local was
introduced.
-- Hubertus
Thread overview: 12+ messages
[not found] <20011031151243.E1105@w-mikek2.des.beaverton.ibm.com>
2001-10-31 23:53 ` [Lse-tech] Re: [PATCH][RFC] Proposal For A More Scalable Scheduler Davide Libenzi
2001-11-02 12:20 ` Hubertus Franke [this message]
2001-11-02 16:58 ` Mike Kravetz
2001-11-03 22:10 ` Davide Libenzi
2001-10-30 16:29 Hubertus Franke
2001-10-30 18:50 ` Davide Libenzi
2001-10-30 16:52 ` Hubertus Franke
2001-10-30 19:08 ` [Lse-tech] " Mike Kravetz
-- strict thread matches above, loose matches on Subject: below --
2001-10-30 14:28 Hubertus Franke
2001-10-30 17:19 ` Davide Libenzi
2001-10-31 0:11 ` [Lse-tech] " Mike Kravetz
2001-10-31 1:06 ` Davide Libenzi
2001-10-31 5:29 ` Mike Kravetz
2001-10-31 4:45 ` Davide Libenzi
2001-10-31 5:50 ` Mike Kravetz
2001-10-31 17:07 ` Mike Kravetz
2001-10-31 17:59 ` Davide Libenzi