public inbox for linux-kernel@vger.kernel.org
From: "Artem S. Tashkinov" <t.artem@lycos.com>
To: hmh@hmh.eng.br
Cc: linux-kernel@vger.kernel.org
Subject: Re: Re: HT (Hyper Threading) aware process scheduling doesn't work as it should
Date: Sun, 30 Oct 2011 21:51:17 +0000 (GMT)	[thread overview]
Message-ID: <815860869.50724.1320011477430.JavaMail.mail@webmail17> (raw)
In-Reply-To: <20111030212644.GA7106@khazad-dum.debian.net>

> On Oct 31, 2011, Henrique de Moraes Holschuh wrote: 
>
> On Sun, 30 Oct 2011, Artem S. Tashkinov wrote:
> > I've found out that even on Linux 3.0.8 the process scheduler doesn't correctly distribute
> > the load amongst virtual CPUs. E.g. on a 4-core system (8 virtual CPUs in total) the
> > scheduler often runs instances of four different tasks on the same physical CPU.
> 
> Please check your sched_mc_power_savings and sched_smt_power_savings
> tunables. Here's the doc from lesswatts.org:
> 
> [cut]
> 
> Please make sure both are set to 0.  If they were not 0 at the time you
> ran your tests, please retest and report back.
> 
> You also want to make sure you _do_ have the SMT scheduler compiled into
> whatever kernel you're using, just in case.
> 
> It is certainly possible that there is a bug in the scheduler, but it is
> best to make sure it is not something else, first.
> 
> You may also want to refer to: http://oss.intel.com/pdfs/mclinux.pdf and
> to the irqbalance and hwloc[1] utilities, since you're apparently
> interested in SMP/SMT/NUMA scheduler performance.

That's 0 & 0 for me.
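For reference, this is roughly how I check them via sysfs (a sketch; these knobs only exist on kernels built with CONFIG_SCHED_MC / CONFIG_SCHED_SMT, and the paths are the ones documented for kernels of this era):

```shell
# Print the two scheduler power-saving tunables (0 means spread the
# load across physical cores first, which is what we want here).
for f in /sys/devices/system/cpu/sched_mc_power_savings \
         /sys/devices/system/cpu/sched_smt_power_savings; do
    if [ -f "$f" ]; then
        printf '%s = %s\n' "$f" "$(cat "$f")"
    else
        printf '%s: not present (option not compiled in?)\n' "$f"
    fi
done

# To force both to 0 before retesting (needs root):
# echo 0 > /sys/devices/system/cpu/sched_mc_power_savings
# echo 0 > /sys/devices/system/cpu/sched_smt_power_savings
```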

And people running standard desktop Linux distributions (such as Arch Linux and Ubuntu
11.10) report that this issue affects them as well; in those distros both variables
default to 0 (i.e. unchanged).

So, there's nothing to retest.
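For anyone wanting to reproduce the observation, the sibling map can be read from sysfs; something like this (a sketch, assuming the standard topology files are present) shows which logical CPUs share a physical core, which makes it easy to spot when busy tasks land on siblings:

```shell
# For each logical CPU, list the logical CPUs that share its
# physical core. On an HT machine each core shows up as a pair,
# e.g. something like "cpu0: siblings 0,4" on a 4-core/8-thread box.
for c in /sys/devices/system/cpu/cpu[0-9]*; do
    s="$c/topology/thread_siblings_list"
    [ -f "$s" ] && printf '%s: siblings %s\n' "$(basename "$c")" "$(cat "$s")"
done
```

Comparing that against which CPUs `top` shows the tasks on tells you whether the scheduler stacked them onto one core.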

I have another major pet peeve concerning the Linux process scheduler but I
want to start a new thread on that topic.

Best wishes,

Artem



Thread overview: 25+ messages
2011-10-30 19:57 HT (Hyper Threading) aware process scheduling doesn't work as it should Artem S. Tashkinov
2011-10-30 21:26 ` Henrique de Moraes Holschuh
2011-10-30 21:51   ` Artem S. Tashkinov [this message]
2011-10-31  9:16     ` Henrique de Moraes Holschuh
2011-10-31  9:40       ` Artem S. Tashkinov
2011-10-31 11:58         ` Henrique de Moraes Holschuh
2011-11-01  4:14           ` Zhu Yanhai
2011-11-01  5:15         ` ffab ffa
2011-10-31 18:59   ` Chris Friesen
2011-11-01  6:01     ` Mike Galbraith
2011-10-30 22:12 ` Arjan van de Ven
2011-10-30 22:29   ` Artem S. Tashkinov
2011-10-31  3:19     ` Yong Zhang
2011-10-31  8:18       ` Artem S. Tashkinov
2011-10-31 10:06 ` Con Kolivas
2011-10-31 11:42   ` Mike Galbraith
2011-11-01  0:41     ` Con Kolivas
2011-11-01  0:58       ` Gene Heskett
2011-11-01  5:08       ` Mike Galbraith
2011-11-03  8:18 ` Ingo Molnar
2011-11-03  9:44   ` Artem S. Tashkinov
2011-11-03 10:29     ` Ingo Molnar
2011-11-03 12:42     ` Henrique de Moraes Holschuh
2011-11-03 13:06       ` Artem S. Tashkinov
2011-11-03 13:00   ` Mike Galbraith
