public inbox for linux-kernel@vger.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: Shirley Ma <mashirle@us.ibm.com>
Cc: linux-kernel@vger.kernel.org, mingo@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	netdev@vger.kernel.org, sri@us.ibm.com, vivek@us.ibm.com
Subject: Re: [RFC PATCH 1/1] fair.c: Add/Export find_idlest_perfer_cpu API
Date: Tue, 21 Aug 2012 09:07:00 +0200	[thread overview]
Message-ID: <1345532820.23018.81.camel@twins> (raw)
In-Reply-To: <1345501041.6378.10.camel@oc3660625478.ibm.com>

On Mon, 2012-08-20 at 15:17 -0700, Shirley Ma wrote:
> On Mon, 2012-08-20 at 14:00 +0200, Peter Zijlstra wrote:
> > On Fri, 2012-08-17 at 12:46 -0700, Shirley Ma wrote:
> > > Add/Export a new API for per-cpu thread model networking device
> > > drivers to choose a preferred idlest cpu within an allowed cpumask.
> > > 
> > > The receiving CPUs of a networking device are not under cgroup
> > > controls. Normally the receiving work will be scheduled on the cpu
> > > on which the interrupts are received. When such a networking device
> > > uses a per-cpu thread model, the cpu which is chosen to process the
> > > packets might not be part of the cgroup cpusets without such an API.
> > > 
> > > On a NUMA system, using the preferred cpumask from the same NUMA
> > > node would help reduce expensive cross-memory access to/from other
> > > NUMA nodes.
> > > 
> > > KVM per-cpu vhost will be the first user of this API. Other device
> > > drivers which use a per-cpu thread model and have cgroup cpuset
> > > control will use this API later.
> > 
> > How often will this be called and how do you obtain the cpumasks
> > provided to the function? 
> 
> It depends. It might be called pretty often if the user keeps changing
> the cgroups control cpuset. It might be called less often if the
> cgroups control cpuset is stable and the host scheduler always
> schedules the work on the same NUMA node.

This just doesn't make any sense: you're scanning for the least loaded
cpu, which is unrelated to a change in cpuset. So tying the scan
frequency to changes in configuration is just broken.

> The preferred cpumasks are obtained from local numa node.

So why pass it as an argument at all? Also, who says the current node is
the right one? It might just be running there temporarily.

>  The allowed
> cpumasks are obtained from caller's task allowed cpumasks (cgroups
> control cpuset).

task->cpus_allowed != cpusets. Also, since you're using
task->cpus_allowed, pass a task_struct *, not a cpumask.



Thread overview: 5+ messages
2012-08-17 19:46 [RFC PATCH 1/1] fair.c: Add/Export find_idlest_perfer_cpu API Shirley Ma
2012-08-20 12:00 ` Peter Zijlstra
2012-08-20 22:17   ` Shirley Ma
2012-08-21  7:07     ` Peter Zijlstra [this message]
2012-08-27 19:07       ` Shirley Ma
