From: Shirley Ma <mashirle@us.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org, mingo@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	netdev@vger.kernel.org, sri@us.ibm.com, vivek@us.ibm.com
Subject: Re: [RFC PATCH 1/1] fair.c: Add/Export find_idlest_perfer_cpu API
Date: Mon, 27 Aug 2012 12:07:34 -0700	[thread overview]
Message-ID: <1346094454.4311.10.camel@oc3660625478.ibm.com> (raw)
In-Reply-To: <1345532820.23018.81.camel@twins>

On Tue, 2012-08-21 at 09:07 +0200, Peter Zijlstra wrote:
> On Mon, 2012-08-20 at 15:17 -0700, Shirley Ma wrote:
> > On Mon, 2012-08-20 at 14:00 +0200, Peter Zijlstra wrote:
> > > On Fri, 2012-08-17 at 12:46 -0700, Shirley Ma wrote:
> > > > Add/Export a new API for per-cpu thread model networking device
> > > > drivers to choose a preferred idlest cpu within the allowed
> > > > cpumask.
> > > > 
> > > > The receiving CPUs of a networking device are not under cgroup
> > > > controls. Normally the receiving work will be scheduled on the cpu
> > > > on which the interrupts are received. When such a networking device
> > > > uses the per-cpu thread model, the cpu which is chosen to process
> > > > the packets might not be part of the cgroup cpusets without using
> > > > such an API.
> > > > 
> > > > On a NUMA system, using the preferred cpumask from the same NUMA
> > > > node would help to reduce expensive cross memory access to/from
> > > > the other NUMA node.
> > > > 
> > > > KVM per-cpu vhost will be the first one to use this API. Any other
> > > > device driver which uses the per-cpu thread model and has cgroup
> > > > cpuset control can use this API later.
> > > 
> > > How often will this be called and how do you obtain the cpumasks
> > > provided to the function? 
> > 
> > It depends. It might be called pretty often if the user keeps changing
> > the cgroups control cpuset. It might be called less often if the
> > cgroups control cpuset is stable, and the host scheduler always
> > schedules the work on the same NUMA node.
> This just doesn't make any sense, you're scanning for the least loaded
> cpu, this is unrelated to a change in cpuset. So tying the scan
> frequency to changes in configuration is just broken.

Thanks for your review. I am just back from my vacation. 

Why not? The caller knows when the cpuset changes, and passes the right
NUMA node so that the idlest cpu is chosen from that node. In practice
the VMs don't change their cgroups, so the configuration will not change
frequently.

> > The preferred cpumasks are obtained from local numa node.
> 
> So why pass it as argument at all? Also, who says the current node is
> the right one? It might just be running there temporarily.

The node choice is left to the caller. The idea is to avoid running on
the same cpu as the VM, while staying on the same NUMA node as the host
thread that processes the guest's network packets.

> >  The allowed
> > cpumasks are obtained from caller's task allowed cpumasks (cgroups
> > control cpuset).
> 
> task->cpus_allowed != cpusets.. Also, since you're using
> task->cpus_allowed, pass a task_struct *, not a cpumask. 

Based on the documentation I read before, I thought cpus_allowed was the
same as the cgroup control cpuset. If not, where are the cgroup control
cpusets saved?

task->cpus_allowed is what tsk_cpus_allowed(struct task_struct *p)
returns, which is a cpumask_t.

I can change the argument from a cpumask to a task_struct *, and call
tsk_cpus_allowed() instead of accessing task->cpus_allowed directly.

Thanks
Shirley

Thread overview: 5+ messages
2012-08-17 19:46 [RFC PATCH 1/1] fair.c: Add/Export find_idlest_perfer_cpu API Shirley Ma
2012-08-20 12:00 ` Peter Zijlstra
2012-08-20 22:17   ` Shirley Ma
2012-08-21  7:07     ` Peter Zijlstra
2012-08-27 19:07       ` Shirley Ma [this message]
