From: Luiz Capitulino <lcapitulino@redhat.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Thomas Gleixner <tglx@linutronix.de>,
Vikas Shivappa <vikas.shivappa@intel.com>,
Tejun Heo <tj@kernel.org>, Yu Fenghua <fenghua.yu@intel.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC] ioctl based CAT interface
Date: Fri, 13 Nov 2015 14:04:38 -0500
Message-ID: <20151113140438.3d8e2e11@redhat.com>
In-Reply-To: <20151113172740.GA13490@amt.cnet>
On Fri, 13 Nov 2015 15:27:40 -0200
Marcelo Tosatti <mtosatti@redhat.com> wrote:
> On Fri, Nov 13, 2015 at 05:51:00PM +0100, Peter Zijlstra wrote:
> > On Fri, Nov 13, 2015 at 02:39:33PM -0200, Marcelo Tosatti wrote:
> > > + * * one tcrid entry can be in different locations
> > > + * in different sockets.
> >
> > NAK on that without cpuset integration.
> >
> > I do not want freely migratable tasks having radically different
> > performance profiles depending on which CPU they land.
>
> Please expand on what "cpuset integration" means, operationally.
> I hope it does not mean "i prefer cgroups as an interface",
> because that does not mean much to me.
I guess what Peter is saying is that we don't want tasks
attached to a reservation landing on a CPU where the reservation
might be different or might not exist at all.
Peter, what about integrating this with affinity masks instead
of cpusets? (I have no idea how cpusets are implemented, but I'd
guess they are a superset of affinity masks.)
This way, the ATTACH_RESERVATION command would fail if any
of the CPUs in the task's cpumask are not part of the reservation.
Our code would then have to be notified whenever the process's
affinity mask changes (we either fail the affinity change
or detach the process from the reservation automatically). Does
this sound like a good solution?
>
> So you are saying this should be based on cgroups? Have you seen the
> cgroups proposal and the issues with it, that have been posted?
>
Thread overview: 27+ messages
2015-11-13 16:39 [PATCH RFC] ioctl based CAT interface Marcelo Tosatti
2015-11-13 16:51 ` Peter Zijlstra
2015-11-13 17:27 ` Marcelo Tosatti
2015-11-13 17:43 ` Marcelo Tosatti
2015-11-16 8:59 ` Peter Zijlstra
2015-11-16 13:03 ` Marcelo Tosatti
2015-11-16 14:42 ` Thomas Gleixner
2015-11-16 19:52 ` Marcelo Tosatti
2015-11-16 15:01 ` Peter Zijlstra
2015-11-16 19:54 ` Marcelo Tosatti
2015-11-16 21:22 ` Marcelo Tosatti
2015-11-13 19:04 ` Luiz Capitulino [this message]
2015-11-13 20:22 ` Marcelo Tosatti
2015-11-16 9:03 ` Peter Zijlstra
2015-11-13 17:33 ` Marcelo Tosatti
2015-11-16 9:07 ` Peter Zijlstra
2015-11-16 14:37 ` Marcelo Tosatti
2015-11-16 15:37 ` Peter Zijlstra
2015-11-16 16:18 ` Luiz Capitulino
2015-11-16 16:26 ` Peter Zijlstra
2015-11-16 16:48 ` Luiz Capitulino
2015-11-16 16:39 ` Marcelo Tosatti
2015-11-17 1:01 ` Marcelo Tosatti
2015-11-13 18:01 ` Marcelo Tosatti
2015-11-16 9:09 ` Peter Zijlstra
2015-11-13 19:08 ` Luiz Capitulino
2015-12-03 21:58 ` Pavel Machek