From: Amir Vadai <amirv@mellanox.com>
To: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: "David S. Miller" <davem@davemloft.net>,
<linux-pm@vger.kernel.org>, <netdev@vger.kernel.org>,
Pavel Machek <pavel@ucw.cz>, Len Brown <len.brown@intel.com>,
<yuvali@mellanox.com>, Or Gerlitz <ogerlitz@mellanox.com>,
Yevgeny Petrilin <yevgenyp@mellanox.com>, <idos@mellanox.com>,
<hadarh@mellanox.com>
Subject: Re: [RFC 1/2] pm: Introduce QoS requests per CPU
Date: Wed, 26 Mar 2014 17:40:15 +0200
Message-ID: <5332F4DF.2030106@mellanox.com>
In-Reply-To: <2896374.SoOPVJXu9Q@vostro.rjw.lan>
[This mail might be double-posted due to problems I am having with the
mail server]
On 25/03/14 19:44 +0100, Rafael J. Wysocki wrote:
> On Tuesday, March 25, 2014 03:18:24 PM Amir Vadai wrote:
> > Extend the current pm_qos_request API to support a pm_qos_request per
> > core. When a global request is added, it is added to the global plist.
> > When a core-specific request is added, it is added to that core's own
> > list. The core number is saved in the request, and later modify/delete
> > operations use it to access the right list.
> >
> > When a CPU-specific request is added/removed/updated, the target value
> > of that core is recalculated as the min/max (according to the
> > constraint type) over all the global and CPU-specific constraints.
> >
> > If a global request is added/removed/updated, the target values of all
> > the CPUs are recalculated.
> >
> > During initialization, before the CPU-specific data structures are
> > allocated and initialized, only the global target value is used.
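
In other words, a CPU's target is just the extremum over the global list
and that CPU's own list. A rough sketch of what the recalculation
described above amounts to (the helper and the percpu_list field are
hypothetical stand-ins, not the patch's actual code):

	static s32 recalc_cpu_target(struct pm_qos_constraints *c, int cpu)
	{
		/* Extremum of the global list (hypothetical helper). */
		s32 global = plist_extreme_value(&c->list, c->type);
		/* Extremum of this CPU's private list (per-CPU list as
		 * described in the RFC). */
		s32 local = plist_extreme_value(&c->percpu_list[cpu], c->type);

		return c->type == PM_QOS_MIN ? min(global, local)
					     : max(global, local);
	}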
>
> I have to review this in detail (which won't be possible before next
> week), but in principle I don't really like it, because it
> assumes that its users will know what's going to run on which CPU cores
> and I'm not sure where that knowledge is going to come from.
>
The network driver can use the affinity hint and the IRQ balancer to
pin an IRQ (and its RX ring) to a core. A stream is tied to an RX ring
by RSS/flow steering, so for an active flow we know which CPU will
service it.
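
For illustration, the per-ring pinning amounts to something like this
sketch, using the existing irq_set_affinity_hint() API (the ring
structure and function name are made up for the example):

	static int rx_ring_bind_cpu(struct my_rx_ring *ring, int cpu)
	{
		cpumask_clear(&ring->affinity_mask);
		cpumask_set_cpu(cpu, &ring->affinity_mask);

		/* Publish the hint; irqbalance reads it and steers the
		 * ring's IRQ to the chosen core. */
		return irq_set_affinity_hint(ring->irq, &ring->affinity_mask);
	}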
The feature could be turned off by default and only enabled when the
driver has all the information it needs.
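
Driver-side usage could then look roughly like the following;
pm_qos_add_request_cpu() is only a sketch of the per-CPU variant this
RFC proposes, and MAX_EXIT_LATENCY_USEC is a made-up driver constant:

	/* A flow became active on a ring bound to 'cpu': cap that core's
	 * exit latency so deep C-states cannot cause packet loss. */
	pm_qos_add_request_cpu(&ring->qos_req, PM_QOS_CPU_DMA_LATENCY,
			       MAX_EXIT_LATENCY_USEC, cpu);

	/* The flow went idle: drop the constraint and let the core enter
	 * deep C-states again. */
	pm_qos_remove_request(&ring->qos_req);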
Thank you for your time,
Amir
Thread overview: 11+ messages
2014-03-25 13:18 [RFC 0/2] pm,net: Introduce QoS requests per CPU Amir Vadai
2014-03-25 13:18 ` [RFC 1/2] pm: " Amir Vadai
2014-03-25 18:44 ` Rafael J. Wysocki
2014-03-26 15:40 ` Amir Vadai [this message]
2014-03-26 17:36 ` Jeremy Eder
2014-03-27 19:41 ` Amir Vadai
2014-03-25 13:18 ` [RFC 2/2] net/mlx4_en: Use pm_qos API to avoid packet loss in high CPU c-states Amir Vadai
2014-03-25 15:14 ` [RFC 0/2] pm,net: Introduce QoS requests per CPU Eric Dumazet
2014-03-25 22:47 ` Ben Hutchings
2014-03-26 7:12 ` Yevgeny Petrilin
2014-03-26 15:42 ` Amir Vadai