From: Dario Faggioli <dario.faggioli@citrix.com>
To: Praveen Kumar <kpraveen.lkml@gmail.com>, george.dunlap@eu.citrix.com
Cc: xen-devel@lists.xen.org
Subject: Re: [PATCH] xen: credit2: enable per cpu runqueue creation
Date: Fri, 9 Jun 2017 18:47:21 +0200
Message-ID: <1497026841.26212.15.camel@citrix.com>
In-Reply-To: <20170411161517.1800-1-kpraveen.lkml@gmail.com>



On Tue, 2017-04-11 at 21:45 +0530, Praveen Kumar wrote:
> The patch introduces a new command line option 'cpu' which, when used,
> will create a runqueue per logical pCPU. This may be useful for small
> systems, and also for development, performance evaluation and comparison.
> 
> Signed-off-by: Praveen Kumar <kpraveen.lkml@gmail.com>
> Reviewed-by: Dario Faggioli <dario.faggioli@citrix.com>
> 
Hey George,

I don't see this patch in staging, nor do I think you've commented on it.

IIRC, it was sent very close to the feature freeze... So, is it possible
that it fell through the cracks? :-)

Any thoughts on it? If not, how about applying it? :-D

Thanks and Regards,
Dario

> ---
>  docs/misc/xen-command-line.markdown |  3 ++-
>  xen/common/sched_credit2.c          | 15 +++++++++++----
>  2 files changed, 13 insertions(+), 5 deletions(-)
> 
> diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
> index 5815d87dab..6e73766574 100644
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -525,7 +525,7 @@ also slow in responding to load changes.
>  The default value of `1 sec` is rather long.
>  
>  ### credit2\_runqueue
> -> `= core | socket | node | all`
> +> `= cpu | core | socket | node | all`
>  
>  > Default: `socket`
>  
> @@ -536,6 +536,7 @@ balancing (for instance, it will deal better with hyperthreading),
>  but also more overhead.
>  
>  Available alternatives, with their meaning, are:
> +* `cpu`: one runqueue per each logical pCPU of the host;
>  * `core`: one runqueue per each physical core of the host;
>  * `socket`: one runqueue per each physical socket (which often,
>              but not always, matches a NUMA node) of the host;
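
For reference, with this in, picking the new arrangement is then just a
matter of appending the option to the hypervisor command line. A minimal,
illustrative GRUB2 entry (the image paths and the Linux module line are
made-up placeholders):

    multiboot2 /boot/xen.gz sched=credit2 credit2_runqueue=cpu
    module2 /boot/vmlinuz root=/dev/xvda1

The resulting layout can then be sanity-checked in the hypervisor log
(e.g., via 'xl dmesg'), where Credit2 reports which runqueue each pCPU
gets added to.
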
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index bb1c657e76..ee7b443f9e 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -301,6 +301,9 @@ integer_param("credit2_balance_over", opt_overload_balance_tolerance);
>   * want that to happen basing on topology. At the moment, it is possible
>   * to choose to arrange runqueues to be:
>   *
> + * - per-cpu: meaning that there will be one runqueue per logical cpu. This
> + *            will happen if the opt_runqueue parameter is set to 'cpu';
> + *
>   * - per-core: meaning that there will be one runqueue per each physical
>   *             core of the host. This will happen if the opt_runqueue
>   *             parameter is set to 'core';
> @@ -322,11 +325,13 @@ integer_param("credit2_balance_over", opt_overload_balance_tolerance);
>   * either the same physical core, the same physical socket, the same NUMA
>   * node, or just all of them, will be put together to form runqueues.
>   */
> -#define OPT_RUNQUEUE_CORE   0
> -#define OPT_RUNQUEUE_SOCKET 1
> -#define OPT_RUNQUEUE_NODE   2
> -#define OPT_RUNQUEUE_ALL    3
> +#define OPT_RUNQUEUE_CPU    0
> +#define OPT_RUNQUEUE_CORE   1
> +#define OPT_RUNQUEUE_SOCKET 2
> +#define OPT_RUNQUEUE_NODE   3
> +#define OPT_RUNQUEUE_ALL    4
>  static const char *const opt_runqueue_str[] = {
> +    [OPT_RUNQUEUE_CPU] = "cpu",
>      [OPT_RUNQUEUE_CORE] = "core",
>      [OPT_RUNQUEUE_SOCKET] = "socket",
>      [OPT_RUNQUEUE_NODE] = "node",
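
FWIW, just to illustrate how the table above works, here's a minimal,
self-contained sketch (plain C, compilable on its own; parse_runqueue()
is a made-up helper, not the actual Xen parsing code) of mapping the
'credit2_runqueue=' string onto the OPT_RUNQUEUE_* indices:

    #include <stdio.h>
    #include <string.h>

    /* Same table as in the patch; the array index is the OPT_RUNQUEUE_* value. */
    static const char *const opt_runqueue_str[] = {
        "cpu", "core", "socket", "node", "all",
    };

    /* Hypothetical helper: map the option string to its index, -1 if unknown. */
    static int parse_runqueue(const char *s)
    {
        unsigned int i;

        for ( i = 0; i < sizeof(opt_runqueue_str)/sizeof(opt_runqueue_str[0]); i++ )
            if ( !strcmp(s, opt_runqueue_str[i]) )
                return i;
        return -1;
    }

    int main(void)
    {
        printf("cpu   -> %d\n", parse_runqueue("cpu"));    /* 0 (OPT_RUNQUEUE_CPU) */
        printf("node  -> %d\n", parse_runqueue("node"));   /* 3 (OPT_RUNQUEUE_NODE) */
        printf("bogus -> %d\n", parse_runqueue("bogus"));  /* -1: unknown value */
        return 0;
    }

The array index doubles as the option value, which is why the new
OPT_RUNQUEUE_CPU slots in as 0 and shifts the others up by one.
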
> @@ -682,6 +687,8 @@ cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
>          BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
>                 cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
>  
> +        if ( opt_runqueue == OPT_RUNQUEUE_CPU )
> +            continue;
>          if ( opt_runqueue == OPT_RUNQUEUE_ALL ||
>               (opt_runqueue == OPT_RUNQUEUE_CORE && same_core(peer_cpu, cpu)) ||
>               (opt_runqueue == OPT_RUNQUEUE_SOCKET && same_socket(peer_cpu, cpu)) ||
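
And, for completeness, a toy model of the matching logic above (made-up
topology arrays and helper names, not Xen code), showing why the early
'continue', i.e., never considering any peer_cpu a match, is all it takes
to give each pCPU its own runqueue:

    #include <stdbool.h>
    #include <stdio.h>

    enum { OPT_RUNQUEUE_CPU, OPT_RUNQUEUE_CORE, OPT_RUNQUEUE_SOCKET,
           OPT_RUNQUEUE_NODE, OPT_RUNQUEUE_ALL };

    /* Made-up 4-pCPU topology: two cores, one socket. */
    static const int core_of[]   = { 0, 0, 1, 1 };
    static const int socket_of[] = { 0, 0, 0, 0 };

    /* Simplified stand-in for the test in cpu_to_runqueue(): do peer_cpu
     * and cpu belong in the same runqueue under the chosen policy? */
    static bool share_runqueue(int opt, int peer_cpu, int cpu)
    {
        if ( opt == OPT_RUNQUEUE_CPU )
            return false;  /* the 'continue' in the patch: never share */

        return opt == OPT_RUNQUEUE_ALL ||
               (opt == OPT_RUNQUEUE_CORE &&
                core_of[peer_cpu] == core_of[cpu]) ||
               (opt == OPT_RUNQUEUE_SOCKET &&
                socket_of[peer_cpu] == socket_of[cpu]);
    }

    int main(void)
    {
        /* cpus 0 and 1 share a core: they share a runqueue per-core... */
        printf("core policy, cpus 0,1: %d\n",
               share_runqueue(OPT_RUNQUEUE_CORE, 0, 1));  /* 1 */
        /* ...but never under the new per-cpu policy. */
        printf("cpu policy,  cpus 0,1: %d\n",
               share_runqueue(OPT_RUNQUEUE_CPU, 0, 1));   /* 0 */
        return 0;
    }

With no peer ever matching, every pCPU ends up with a runqueue of its
own, which is exactly what the new 'cpu' mode is about.
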
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

