From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <vlad.babchuk@gmail.com>
Cc: Artem_Mygaiev@epam.com, xen-devel@lists.xensource.com,
	Andrii Anisov <andrii_anisov@epam.com>
Subject: Re: Notes on stubdoms and latency on ARM
Date: Thu, 20 Jul 2017 11:10:57 +0200	[thread overview]
Message-ID: <1500541857.20438.6.camel@citrix.com> (raw)
In-Reply-To: <3121c88c-fbda-a494-ce91-b06fa0fc10f3@citrix.com>



On Mon, 2017-07-17 at 12:28 +0100, George Dunlap wrote:
> Most schedulers have one runqueue per logical cpu.  Credit2 has the
> option of having one runqueue per logical cpu, one per core (i.e.,
> hyperthreads share a runqueue), one runqueue per socket (i.e., all
> cores
> on the same socket share a runqueue), or one socket across the whole
> system.  
>
You mean "or one runqueue across the whole system", I guess? :-)

> I *think* we made one socket per core the default a while back
> to deal with multithreading, but I may not be remembering correctly.
> 
We've had per-core runqueues as the default, to deal with hyperthreading,
for some time. Nowadays, hyperthreading is handled independently of the
runqueue arrangement, and so the current default is one runqueue per
socket.
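
FWIW, if memory serves, the arrangement can be picked at boot time with
the credit2_runqueue hypervisor command line parameter (please double
check docs/misc/xen-command-line.markdown for the authoritative list of
values):

  credit2_runqueue=cpu     (one runqueue per logical CPU)
  credit2_runqueue=core    (one per physical core)
  credit2_runqueue=socket  (one per socket; the current default)
  credit2_runqueue=node    (one per NUMA node)
  credit2_runqueue=all     (one for the whole system)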

> In any case, if you don't have threads, then reporting each logical
> cpu as its own core is the right thing to do.
> 
Yep.

> If you're mis-reporting sockets, then the scheduler will be unable to
> take that into account.  
>
And if this means that each logical CPU is also reported as being its
own socket, then you have one runqueue per logical CPU.

> But that's not usually going to be a major
> issue, mainly because the scheduler is not actually in a position to
> determine, most of the time, which is the optimal configuration.  If
> two
> vcpus are communicating a lot, then the optimal configuration is to
> put
> them on different cores of the same socket (so they can share an L3
> cache); if two vcpus are computing independently, then the optimal
> configuration is to put them on different sockets, so they can each
> have
> their own L3 cache. 
>
This is all very true. However, if two CPUs share one runqueue, vCPUs
will seamlessly move between the two CPUs, without having to wait for
the load balancing logic to kick in. This is a rather cheap way of
achieving good fairness and load balancing, but it is only effective if
this movement itself is cheap, which is probably the case if, e.g., the
CPUs share some level of cache.

So, figuring out the best runqueue arrangement automatically is rather
hard, as it depends both on the workload and on the hardware
characteristics of the platform; but having at least some degree of
runqueue sharing, among the CPUs that have some cache levels in common,
would be, IMO, our best bet.

And we do need topology information to try to do that. (We would also
need, in the Credit2 code, to take cache and memory hierarchy
information more into account, rather than "just" CPU topology. We're
already working, for instance, on changing CSCHED2_MIGRATE_RESIST from
being a constant to varying depending on the amount of cache sharing
between two CPUs.)
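
Just to give an idea of the direction (purely illustrative, not the
actual patch: shares_l2()/shares_llc() are made-up topology helpers,
the dividers are arbitrary, and s_time_t / CSCHED2_MIGRATE_RESIST are
what sched_credit2.c already uses):

  /* Let the migration resistance depend on how much cache the source
   * and destination CPUs share: the more they share, the cheaper the
   * migration, the lower the resistance. */
  static s_time_t migrate_resist(unsigned int from_cpu, unsigned int to_cpu)
  {
      if ( shares_l2(from_cpu, to_cpu) )   /* e.g., SMT siblings */
          return CSCHED2_MIGRATE_RESIST / 4;
      if ( shares_llc(from_cpu, to_cpu) )  /* same L3 / same socket */
          return CSCHED2_MIGRATE_RESIST / 2;
      return CSCHED2_MIGRATE_RESIST;       /* no cache shared */
  }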

> All that to say: It shouldn't be a major issue if you are mis-
> reporting
> sockets. :-)
> 
Maybe yes, maybe not. It may actually even be better on some
combinations of platforms and workloads, indeed... but it also means
that the Credit2 load balancer is being invoked a lot, which may not be
ideal.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


