From: Dario Faggioli <dario.faggioli@citrix.com>
To: "Justin T. Weaver" <jtweaver@hawaii.edu>
Cc: george.dunlap@eu.citrix.com, henric@hawaii.edu, xen-devel@lists.xen.org
Subject: Re: [PATCH v3 0/4] sched: credit2: introduce per-vcpu hard and soft affinity
Date: Thu, 17 Sep 2015 16:27:20 +0200	[thread overview]
Message-ID: <1442500040.15327.87.camel@citrix.com> (raw)
In-Reply-To: <1427363314-25430-1-git-send-email-jtweaver@hawaii.edu>



On Wed, 2015-03-25 at 23:48 -1000, Justin T. Weaver wrote:

> Here are the results I gathered from testing. Each guest had 2 vcpus and 1GB
> of memory. 
>
Hey, thanks for doing the benchmarking as well! :-)

> The hardware consisted of two quad core Intel Xeon X5570 processors
> and 8GB of RAM per node. The sysbench memory test was run with the num-threads
> option set to four, and was run simultaneously on two, then six, then ten VMs.
> Each result below is an average of three runs.
> 
> -------------------------------------------------------
> | Sysbench memory, throughput MB/s (higher is better) |
> -------------------------------------------------------
> | #VMs |  No affinity  |   Pinning  | NUMA scheduling |
> |   2  |    417.01     |    406.16  |     428.83      |
> |   6  |    389.31     |    407.07  |     402.90      |
> |  10  |    317.91     |    320.53  |     321.98      |
> -------------------------------------------------------
> 
> Despite the overhead added, NUMA scheduling performed best in both the two and
> ten VM tests.
> 
Nice. Just to be sure, is my understanding of the column labels
accurate? (There's a quick sketch of what I mean by the last two just
after the list.)
 - 'No affinity'     == neither hard nor soft affinity set for any VM
 - 'Pinning'         == hard affinity used to pin VMs to NUMA nodes
                        (evenly, I guess?); soft affinity untouched
 - 'NUMA scheduling' == soft affinity used to associate VMs to NUMA
                        nodes (evenly, I guess?); hard affinity
                        untouched
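
For reference, here's a minimal sketch of how I'd express the last two
cases in an xl guest config (assuming the xl syntax from Xen 4.5
onward; the pCPU list is only illustrative, say one node of this box):

  # 'Pinning': hard affinity to node 0's pCPUs; soft affinity untouched
  cpus      = "0-3"

  # 'NUMA scheduling': soft affinity to node 0's pCPUs; hard affinity untouched
  cpus_soft = "0-3"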

Also, can you confirm that all the hard and soft affinity settings were
done at VM creation time, i.e., that they were effectively influencing
where the memory of the VMs was being allocated? (It looks like so,
from the numbers, but I wanted to be sure...)
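
Just to illustrate the distinction I'm after (the domain and config
file names are made up, and I'm again assuming xl from Xen 4.5 onward):

  # Affinity set in the config file is applied before memory is
  # allocated, so the guest's memory comes from the matching node(s):
  xl create guest.cfg          # guest.cfg contains: cpus = "0-3"

  # Changing affinity after creation only moves the vCPUs around; the
  # memory stays on the node(s) where it was originally allocated:
  xl vcpu-pin guest all 0-3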

Thanks again and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




Thread overview: 17+ messages
2015-03-26  9:48 [PATCH v3 0/4] sched: credit2: introduce per-vcpu hard and soft affinity Justin T. Weaver
2015-03-26  9:48 ` [PATCH v3 1/4] sched: credit2: respect per-vcpu hard affinity Justin T. Weaver
2015-03-31 14:37   ` George Dunlap
2015-03-31 17:14     ` Dario Faggioli
2015-03-31 17:32       ` George Dunlap
2015-04-23 16:00     ` Dario Faggioli
2015-05-06 12:39   ` Dario Faggioli
2015-03-26  9:48 ` [PATCH v3 2/4] sched: factor out per-vcpu affinity related code to common header file Justin T. Weaver
2015-04-23 15:22   ` Dario Faggioli
2015-03-26  9:48 ` [PATCH v3 3/4] sched: credit2: indent code sections to make review of patch 4/4 easier Justin T. Weaver
2015-04-23 15:35   ` Dario Faggioli
2015-03-26  9:48 ` [PATCH v3 4/4] sched: credit2: consider per-vcpu soft affinity Justin T. Weaver
2015-03-31 17:38   ` George Dunlap
2015-04-20 15:38   ` George Dunlap
2015-04-22 16:16   ` George Dunlap
2015-09-17 14:27 ` Dario Faggioli [this message]
2015-09-17 15:15   ` [PATCH v3 0/4] sched: credit2: introduce per-vcpu hard and " Dario Faggioli
