From: George Dunlap <george.dunlap@citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>,
xen-devel@lists.xenproject.org
Cc: Anshul Makkar <anshulmakkar@gmail.com>
Subject: Re: [PATCH 3/6] xen: credit: rearrange members of control structures
Date: Fri, 21 Jul 2017 18:02:30 +0100 [thread overview]
Message-ID: <bacb6ad8-e10b-103e-f9df-c3a59a74b65e@citrix.com> (raw)
In-Reply-To: <149821530581.5914.13068641070748575404.stgit@Solace>
On 06/23/2017 11:55 AM, Dario Faggioli wrote:
> The aim is to improve memory footprint and layout,
> while at the same time trying to make related fields
> reside in the same cacheline.
>
> Here's a summary of the output of `pahole`, with and
> without this patch, for the affected data structures.
>
> csched_pcpu:
> * Before:
> size: 88, cachelines: 2, members: 6
> sum members: 80, holes: 1, sum holes: 4
> padding: 4
> paddings: 1, sum paddings: 5
> last cacheline: 24 bytes
> * After:
> size: 80, cachelines: 2, members: 6
> paddings: 1, sum paddings: 5
> last cacheline: 16 bytes
>
> csched_vcpu:
> * Before:
> size: 72, cachelines: 2, members: 9
> padding: 2
> last cacheline: 8 bytes
> * After:
> same numbers, but some fields are moved so that
> related fields share the same cache line.
>
> csched_private:
> * Before:
> size: 152, cachelines: 3, members: 17
> sum members: 140, holes: 2, sum holes: 8
> padding: 4
> paddings: 1, sum paddings: 5
> last cacheline: 24 bytes
> * After:
> same numbers, but some fields are moved so that
> related fields share the same cache line.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Anshul Makkar <anshulmakkar@gmail.com>
> ---
> xen/common/sched_credit.c | 41 ++++++++++++++++++++++++++---------------
> 1 file changed, 26 insertions(+), 15 deletions(-)
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index efdf6bf..4f6330e 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -169,10 +169,12 @@ integer_param("sched_credit_tslice_ms", sched_credit_tslice_ms);
> struct csched_pcpu {
> struct list_head runq;
> uint32_t runq_sort_last;
> - struct timer ticker;
> - unsigned int tick;
> +
> unsigned int idle_bias;
> unsigned int nr_runnable;
> +
> + unsigned int tick;
> + struct timer ticker;
> };
>
> /*
> @@ -181,13 +183,18 @@ struct csched_pcpu {
> struct csched_vcpu {
> struct list_head runq_elem;
> struct list_head active_vcpu_elem;
> +
> + /* Up-pointers */
> struct csched_dom *sdom;
> struct vcpu *vcpu;
> - atomic_t credit;
> - unsigned int residual;
> +
> s_time_t start_time; /* When we were scheduled (used for credit) */
> unsigned flags;
> - int16_t pri;
> + int pri;
> +
> + atomic_t credit;
> + unsigned int residual;
> +
> #ifdef CSCHED_STATS
> struct {
> int credit_last;
> @@ -219,21 +226,25 @@ struct csched_dom {
> struct csched_private {
> /* lock for the whole pluggable scheduler, nests inside cpupool_lock */
> spinlock_t lock;
> - struct list_head active_sdom;
> - uint32_t ncpus;
> - struct timer master_ticker;
> - unsigned int master;
> +
> cpumask_var_t idlers;
> cpumask_var_t cpus;
> + uint32_t *balance_bias;
> + uint32_t runq_sort;
> + unsigned int ratelimit_us;
> +
> + /* Period of master and tick in milliseconds */
> + unsigned int tslice_ms, tick_period_us, ticks_per_tslice;
> + uint32_t ncpus;
> +
> + struct list_head active_sdom;
> uint32_t weight;
> uint32_t credit;
> int credit_balance;
> - uint32_t runq_sort;
> - uint32_t *balance_bias;
> - unsigned ratelimit_us;
> - /* Period of master and tick in milliseconds */
> - unsigned tslice_ms, tick_period_us, ticks_per_tslice;
> - unsigned credits_per_tslice;
> + unsigned int credits_per_tslice;
> +
> + unsigned int master;
> + struct timer master_ticker;
> };
>
> static void csched_tick(void *_cpu);
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Thread overview: 18+ messages
2017-06-23 10:54 [PATCH 0/6] xen: sched: control structure memory layout optimizations Dario Faggioli
2017-06-23 10:54 ` [PATCH 1/6] xen: credit2: allocate runqueue data structure dynamically Dario Faggioli
2017-07-21 16:50 ` George Dunlap
2017-06-23 10:54 ` [PATCH 2/6] xen: credit2: make the cpu to runqueue map per-cpu Dario Faggioli
2017-07-21 16:56 ` George Dunlap
2017-06-23 10:55 ` [PATCH 3/6] xen: credit: rearrange members of control structures Dario Faggioli
2017-07-21 17:02 ` George Dunlap [this message]
2017-06-23 10:55 ` [PATCH 4/6] xen: credit2: " Dario Faggioli
2017-07-21 17:05 ` George Dunlap
2017-07-21 19:53 ` Dario Faggioli
2017-06-23 10:55 ` [PATCH 5/6] xen: RTDS: " Dario Faggioli
2017-07-21 17:06 ` George Dunlap
2017-07-21 17:51 ` Meng Xu
2017-07-21 19:51 ` Dario Faggioli
2017-06-23 10:55 ` [PATCH 6/6] xen: sched: optimize exclusive pinning case (Credit1 & 2) Dario Faggioli
2017-07-21 17:19 ` George Dunlap
2017-07-21 19:55 ` Dario Faggioli
2017-07-21 20:30 ` George Dunlap