linux-kernel.vger.kernel.org archive mirror
From: Michael Wang <wangyun@linux.vnet.ibm.com>
To: Rakib Mullick <rakib.mullick@gmail.com>
Cc: mingo@kernel.org, peterz@infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] sched: update_top_cache_domain only at the times of building sched domain.
Date: Wed, 24 Jul 2013 11:26:33 +0800	[thread overview]
Message-ID: <51EF4969.4050807@linux.vnet.ibm.com> (raw)
In-Reply-To: <1374601332.9192.0.camel@localhost.localdomain>

Hi, Rakib

On 07/24/2013 01:42 AM, Rakib Mullick wrote:
> Currently, update_top_cache_domain() is called whenever a sched domain is built or destroyed. But the following
> callpath shows that both cases go through the same path, so the update_top_cache_domain() call can be skipped while
> destroying a sched domain and done only when building sched domains.
> 
> 	partition_sched_domains()
> 		detach_destroy_domain()
> 			cpu_attach_domain()
> 				update_top_cache_domain()

IMHO, cpu_attach_domain() and update_top_cache_domain() should be
paired; the patch below opens a window in which 'rq->sd == NULL' while
'sd_llc != NULL', doesn't it?

I don't think we have any guarantee that no one will use 'sd_llc'
before the domains are rebuilt correctly...

Furthermore, what happens if the old sd is freed after the next RCU
grace period while 'sd_llc' still holds a reference to it?

Thus I suggest we leave things untouched, since the benefit is too
small to be worth the risk...

Regards,
Michael Wang

> 		build_sched_domains()
> 			cpu_attach_domain()
> 				update_top_cache_domain()
> 
> Changes since v1: use sd to determine when to skip, courtesy PeterZ
> 
> Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
> ---
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index b7c32cb..387fb66 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5138,7 +5138,8 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
>  	rcu_assign_pointer(rq->sd, sd);
>  	destroy_sched_domains(tmp, cpu);
> 
> -	update_top_cache_domain(cpu);
> +	if (sd)
> +		update_top_cache_domain(cpu);
>  }
> 
>  /* cpus with isolated domains */
> 


Thread overview: 8+ messages
2013-07-23 17:42 [PATCH v2] sched: update_top_cache_domain only at the times of building sched domain Rakib Mullick
2013-07-24  3:26 ` Michael Wang [this message]
2013-07-24  8:01   ` Rakib Mullick
2013-07-24  8:34     ` Michael Wang
2013-07-24 10:49       ` Peter Zijlstra
2013-07-25  2:49         ` Michael Wang
2013-07-24 13:57       ` Rakib Mullick
2013-07-25  3:15         ` Michael Wang
