public inbox for linux-ia64@vger.kernel.org
From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
To: linux-ia64@vger.kernel.org
Subject: RE: ia64 get_mmu_context patch
Date: Fri, 28 Oct 2005 02:54:34 +0000	[thread overview]
Message-ID: <200510280254.j9S2sYg12254@unix-os.sc.intel.com> (raw)
In-Reply-To: <200510271728.j9RHScS0002221922@kitche.zk3.dec.com>

Peter Keilty wrote on Thursday, October 27, 2005 10:28 AM
> Please find attached the IA64 context_id patch and supporting data
> for your review and consideration.
>  ...
> Lockstat Data:
> There are 4 sets of lockstat data, one each for loads of 40K,
> 30K, and 20K, plus 40K with the no-fork test.  The lockstat data
> shows that as loading increases, so does contention on the task
> lock in wrap_mmu_context, along with utilization of the ia64_ctx
> lock and the ia64_global_tlb_purge lock.


The current wrap_mmu_context implementation does not fully utilize
the rid space at the time of wrap.  It finds the first available
free range starting from ia64_ctx.next, which is presumably much
smaller than max_ctx.

Was the lock contention caused by much more frequent calls to
wrap_mmu_context?  Ideally, the kernel should wrap only once the
entire rid space is exhausted; the current wrap_mmu_context
implementation is suboptimal in that respect.


>  wrap_mmu_context (struct mm_struct *mm)
>  { ....
> @@ -52,28 +74,23 @@
>  	ia64_ctx.limit = max_ctx + 1;
>  
>  	/*
> -	 * Scan all the task's mm->context and set proper safe range
> +	 * Scan the ia64_ctx bitmap and set proper safe range
>  	 */
> +repeat:
> +	next_ctx = find_next_zero_bit(ia64_ctx.bitmap, ia64_ctx.limit, ia64_ctx.next);
> +	if (next_ctx >= ia64_ctx.limit) {
> +		smp_mb();
> +		ia64_ctx.next = 300;	/* skip daemons */
> +		goto repeat;
> +	}
> +	ia64_ctx.next = next_ctx;

I like the bitmap approach, but why is all this old range-finding
code still here?  You already have a full bitmap tracking used
ctx ids; one more bitmap can be added to track pending flushes.
At wrap time we can simply xor the two to recover the full set of
reusable rids.  With that, the kernel will wrap only when the
entire rid space is exhausted.  I will post a patch.

- Ken


Thread overview: 11+ messages
2005-10-27 17:28 ia64 get_mmu_context patch Peter Keilty
2005-10-28  2:54 ` Chen, Kenneth W [this message]
2005-10-28  3:09 ` Chen, Kenneth W
2005-10-28  3:23 ` Chen, Kenneth W
2005-10-28 14:49 ` Peter Keilty
2005-10-28 14:50 ` Peter Keilty
2005-10-28 17:56 ` Chen, Kenneth W
2005-10-28 17:59 ` Chen, Kenneth W
2005-10-28 18:06 ` Chen, Kenneth W
2005-10-28 18:40 ` Chen, Kenneth W
2005-10-28 18:49 ` Peter Keilty
