public inbox for linux-ia64@vger.kernel.org
From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
To: linux-ia64@vger.kernel.org
Subject: RE: ia64 get_mmu_context patch
Date: Fri, 28 Oct 2005 17:56:40 +0000	[thread overview]
Message-ID: <200510281756.j9SHueg23170@unix-os.sc.intel.com> (raw)
In-Reply-To: <200510271728.j9RHScS0002221922@kitche.zk3.dec.com>

Peter Keilty wrote on Friday, October 28, 2005 7:50 AM
> The original code did use the full range,

Yes and no.  The first call to wrap_mmu_context() occurs when
ia64_ctx.next equals 2^15 (32768); the second occurs at 2^21
(2097152).  The first one is needlessly early.  One can say it
uses the full range, but what I'm really after is the number
of ctx_id allocations between global tlb flushes.  The kernel
did a global flush before the entire 2M ctx_id space was used.
That is not a "full range" (as in using up all ctx_ids before
a global tlb flush).


> but once wrapping
> occurred, yes, ranging was used by setting the limit.  The ranging
> did go out to the max_limit on follow-on calls, but the
> range size could be small, causing more calls to wrap_mmu_context.

Exactly.  ia64_ctx.next can only increment, while ia64_ctx.limit
can only move down.  The code is effectively find_next_hole(), which
isn't equivalent to find_largest_hole().  It has a pathological
worst case in which wrap_mmu_context() is called after only one
ctx_id allocation.  Worse, since the next/limit pair cannot cross
the wrap-around point, when next approaches the end of the ctx_id
space, the range it finds is much smaller at that instant (though
that should only occur once every 2M ctx_id allocations).


> > Was the lock contention because of much more frequent 
> > wrap_mmu_context?
> 
> Indirectly.  The real reason was the time spent walking the task list,
> dereferencing pointers (tens of thousands of processes), trying to find
> an unused rid.

What is the average number of ctx_id allocations between
wrap_mmu_context() calls?  That would tell us how efficient the
current find_next_hole() approach is.

- Ken



Thread overview: 11+ messages
2005-10-27 17:28 ia64 get_mmu_context patch Peter Keilty
2005-10-28  2:54 ` Chen, Kenneth W
2005-10-28  3:09 ` Chen, Kenneth W
2005-10-28  3:23 ` Chen, Kenneth W
2005-10-28 14:49 ` Peter Keilty
2005-10-28 14:50 ` Peter Keilty
2005-10-28 17:56 ` Chen, Kenneth W [this message]
2005-10-28 17:59 ` Chen, Kenneth W
2005-10-28 18:06 ` Chen, Kenneth W
2005-10-28 18:40 ` Chen, Kenneth W
2005-10-28 18:49 ` Peter Keilty
