linuxppc-dev.lists.ozlabs.org archive mirror
From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: Tsutomu OWA <tsutomu.owa@toshiba.co.jp>
Cc: linuxppc-dev@ozlabs.org, Thomas Gleixner <tglx@linutronix.de>,
	mingo@elte.hu, Arnd Bergmann <arnd@arndb.de>,
	linux-kernel@vger.kernel.org
Subject: Re: [patch 4/4] powerpc 2.6.21-rt1: reduce scheduling latency by changing tlb flush size
Date: Tue, 15 May 2007 18:40:37 +1000	[thread overview]
Message-ID: <1179218437.32247.180.camel@localhost.localdomain> (raw)
In-Reply-To: <yyimz065x1a.wl@toshiba.co.jp>


> >                                                                Have you measured
> > the time it takes? We might want to modulate the amount based on whether we
> > are using native hash tables or a hypervisor.
> 
>   Yes, here is the trace log.  According to it, flushing 9 entries takes
> about 50us, i.e. roughly 5.5us per entry, so flushing a full batch of
> 192 (PPC64_TLB_BATCH_NR) entries would take about 1ms.
> Please note that tracing itself adds *roughly* 20-30% overhead, though.
> 
>   I'm afraid I don't have numbers for the native version, but I suppose you have :)

Actually I don't offhand, but I'll try to get some.
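
A minimal sketch of the modulation idea, assuming the kernel's
firmware_has_feature()/FW_FEATURE_LPAR test is usable on this platform;
tlb_batch_limit() is a hypothetical helper name, not code from the patch:

	/*
	 * Hypothetical: cap the TLB flush batch when every invalidation
	 * is a hypervisor call, as in the Beat trace below where each
	 * entry costs ~5-6us; native hash flushes are cheap enough to
	 * keep the full batch.
	 */
	#include <asm/firmware.h>	/* firmware_has_feature(), FW_FEATURE_LPAR */
	#include <asm/tlbflush.h>	/* PPC64_TLB_BATCH_NR (192) */

	static inline unsigned long tlb_batch_limit(void)
	{
		if (!firmware_has_feature(FW_FEATURE_LPAR))
			return PPC64_TLB_BATCH_NR;

		/* Under a hypervisor, ~8 entries keeps one flush near 50us. */
		return 8;
	}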

> # cat /proc/latency_trace
> preemption latency trace v1.1.5 on 2.6.21-rt1
> --------------------------------------------------------------------
>  latency: 60 us, #53/53, CPU#0 | (M:rt VP:0, KP:0, SP:1 HP:1 #P:1)
>     -----------------
>     | task: inetd-358 (uid:0 nice:0 policy:0 rt_prio:0)
>     -----------------
>  => started at: .__switch_to+0xa8/0x178 <c00000000000feec>
>  => ended at:   .__start+0x4000000000000000/0x8 <00000000>
> 
>                  _------=> CPU#            
>                 / _-----=> irqs-off        
>                | / _----=> need-resched    
>                || / _---=> hardirq/softirq 
>                ||| / _--=> preempt-depth   
>                |||| /                      
>                |||||     delay             
>    cmd     pid ||||| time  |   caller      
>       \   /    |||||   \   |   /           
>    inetd-358   0D..2    0us : .user_trace_start (.__switch_to)
>    inetd-358   0D..2    0us : .rt_up (.user_trace_start)
>    inetd-358   0D..3    1us : .rt_mutex_unlock (.rt_up)
>    inetd-358   0D..3    2us : .__flush_tlb_pending (.__switch_to)
>    inetd-358   0D..4    3us : .flush_hash_range (.__flush_tlb_pending)
>    inetd-358   0D..4    3us : .flush_hash_page (.flush_hash_range)
>    inetd-358   0D..4    4us : .beat_lpar_hpte_invalidate (.flush_hash_page)
>    inetd-358   0D..4    5us : .__spin_lock_irqsave (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5    6us+: .beat_lpar_hpte_getword0 (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   13us : .__spin_unlock_irqrestore (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..4   13us : .flush_hash_page (.flush_hash_range)
>    inetd-358   0D..4   14us : .beat_lpar_hpte_invalidate (.flush_hash_page)
>    inetd-358   0D..4   14us : .__spin_lock_irqsave (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   15us+: .beat_lpar_hpte_getword0 (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   18us : .__spin_unlock_irqrestore (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..4   19us : .flush_hash_page (.flush_hash_range)
>    inetd-358   0D..4   20us : .beat_lpar_hpte_invalidate (.flush_hash_page)
>    inetd-358   0D..4   20us : .__spin_lock_irqsave (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   21us+: .beat_lpar_hpte_getword0 (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   24us : .__spin_unlock_irqrestore (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..4   25us : .flush_hash_page (.flush_hash_range)
>    inetd-358   0D..4   25us : .beat_lpar_hpte_invalidate (.flush_hash_page)
>    inetd-358   0D..4   26us : .__spin_lock_irqsave (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   26us+: .beat_lpar_hpte_getword0 (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   30us : .__spin_unlock_irqrestore (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..4   30us : .flush_hash_page (.flush_hash_range)
>    inetd-358   0D..4   31us : .beat_lpar_hpte_invalidate (.flush_hash_page)
>    inetd-358   0D..4   31us : .__spin_lock_irqsave (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   32us+: .beat_lpar_hpte_getword0 (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   36us : .__spin_unlock_irqrestore (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..4   36us : .flush_hash_page (.flush_hash_range)
>    inetd-358   0D..4   37us : .beat_lpar_hpte_invalidate (.flush_hash_page)
>    inetd-358   0D..4   37us : .__spin_lock_irqsave (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   38us+: .beat_lpar_hpte_getword0 (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   41us : .__spin_unlock_irqrestore (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..4   42us : .flush_hash_page (.flush_hash_range)
>    inetd-358   0D..4   42us : .beat_lpar_hpte_invalidate (.flush_hash_page)
>    inetd-358   0D..4   43us : .__spin_lock_irqsave (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   43us+: .beat_lpar_hpte_getword0 (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   47us : .__spin_unlock_irqrestore (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..4   48us : .flush_hash_page (.flush_hash_range)
>    inetd-358   0D..4   48us : .beat_lpar_hpte_invalidate (.flush_hash_page)
>    inetd-358   0D..4   49us : .__spin_lock_irqsave (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   49us+: .beat_lpar_hpte_getword0 (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   52us : .__spin_unlock_irqrestore (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..4   53us : .flush_hash_page (.flush_hash_range)
>    inetd-358   0D..4   54us : .beat_lpar_hpte_invalidate (.flush_hash_page)
>    inetd-358   0D..4   54us : .__spin_lock_irqsave (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   55us+: .beat_lpar_hpte_getword0 (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..5   58us : .__spin_unlock_irqrestore (.beat_lpar_hpte_invalidate)
>    inetd-358   0D..2   59us : .user_trace_stop (.__switch_to)
>    inetd-358   0D..2   60us : .user_trace_stop (.__switch_to)
> -- owa
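
For reference, the loop this trace exercises has roughly the
flush_hash_range() shape of that era. This is a sketch from memory, not
the actual Celleb code, and the ppc64_tlb_batch field names (index,
vaddr, pte, psize) are assumptions:

	/*
	 * __flush_tlb_pending() hands the per-CPU batch to
	 * flush_hash_range(), which invalidates one HPTE per iteration;
	 * under Beat each iteration is a hypervisor call.
	 */
	for (i = 0; i < batch->index; i++)	/* index <= PPC64_TLB_BATCH_NR */
		flush_hash_page(batch->vaddr[i], batch->pte[i],
				batch->psize, local);

At the ~5-6us per beat_lpar_hpte_invalidate() seen above, the 192-entry
worst case lines up with the ~1ms estimate quoted earlier.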


Thread overview: 16+ messages
2007-05-14  6:22 [patch 0/4] powerpc 2.6.21-rt1: fix a build breakage and some minor issues Tsutomu OWA
2007-05-14  6:26 ` [patch 1/4] powerpc 2.6.21-rt1: fix a build breakage by adding __raw_*_relax() macros Tsutomu OWA
2007-05-14  6:28 ` [patch 2/4] powerpc 2.6.21-rt1: convert spinlocks to raw ones for Celleb Tsutomu OWA
2007-05-14  6:29 ` [patch 3/4] powerpc 2.6.21-rt1: add a need_resched_delayed() check Tsutomu OWA
2007-05-14  6:30 ` [patch 0/4] powerpc 2.6.21-rt1: reduce scheduling latency by changing tlb flush size Tsutomu OWA
2007-05-14  6:38   ` [patch 4/4] " Tsutomu OWA
2007-05-14  6:51     ` Thomas Gleixner
2007-05-14  7:28       ` Tsutomu OWA
2007-05-14 14:40         ` Arnd Bergmann
2007-05-15  4:12           ` Tsutomu OWA
2007-05-15  7:34             ` Benjamin Herrenschmidt
2007-05-15  6:27           ` Tsutomu OWA
2007-05-15  7:38             ` Benjamin Herrenschmidt
2007-05-15  8:08               ` Tsutomu OWA
2007-05-15  8:40                 ` Benjamin Herrenschmidt [this message]
2007-05-15  7:23           ` Benjamin Herrenschmidt

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1179218437.32247.180.camel@localhost.localdomain \
    --to=benh@kernel.crashing.org \
    --cc=arnd@arndb.de \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linuxppc-dev@ozlabs.org \
    --cc=mingo@elte.hu \
    --cc=tglx@linutronix.de \
    --cc=tsutomu.owa@toshiba.co.jp \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox; see mirroring instructions for how to clone and
mirror all data and code used for this inbox, as well as URLs for NNTP
newsgroup(s).