From: "Alexander van Heukelum" <heukelum@fastmail.fm>
To: "Ingo Molnar" <mingo@elte.hu>
Cc: "Cyrill Gorcunov" <gorcunov@gmail.com>,
	"Alexander van Heukelum" <heukelum@mailshack.com>,
	"LKML" <linux-kernel@vger.kernel.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	lguest@ozlabs.org, jeremy@xensource.com,
	"Steven Rostedt" <srostedt@redhat.com>,
	"Mike Travis" <travis@sgi.com>,
	"Andi Kleen" <andi@firstfloor.org>
Subject: Re: [PATCH RFC/RFB] x86_64, i386: interrupt dispatch changes
Date: Tue, 04 Nov 2008 17:45:06 +0100	[thread overview]
Message-ID: <1225817106.2795.1282945873@webmail.messagingengine.com> (raw)
In-Reply-To: <20081104163636.GA20534@elte.hu>


On Tue, 4 Nov 2008 17:36:36 +0100, "Ingo Molnar" <mingo@elte.hu> said:
> 
> * Alexander van Heukelum <heukelum@fastmail.fm> wrote:
> 
> > I wonder how the time needed for reading the GDT segments balances 
> > against the time needed for the extra indirection of running the 
> > stubs. I'd be interested to know whether the difference can be 
> > measured with the current implementation. (I really need to hijack 
> > a machine to do some measurements; I hoped someone would do it 
> > before I got to it ;) )
> > 
> > Even if some CPUs have an internal optimization for the case where 
> > the gate segment is the same as the current one, I wonder whether 
> > it is really important... Interrupts that occur while the processor 
> > is running userspace already cause segment changes. Maybe those are 
> > more likely to be in cache.
> 
> there are three main factors:
> 
> - Same-value segment loads are optimized on most modern CPUs and can
>   give a few cycles (2-3) advantage. That might or might not apply to 
>   the microcode that does IRQ entry processing. (A cache miss will 
>   increase the cost much more but that is true in general as well)
> 
> - A second effect is the changed data structure layout: a more
>   compressed GDT entry (6 bytes) versus a more spread out (~7 bytes,
>   not aligned) interrupt trampoline. Note that the first one is data 
>   cache, the second one is instruction cache - the two have different 
>   sizes, different implementations and different hit/miss pressures. 
>   Generally the instruction cache is the more precious resource and we 
>   optimize for that first, for data cache second.
> 
> - A third effect is branch prediction: currently we are fanning 
>   out all the vectors into ~240 branches just to recover, in 
>   essence, a single constant. That is quite wasteful of instruction 
>   cache resources, because from the logic side it's a data constant, 
>   not a control flow difference. (We demultiplex that number into an 
>   interrupt handler later on, but the CPU has no knowledge of that 
>   relationship.)
> 
> ... all in all, the situation is complex enough on the CPU 
> architecture side to really necessitate a measurement in 
> practice, and that's why I have asked you to do them: the numbers need 
> to go hand in hand with the patch submission.
> 
> My estimation is that if we do it right, your approach will behave 
> better on modern CPUs (which is what matters most for such things), 
> especially on real workloads where there's a considerable 
> instruction-cache pressure. But it should be measured in any case.

Fully agreed. I will do some measurements in the near future, maybe
next week. At least no one has come up with an absolutely blocking
problem with this approach ;).

Greetings,
    Alexander

> 	Ingo
-- 
  Alexander van Heukelum
  heukelum@fastmail.fm

-- 
http://www.fastmail.fm - IMAP accessible web-mail


