From: Wei Huang <wei.huang2@amd.com>
To: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <JBeulich@suse.com>,
	"Zhang, Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: LWP Interrupt Handler
Date: Fri, 30 Mar 2012 15:11:49 -0500	[thread overview]
Message-ID: <4F761385.1020104@amd.com> (raw)
In-Reply-To: <CB9BD0EA.3D1CE%keir@xen.org>

Tested and it worked. Thanks,

-Wei

On 03/30/2012 03:06 PM, Keir Fraser wrote:
> On 30/03/2012 20:44, "Wei Huang" <wei.huang2@amd.com> wrote:
>
>> Thanks. I created the LWP interrupt patch based on your solution. It is
>> attached to this email. This patch was tested using Hans Rosenfeld's LWP
>> tree (http://www.amd64.org/gitweb/?p=linux/lwp.git;a=summary).
> Thanks, I've simplified the patch a bit and applied it as
> xen-unstable:25115. Please check it out.
>
>   K.
>
>> =================
>> AMD_LWP: add interrupt support for AMD LWP
>>
>> This patch adds interrupt support for AMD lightweight profiling. It
>> registers an interrupt handler using alloc_direct_apic_vector(). When
>> notified, SVM reinjects a virtual interrupt into the guest VM using
>> the guest's virtual local APIC.
>>
>> Signed-off-by: Wei Huang <wei.huang2@amd.com>
>> =================
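>>
>> For reference, the handler boils down to roughly the following sketch
>> (the guest_lwp_cfg field name is from my tree and may differ after your
>> simplification; I assume the guest's vector sits in bits 47:40 of its
>> LWP_CFG image, and vlapic_set_irq()/ack_APIC_irq() are the existing Xen
>> helpers):
>>
>>     static uint8_t lwp_intr_vector;
>>
>>     static void svm_lwp_interrupt(struct cpu_user_regs *regs)
>>     {
>>         struct vcpu *curr = current;
>>
>>         /* Acknowledge the interrupt on the host's local APIC. */
>>         ack_APIC_irq();
>>
>>         /* Reinject into the guest through its virtual local APIC,
>>          * using the vector the guest wrote into its LWP_CFG image. */
>>         vlapic_set_irq(vcpu_vlapic(curr),
>>                        (curr->arch.hvm_svm.guest_lwp_cfg >> 40) & 0xff,
>>                        0);
>>     }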
>>
>> -Wei
>>
>> On 03/30/2012 03:21 AM, Keir Fraser wrote:
>>> On 29/03/2012 23:54, "Wei Huang" <wei.huang2@amd.com> wrote:
>>>
>>>> How about the approach in the attached patches? An interrupt handler is
>>>> registered at vector 0xf6 as a delegate for the guest VM. A guest VCPU can
>>>> register an interrupt handler to receive notifications. I will prepare
>>>> formal patches after receiving your comments.
>>> Ugh, no, these ad-hoc dynamically-registered subhandlers are getting beyond
>>> the pale. Therefore, I've just spent half an hour routing all our interrupts,
>>> even the direct APIC ones, through do_IRQ(), and now allow everything to be
>>> dynamically allocated.
>>>
>>> Please pull up to xen-unstable tip (>= 25113) and use
>>> alloc_direct_apic_vector(). You can find a few callers of it that I added
>>> myself, to give you an idea of how to use it. In essence, you can call it
>>> from your SVM-specific code to allocate a vector for your IRQ handler. And you
>>> can do it immediately before you poke the vector into your special MSR. You
>>> can call alloc_direct_apic_vector() unconditionally -- it will deal with
>>> ensuring the allocation happens only once.
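>>>
>>> For example, the call site could look roughly like this (a sketch only;
>>> the MSR_AMD64_LWP_CFG name and the bits-47:40 vector field are
>>> assumptions here, not the final patch):
>>>
>>>     static uint8_t lwp_intr_vector;
>>>
>>>     /* Safe to call on every MSR write -- the vector is allocated only
>>>      * once, on the first call. */
>>>     alloc_direct_apic_vector(&lwp_intr_vector, svm_lwp_interrupt);
>>>
>>>     /* Poke the freshly allocated vector into the LWP_CFG value before
>>>      * writing the MSR. */
>>>     msr_content &= ~(0xffULL << 40);
>>>     msr_content |= (uint64_t)lwp_intr_vector << 40;
>>>     wrmsrl(MSR_AMD64_LWP_CFG, msr_content);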
>>>
>>>    -- Keir
>>>
>>>> Thanks,
>>>> -Wei
>>>>
>>>>
>>>> On 03/25/2012 07:08 PM, Zhang, Xiantao wrote:
>>>>> Please make sure the per-CPU vector is considered in your case. A CPU's
>>>>> built-in events always happen on all CPUs, but IRQ-based events may only
>>>>> happen on specific CPUs, as determined by the APIC's mode.
>>>>> Xiantao
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
>>>>>> bounces@lists.xen.org] On Behalf Of Keir Fraser
>>>>>> Sent: Saturday, March 24, 2012 6:50 AM
>>>>>> To: wei.huang2@amd.com; xen-devel@lists.xen.org; Jan Beulich
>>>>>> Subject: Re: [Xen-devel] LWP Interrupt Handler
>>>>>>
>>>>>> On 23/03/2012 22:03, "Wei Huang" <wei.huang2@amd.com> wrote:
>>>>>>
>>>>>>> I am adding interrupt support for LWP, whose spec is available at
>>>>>>> http://support.amd.com/us/Processor_TechDocs/43724.pdf. Basically, the OS
>>>>>>> can specify an interrupt vector in the LWP_CFG MSR; the interrupt is
>>>>>>> triggered when the event buffer overflows. For HVM guests, I want to
>>>>>>> re-inject this interrupt back into the guest VM. Here is one idea,
>>>>>>> similar to the virtualized PMU: it first registers a special interrupt
>>>>>>> handler (say on vector 0xf6) using set_intr_gate(). When triggered,
>>>>>>> this handler injects an IRQ (with the vector copied from LWP_CFG) into
>>>>>>> the guest VM via the virtual local APIC. This worked in my testing.
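>>>>>>>
>>>>>>> In sketch form (the stub name is a placeholder from my experiment,
>>>>>>> and the asm entry stub that set_intr_gate() expects is omitted):
>>>>>>>
>>>>>>>     /* Hard-coded registration: install the entry stub on a fixed,
>>>>>>>      * spare vector. The stub ends up in a C handler that forwards
>>>>>>>      * the interrupt to the guest via vlapic_set_irq(). */
>>>>>>>     set_intr_gate(0xf6, lwp_interrupt_stub);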
>>>>>>>
>>>>>>> But adding an interrupt handler seems to be overkill. Is there any
>>>>>>> better way to create a dummy interrupt receiver on behalf of guest
>>>>>>> VMs? I also looked into the IRQ and MSI solutions inside Xen, but most
>>>>>>> of them assume that interrupts come from a physical device (not the
>>>>>>> case for LWP, where the interrupt is initiated by the CPU itself), so
>>>>>>> they don't fit very well.
>>>>>> I think just allocating a vector is fine. If we get too many, we could
>>>>>> move to dynamically allocating them.
>>>>>>
>>>>>>     -- Keir
>>>>>>
>>>>>>> Thanks,
>>>>>>> -Wei
>>>>>>>
>>>>>>>
>>>>>> _______________________________________________
>>>>>> Xen-devel mailing list
>>>>>> Xen-devel@lists.xen.org
>>>>>> http://lists.xen.org/xen-devel
>>>
>
>
