xen-devel.lists.xenproject.org archive mirror
From: Sheng Yang <sheng@linux.intel.com>
To: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Don Dutile <ddutile@redhat.com>,
	Keir Fraser <keir.fraser@eu.citrix.com>,
	"Zhang, Xiantao" <xiantao.zhang@intel.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: MSI proposal and work transfer...(was: Re: [PATCH 0 of 5] PV on HVM Xen)
Date: Wed, 24 Mar 2010 16:19:09 +0800	[thread overview]
Message-ID: <201003241619.09964.sheng@linux.intel.com> (raw)
In-Reply-To: <4BA928FE.9000200@goop.org>

On Wednesday 24 March 2010 04:47:58 Jeremy Fitzhardinge wrote:
> On 03/21/2010 11:26 PM, Sheng Yang wrote:
> > On Saturday 20 March 2010 04:38:23 Jeremy Fitzhardinge wrote:
> >> On 03/17/2010 06:30 PM, Sheng Yang wrote:
> >>>> Xiantao has some interesting ideas for this.
> >>>
> >>> Xiantao and I have discussed this for a month... Basically we now
> >>> have two approaches, but we can't reach an agreement... I will work
> >>> on it after the current hybrid work settles down. Of course, we want
> >>> MSI support to benefit pv_ops dom0 as well as hybrid.
> >>
> >> Xiantao's proposal of a new top-level MSI API for the kernel looks
> >> pretty clean, and I think it has a reasonable chance of being accepted
> >> upstream.
> >>
> >> What's your proposal?
> >
> > My proposal does this at a lower level than Xiantao's, because I
> > don't think touching the PCI subsystem is a good idea for an upstream
> > check-in.
> >
> > We can take advantage of the fact that the MSI data/address format can
> > be defined by each architecture, and at the same time trap the accesses
> > in Xen: pass through most PCI configuration space accesses, but
> > intercept MSI data/address accesses, so that we can write the real
> > data to the hardware when the guest tries to write the Xen-specific
> > MSI data/address format.
> >
> > The hook point would be arch_setup_msi_irqs(), which creates the
> > vector and writes the x86 LAPIC-specific format to the MSI
> > data/address. This way, we can limit the impact to the x86 arch code.
> > We would encode the evtchn/PIRQ information there, so that we can set
> > up the mapping. The same approach works for both MSI and MSI-X, and
> > S3 wouldn't be an issue, since we trap the accesses.
> 
> I would be interested in seeing what the patches look like for this.
> 
> But to be quite honest, it could well be easier to introduce a new nice,
> clean, self-contained and consistent API at the appropriate level of
> abstraction rather than trying to shoe-horn one into the arch/x86
> layer.  It sounds like your proposal may well save some general kernel
> code changes, but at the expense of being quite complex under the covers.

I think the key to getting it checked in is a small footprint, with only
the necessary changes. The PCI spec is there; it defines what MSI and
MSI-X are and how we should handle them. An MSI hook is easy for Xen, but
not so easy for Linux upstream, I think.

Anyway, it's up to you...

-- 
regards
Yang, Sheng
 
> > Another thing: due to some other tasks assigned to me a few days ago,
> > I'm afraid I have to stop my work on the PV extension of HVM guests,
> > as well as the MSI work, which we considered part of the PV interrupt
> > delivery mechanism for hybrid. You know, it's really a hard decision
> > for me, but I have no choice...
> >
> > So I would like to hand over my current work to someone who is
> > interested in it. The next steps are fairly clear: we would add a PV
> > clocksource for HVM, as well as a PIRQ-mapped irqchip to speed up
> > interrupt delivery.
> >
> > Stefano, would you like to take over my work and continue it? I think
> > no one in the community is more familiar with these discussions and
> > this code than you. The final target is still upstream Linux, I hope...
> 
> That's unfortunate; things seem to have been progressing quite well, and
> I'd really like to get something ready to commit (and possibly upstream)
> soon.  Stefano, will you be able to finish things off?
> 
> Thanks,
>      J
> 

Thread overview: 47+ messages
2010-03-10 15:46 [PATCH 0 of 5] PV on HVM Xen Stefano Stabellini
2010-03-10 17:33 ` [Xen-devel] " Pasi Kärkkäinen
2010-03-10 17:55   ` Stefano Stabellini
2010-03-10 19:45     ` Jeremy Fitzhardinge
2010-03-12  3:23 ` Sheng Yang
2010-03-12 10:42   ` [Xen-devel] " Stefano Stabellini
2010-03-12 16:00     ` Stefano Stabellini
2010-03-15  4:05       ` Sheng Yang
2010-03-15  8:29         ` Sheng Yang
2010-03-15 12:22           ` Stefano Stabellini
2010-03-17  9:38             ` Sheng Yang
2010-03-17 14:18               ` Konrad Rzeszutek Wilk
2010-03-17 15:21               ` Stefano Stabellini
2010-03-17 16:13                 ` Jeremy Fitzhardinge
2010-03-18  1:30                   ` Sheng Yang
2010-03-19 20:38                     ` Jeremy Fitzhardinge
2010-03-22  6:26                       ` MSI proposal and work transfer...(was: Re: [PATCH 0 of 5] PV on HVM Xen) Sheng Yang
2010-03-23 20:47                         ` Jeremy Fitzhardinge
2010-03-24  8:19                           ` Sheng Yang [this message]
2010-03-23 23:16                         ` Stefano Stabellini
2010-03-24  8:25                           ` Sheng Yang
2010-03-18  2:19                 ` [PATCH 0 of 5] PV on HVM Xen Sheng Yang
2010-03-18 16:42                   ` Jeremy Fitzhardinge
2010-03-17 16:13               ` Jeremy Fitzhardinge
2010-03-15 12:28         ` Stefano Stabellini
2010-03-15 23:08           ` Jeremy Fitzhardinge
2010-03-15 23:24             ` Frank van der Linden
2010-03-16  0:32             ` Dan Magenheimer
2010-03-16  6:09               ` Sheng Yang
2010-03-16 16:46                 ` Dan Magenheimer
2010-03-16 11:07             ` Stefano Stabellini
2010-03-16 17:23               ` Jeremy Fitzhardinge
2010-03-16 17:32                 ` Stefano Stabellini
2010-03-16 17:41                   ` Jeremy Fitzhardinge
2010-03-16 18:06                     ` Stefano Stabellini
2010-03-16 18:26                       ` Jeremy Fitzhardinge
2010-03-16 18:37                         ` Stefano Stabellini
2010-03-17  8:51                           ` Sheng Yang
2010-03-17  9:18                             ` Sheng Yang
2010-03-17 15:17                               ` Stefano Stabellini
2010-03-17 18:20                                 ` Ian Campbell
2010-03-18  1:42                                   ` Sheng Yang
2010-03-18  1:35                                 ` Sheng Yang
2010-03-18 14:22                                   ` Stefano Stabellini
2010-03-18 16:50                                     ` Jeremy Fitzhardinge
2010-03-18 17:30                                 ` Jeremy Fitzhardinge
2010-03-12 21:53     ` [Xen-devel] " Jeremy Fitzhardinge
