From: Wei Wang <wei.wang2@amd.com>
To: Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com,
	JBeulich@suse.com, keir@xen.org
Cc: xen-devel@lists.xensource.com
Subject: [PATCH 0 of 6 V5] amd iommu: support ats/gpgpu passthru on iommuv2 systems
Date: Fri, 10 Feb 2012 16:07:05 +0100
Message-ID: <patchbomb.1328886425@gran.amd.com>

Hi,
This is patch set v5. It includes all pending patches needed to enable GPGPU
passthrough and heterogeneous computing in guest OSes. Essentially, this patch
set gives a guest VM the same capability to run OpenCL applications on AMD
platforms as a native OS. Upstream Linux 3.3-rc2 with the AMD IOMMUv2 kernel
driver has been tested as a guest OS, and since the last submission lots of
regression testing has been done to make sure this does not break
non-IOMMUv2 systems. Please review it; feedback is appreciated.

Many thanks,
Wei

For more details, please refer to the previous thread:
http://lists.xen.org/archives/html/xen-devel/2012-01/msg01646.html

For an overview of the design, please refer to
http://www.amd64.org/pub/iommuv2.png

======================================================================
Changes in v5:
* Drop patch 2, superseded by upstream c/s 24729:6f6a6d1d2fb6.

Changes in v4:
* Only the tools part is included in this version, since the hypervisor
  patches have already been committed.
* Rename the guest config option from "iommu = {0,1}" to "guest_iommu = {0,1}"
  (see the example config fragment below).
* Add a description to docs/man/xl.cfg.pod.5.
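
A minimal guest config fragment showing the option in use (a sketch only:
the PCI address below is a placeholder, and guest_iommu is the parameter
that patch 6 of this series introduces):

    # Enable the virtual IOMMU for this HVM guest.
    guest_iommu = 1
    # Pass a GPGPU device through to the guest (placeholder BDF).
    pci = [ '0000:01:00.0' ]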


Changes in v3:
* Use xenstore to receive the guest iommu configuration instead of adding a
  new field to hvm_info_table (see the xenstore read sketch after this list).
* Support PCI segments in the vBDF-to-mBDF binding (see the SBDF packing
  sketch below).
* Make the hypercalls visible to non-x86 platforms.
* A few code cleanups based on comments from Jan and Ian.
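
For illustration, a hedged sketch of reading such a configuration key with
libxenstore (the xenstore path here is a made-up example, not necessarily
the layout this series uses):

    /* Build with: gcc -o read_iommu read_iommu.c -lxenstore */
    #include <stdio.h>
    #include <stdlib.h>
    #include <xenstore.h>

    int main(void)
    {
        struct xs_handle *xs = xs_open(0);   /* connect to xenstored */
        unsigned int len;
        char *val;

        if (!xs)
            return 1;
        /* Hypothetical key; the actual path used by the series may differ. */
        val = xs_read(xs, XBT_NULL, "/local/domain/1/guest_iommu", &len);
        if (val) {
            printf("guest_iommu = %s\n", val);
            free(val);   /* xs_read returns malloc'd memory */
        }
        xs_close(xs);
        return 0;
    }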

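On the segment support: a segment-qualified BDF (SBDF) is conventionally
packed as 16 bits of segment, 8 of bus, 5 of device and 3 of function. A
sketch of that standard layout (not necessarily the exact encoding used in
patch 5):

    #include <stdint.h>

    /* Pack segment, bus, device and function into one 32-bit SBDF. */
    static inline uint32_t make_sbdf(uint16_t seg, uint8_t bus,
                                     uint8_t dev, uint8_t func)
    {
        return ((uint32_t)seg << 16) |
               ((uint32_t)bus << 8) |
               ((uint32_t)(dev & 0x1f) << 3) |
               (func & 0x7);
    }
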
Changes in v2:
* Do not use a linked list to access the guest iommu tables.
* Do not parse the iommu parameter in libxl_device_model_info again.
* Fix an incorrect logical calculation in patch 11.
* Fix the hypercall definitions for non-x86 systems.


Thread overview: 11+ messages
2012-02-10 15:07 Wei Wang [this message]
2012-02-10 15:07 ` [PATCH 1 of 6 V5] amd iommu: Add 2 hypercalls for libxc Wei Wang
2012-02-10 15:29   ` Jan Beulich
2012-02-10 15:42     ` Wei Wang
2012-02-10 15:07 ` [PATCH 2 of 6 V5] amd iommu: Add a hypercall for hvmloader Wei Wang
2012-02-10 15:07 ` [PATCH 3 of 6 V5] hvmloader: Build IVRS table Wei Wang
2012-02-10 15:07 ` [PATCH 4 of 6 V5] libxc: add wrappers for new hypercalls Wei Wang
2012-02-10 15:07 ` [PATCH 5 of 6 V5] libxl: bind virtual bdf to physical bdf after device assignment Wei Wang
2012-02-10 15:07 ` [PATCH 6 of 6 V5] libxl: Introduce a new guest config file parameter Wei Wang
2012-02-13 16:54 ` [PATCH 0 of 6 V5] amd iommu: support ats/gpgpu passthru on iommuv2 systems Ian Jackson
2012-02-15  9:49   ` Wei Wang
