From: Wei Wang <wei.wang2@amd.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: SherryHurwitz <sherry.hurwitz@amd.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Jeremy Fitzhardinge <jeremy@goop.org>,
stable@kernel.org,
"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
KonradRzeszutek Wilk <konrad.wilk@oracle.com>,
linux-pci@vger.kernel.org,
Jesse Barnes <jbarnes@virtuousgeek.org>,
ebiederm@xmission.com
Subject: Re: [PATCH] PCI/MSI: don't disable AMD IOMMU MSI on Xen dom0
Date: Thu, 21 Jun 2012 14:28:53 +0200 [thread overview]
Message-ID: <4FE31385.3060502@amd.com> (raw)
In-Reply-To: <4FE32A81020000780008B11B@nat28.tlf.novell.com>
On 06/21/2012 02:06 PM, Jan Beulich wrote:
>>>> On 21.06.12 at 13:21, Wei Wang <wei.wang2@amd.com> wrote:
>> On 06/21/2012 11:59 AM, Jan Beulich wrote:
>>>>>> On 14.06.12 at 17:15, Wei Wang <wei.wang2@amd.com> wrote:
>>>> On 14.06.2012 16:18, Jan Beulich wrote:
>>>>> Have you at all considered getting this fixed on the kernel side?
>>>>> As I don't have direct access to any AMD IOMMU capable
>>>>> system - can one, other than by enumerating the respective
>>>>> PCI IDs or reading ACPI tables, reasonably easily identify the
>>>>> devices in question (e.g. via vendor/class/sub-class or some
>>>>> such)? That might permit skipping those in the offending kernel
>>>>> code...
>>>>
>>>> AMD IOMMUs (both v1 and v2) use class ID 08 (System Base Peripheral)
>>>> and sub-class ID 06 (IOMMU). Combined with PCI_VENDOR_ID_AMD, this
>>>> should be enough to identify an AMD IOMMU device. I could send you a
>>>> kernel patch for review using this approach. I believe that fixing
>>>> this issue in 4.2, no matter how, is really important for the AMD IOMMU.
>>>
>>> As you didn't come forward with anything, here's my first
>>> take on this:
>>
>> Hi Jan
>> Thanks a lot for the patch. Actually I planned to send my version today,
>> which is based on 3.4 pv_ops but looks very similar to yours. So, Acked!
>>
>> I also evaluated the possibility of hiding the IOMMU device from dom0. I
>> think the change is not very large, at least for I/O-port based PCI config
>> access. A proof-of-concept patch is attached.
>
> This completely hides the device from Dom0, but only when
> config space is accessed via method 1. Did you not see my
> earlier patch doing this for MCFG as well (albeit only
> disallowing writes, so allowing the device to still be seen
> by Dom0)?
Could you please provide a particular c/s number? (I saw too many c/s
that might be related to this topic.) Then I can work out a patch to
support both i/o and mmcfg.
Sounds better to me... this still allows users to check the IOMMU status
from lspci.
> Whether completely hiding the device is actually okay I can't
> easily tell: Would IOMMUs always be either at func 0 of a single-
> function device, or at a non-zero func of a multi-function one? If
> not, other devices may get hidden implicitly.
The AMD IOMMU is an independent PCI-e endpoint, and this function will not
be used for any purpose other than containing an IOMMU. So I don't see how
the IOMMU would share a BDF value with other devices.
Thanks,
Wei
> Also I noticed just now that guest_io_read() wouldn't really
> behave correctly when pci_cfg_ok() returned false - it might
> pass back 0xff even for a multi-byte read. I'll send a fix shortly.
>
> Jan
>
>> --- a/xen/arch/x86/traps.c	Thu Jun 21 11:30:59 2012 +0200
>> +++ b/xen/arch/x86/traps.c	Thu Jun 21 13:19:02 2012 +0200
>> @@ -73,6 +73,7 @@
>>  #include <asm/hpet.h>
>>  #include <public/arch-x86/cpuid.h>
>>  #include <xsm/xsm.h>
>> +#include <asm/hvm/svm/amd-iommu-proto.h>
>>
>>  /*
>>   * opt_nmi: one of 'ignore', 'dom0', or 'fatal'.
>> @@ -1686,10 +1687,19 @@ static int pci_cfg_ok(struct domain *d,
>>  {
>>      uint32_t machine_bdf;
>>      uint16_t start, end;
>> +    struct amd_iommu *iommu;
>> +
>>      if (!IS_PRIV(d))
>>          return 0;
>>
>>      machine_bdf = (d->arch.pci_cf8 >> 8) & 0xFFFF;
>> +
>> +    for_each_amd_iommu ( iommu )
>> +    {
>> +        if ( machine_bdf == iommu->bdf )
>> +            return 0;
>> +    }
>> +
>>      start = d->arch.pci_cf8 & 0xFF;
>>      end = start + size - 1;
>>      if (xsm_pci_config_permission(d, machine_bdf, start, end, write))
>
>
>