From: Wei Wang <wei.wang2@amd.com>
To: Santosh Jodh <Santosh.Jodh@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
"Tim (Xen.org)" <tim@xen.org>,
"JBeulich@suse.com" <JBeulich@suse.com>,
"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [PATCH] Dump IOMMU p2m table
Date: Fri, 17 Aug 2012 16:43:34 +0200 [thread overview]
Message-ID: <502E5896.30606@amd.com> (raw)
In-Reply-To: <7914B38A4445B34AA16EB9F1352942F1012F0E39A007@SJCPMAILBOX01.citrite.net>
On 08/17/2012 04:31 PM, Santosh Jodh wrote:
>
>
>> -----Original Message-----
>> From: Wei Wang [mailto:wei.wang2@amd.com]
>> Sent: Friday, August 17, 2012 4:15 AM
>> To: Santosh Jodh
>> Cc: xen-devel@lists.xensource.com; xiantao.zhang@intel.com; Tim
>> (Xen.org); JBeulich@suse.com
>> Subject: Re: [PATCH] Dump IOMMU p2m table
>>
>> On 08/16/2012 06:36 PM, Santosh Jodh wrote:
>>> New key handler 'o' to dump the IOMMU p2m table for each domain.
>>> Skips dumping table for domain0.
>>> Intel and AMD specific iommu_ops handler for dumping p2m table.
>>>
>>> Incorporated feedback from Jan Beulich and Wei Wang.
>>> Fixed indent printing with %*s.
>>> Removed superfluous superpage and other attribute prints.
>>> Make next_level use consistent for AMD IOMMU dumps. Warn if found inconsistent.
>>>
>>> Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
>>>
>>> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/amd/pci_amd_iommu.c
>>> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c Wed Aug 15 09:41:21 2012 +0100
>>> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c Thu Aug 16 09:28:24 2012 -0700
>>> @@ -22,6 +22,7 @@
>>> #include <xen/pci.h>
>>> #include <xen/pci_regs.h>
>>> #include <xen/paging.h>
>>> +#include <xen/softirq.h>
>>> #include <asm/hvm/iommu.h>
>>> #include <asm/amd-iommu.h>
>>> #include <asm/hvm/svm/amd-iommu-proto.h>
>>> @@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
>>>
>>> #include <asm/io_apic.h>
>>>
>>> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
>>> +    paddr_t gpa, int indent)
>>> +{
>>> + paddr_t address;
>>> + void *table_vaddr, *pde;
>>> + paddr_t next_table_maddr;
>>> + int index, next_level, present;
>>> + u32 *entry;
>>> +
>>> + if ( level < 1 )
>>> + return;
>>> +
>>> + table_vaddr = __map_domain_page(pg);
>>> + if ( table_vaddr == NULL )
>>> + {
>>> + printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
>>> + page_to_maddr(pg));
>>> + return;
>>> + }
>>> +
>>> + for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
>>> + {
>>> + if ( !(index % 2) )
>>> + process_pending_softirqs();
>>> +
>>> + pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
>>> + next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
>>> + entry = (u32*)pde;
>>> +
>>> + present = get_field_from_reg_u32(entry[0],
>>> + IOMMU_PDE_PRESENT_MASK,
>>> + IOMMU_PDE_PRESENT_SHIFT);
>>> +
>>> + if ( !present )
>>> + continue;
>>> +
>>> + next_level = get_field_from_reg_u32(entry[0],
>>> +                                     IOMMU_PDE_NEXT_LEVEL_MASK,
>>> +                                     IOMMU_PDE_NEXT_LEVEL_SHIFT);
>>> +
>>> + if ( next_level != (level - 1) )
>>> + {
>>> + printk("IOMMU p2m table error. next_level = %d, expected %d\n",
>>> +        next_level, level - 1);
>>> +
>>> + continue;
>>> + }
>>
>> HI,
>>
>> This check is not proper for 2MB and 1GB pages. For example, if a guest
>> uses 4-level page tables, the next_level chain for a 2MB entry is
>> 3 (l4) -> 2 (l3) -> 0 (l2), because l2 entries become PTEs and PTE
>> entries have next_level = 0. I saw the following output for those pages:
>>
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>>
>> Thanks,
>> Wei
>
> How about changing the check to:
> if ( next_level && (next_level != (level - 1)) )
That should be good, since we don't use skip levels.
Thanks,
Wei
>
> Thanks,
> Santosh
>
Thread overview: 13+ messages
2012-08-16 16:36 [PATCH] Dump IOMMU p2m table Santosh Jodh
2012-08-17 9:50 ` Jan Beulich
2012-08-17 11:14 ` Wei Wang
2012-08-17 14:31 ` Santosh Jodh
2012-08-17 14:43 ` Wei Wang [this message]
-- strict thread matches above, loose matches on Subject: below --
2012-08-17 14:57 Santosh Jodh
2012-08-21 10:04 ` Wei Wang
2012-08-17 14:53 Santosh Jodh
2012-08-14 19:55 Santosh Jodh
2012-08-15 8:54 ` Jan Beulich
2012-08-15 10:39 ` Wei Wang
2012-08-16 16:27 ` Santosh Jodh
2012-08-17 9:21 ` Jan Beulich