* [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole.
2011-11-04 10:38 ` [PATCH 4/6] mm: Add new map space for add_to_physmap, XENMAPSPACE_gmfn_range Jean Guyader
@ 2011-11-04 10:38 ` Jean Guyader
2011-11-04 14:52 ` Keir Fraser
0 siblings, 1 reply; 34+ messages in thread
From: Jean Guyader @ 2011-11-04 10:38 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
[-- Attachment #1: Type: text/plain, Size: 715 bytes --]
Change the way we relocate memory pages when they overlap with the PCI hole:
use the new map space (XENMAPSPACE_gmfn_range) to move the loop into Xen.
This code usually gets triggered when a device is passed through to a guest
and the PCI hole has to be extended to have enough room to map the device BARs.
The PCI hole then starts lower and might overlap with some RAM that has been
allocated for the guest; that usually happens if the guest has more than 4G of RAM.
We have to relocate those pages to high memory, otherwise they won't be accessible.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
tools/firmware/hvmloader/pci.c | 20 ++++++++++++++------
1 files changed, 14 insertions(+), 6 deletions(-)
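In essence the patch collapses the page-by-page relocation loop into a single
hypercall. A condensed sketch of the idea, reusing the hvmloader variables that
appear in the patch below (hvm_info, pci_mem_start, hypercall_memory_op()); the
helper name is invented for illustration and this is not a drop-in replacement:

    /* Move every low-memory page that now sits inside the PCI hole above
     * 4GB with one XENMAPSPACE_gmfn_range call instead of one call per page. */
    static void relocate_ram_over_pci_hole(void)
    {
        unsigned long hole_start = pci_mem_start >> PAGE_SHIFT;
        struct xen_add_to_physmap xatp;

        if ( hole_start >= hvm_info->low_mem_pgend )
            return;                              /* no overlap, nothing to move */

        xatp.domid = DOMID_SELF;
        xatp.space = XENMAPSPACE_gmfn_range;
        xatp.idx   = hole_start;                 /* first overlapping low PFN */
        xatp.gpfn  = hvm_info->high_mem_pgend;   /* destination above 4GB */
        xatp.size  = hvm_info->low_mem_pgend - hole_start;

        if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
            BUG();

        hvm_info->high_mem_pgend += xatp.size;
        hvm_info->low_mem_pgend   = hole_start;
    }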
[-- Attachment #2: 0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch --]
[-- Type: text/x-patch; name="0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch", Size: 1576 bytes --]
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 29ec011..fb451ef 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -185,17 +185,25 @@ void pci_setup(void)
((pci_mem_start << 1) != 0) )
pci_mem_start <<= 1;
- while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+ if ((pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend)
{
struct xen_add_to_physmap xatp;
- if ( hvm_info->high_mem_pgend == 0 )
- hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
+ unsigned int size = hvm_info->low_mem_pgend - (pci_mem_start >> PAGE_SHIFT);
xatp.domid = DOMID_SELF;
- xatp.space = XENMAPSPACE_gmfn;
- xatp.idx = --hvm_info->low_mem_pgend;
- xatp.gpfn = hvm_info->high_mem_pgend++;
+ xatp.space = XENMAPSPACE_gmfn_range;
+ xatp.size = size;
+ xatp.idx = (pci_mem_start >> PAGE_SHIFT);
+ xatp.gpfn = hvm_info->high_mem_pgend;
+
+ printf("Relocate page for pci hole:\n");
if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
BUG();
+ printf(" - low_mem_pgend: %x -> %lx\n"
+ " - high_mem_pgend: %x -> %x\n",
+ hvm_info->low_mem_pgend, (pci_mem_start >> PAGE_SHIFT),
+ hvm_info->high_mem_pgend, hvm_info->high_mem_pgend + size);
+ hvm_info->low_mem_pgend = pci_mem_start >> PAGE_SHIFT;
+ hvm_info->high_mem_pgend += size;
}
mem_resource.base = pci_mem_start;
* Re: [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole.
2011-11-04 10:38 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
@ 2011-11-04 14:52 ` Keir Fraser
0 siblings, 0 replies; 34+ messages in thread
From: Keir Fraser @ 2011-11-04 14:52 UTC (permalink / raw)
To: Jean Guyader, xen-devel; +Cc: tim, allen.m.kay
On 04/11/2011 10:38, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
>
> Change the way we relocate memory pages when they overlap with the PCI hole:
> use the new map space (XENMAPSPACE_gmfn_range) to move the loop into Xen.
>
> This code usually gets triggered when a device is passed through to a guest
> and the PCI hole has to be extended to have enough room to map the device
> BARs.
> The PCI hole then starts lower and might overlap with some RAM that has been
> allocated for the guest; that usually happens if the guest has more than 4G
> of RAM.
> We have to relocate those pages to high memory, otherwise they won't be
> accessible.
If the size field in the add_to_physmap structure is reduced to 16 bits, the
new if() stmt will probably have to become a while() again.
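For example, the chunked version might look roughly like this (a sketch only,
using the hvmloader variables from the patch; 0xffff is the largest value a
16-bit .size field can carry):

    unsigned long pfn = pci_mem_start >> PAGE_SHIFT;

    while ( pfn < hvm_info->low_mem_pgend )
    {
        struct xen_add_to_physmap xatp;
        unsigned int chunk = hvm_info->low_mem_pgend - pfn;

        if ( chunk > 0xffff )                    /* cap at what 16 bits can hold */
            chunk = 0xffff;

        xatp.domid = DOMID_SELF;
        xatp.space = XENMAPSPACE_gmfn_range;
        xatp.idx   = pfn;                        /* source PFNs in low memory */
        xatp.gpfn  = hvm_info->high_mem_pgend;   /* destination above 4GB */
        xatp.size  = chunk;

        if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
            BUG();

        pfn += chunk;
        hvm_info->high_mem_pgend += chunk;
    }

    hvm_info->low_mem_pgend = pci_mem_start >> PAGE_SHIFT;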
The new printf's are just excess verbiage. We dump the memory map as we exit
hvmloader anyway.
-- Keir
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> ---
> tools/firmware/hvmloader/pci.c | 20 ++++++++++++++------
> 1 files changed, 14 insertions(+), 6 deletions(-)
>
* [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole.
2011-11-07 15:16 ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jean Guyader
@ 2011-11-07 15:16 ` Jean Guyader
0 siblings, 0 replies; 34+ messages in thread
From: Jean Guyader @ 2011-11-07 15:16 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, Jean Guyader
[-- Attachment #1: Type: text/plain, Size: 715 bytes --]
Change the way we relocate memory pages when they overlap with the PCI hole:
use the new map space (XENMAPSPACE_gmfn_range) to move the loop into Xen.
This code usually gets triggered when a device is passed through to a guest
and the PCI hole has to be extended to have enough room to map the device BARs.
The PCI hole then starts lower and might overlap with some RAM that has been
allocated for the guest; that usually happens if the guest has more than 4G of RAM.
We have to relocate those pages to high memory, otherwise they won't be accessible.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
tools/firmware/hvmloader/pci.c | 20 ++++++++++++++------
1 files changed, 14 insertions(+), 6 deletions(-)
[-- Attachment #2: 0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch --]
[-- Type: text/x-patch; name="0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch", Size: 1719 bytes --]
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 29ec011..3bd6ac5 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -50,6 +50,7 @@ void pci_setup(void)
uint32_t devfn, bar_reg, bar_sz;
} *bars = (struct bars *)scratch_start;
unsigned int i, nr_bars = 0;
+ unsigned long pci_mem_start_pg;
/* Program PCI-ISA bridge with appropriate link routes. */
isa_irq = 0;
@@ -185,17 +186,24 @@ void pci_setup(void)
((pci_mem_start << 1) != 0) )
pci_mem_start <<= 1;
- while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+ /* Relocate RAM that overlaps (in 64K chunks) */
+ pci_mem_start_pg = (pci_mem_start >> PAGE_SHIFT);
+ while (pci_mem_start_pg < hvm_info->low_mem_pgend)
{
struct xen_add_to_physmap xatp;
- if ( hvm_info->high_mem_pgend == 0 )
- hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
+ unsigned int size = hvm_info->low_mem_pgend - pci_mem_start_pg;
xatp.domid = DOMID_SELF;
- xatp.space = XENMAPSPACE_gmfn;
- xatp.idx = --hvm_info->low_mem_pgend;
- xatp.gpfn = hvm_info->high_mem_pgend++;
+ xatp.space = XENMAPSPACE_gmfn_range;
+ xatp.idx = pci_mem_start_pg;
+ xatp.gpfn = hvm_info->high_mem_pgend;
+ size = size > ((1 << 16) - 1) ? ((1 << 16) - 1) : size;
+ xatp.size = size;
+
if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
BUG();
+ pci_mem_start_pg += size;
+ hvm_info->high_mem_pgend += size;
+ hvm_info->low_mem_pgend = pci_mem_start_pg;
}
mem_resource.base = pci_mem_start;
* [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole.
2011-11-07 18:25 ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jean Guyader
@ 2011-11-07 18:25 ` Jean Guyader
0 siblings, 0 replies; 34+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
[-- Attachment #1: Type: text/plain, Size: 715 bytes --]
Change the way we relocate memory pages when they overlap with the PCI hole:
use the new map space (XENMAPSPACE_gmfn_range) to move the loop into Xen.
This code usually gets triggered when a device is passed through to a guest
and the PCI hole has to be extended to have enough room to map the device BARs.
The PCI hole then starts lower and might overlap with some RAM that has been
allocated for the guest; that usually happens if the guest has more than 4G of RAM.
We have to relocate those pages to high memory, otherwise they won't be accessible.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
tools/firmware/hvmloader/pci.c | 20 ++++++++++++++------
1 files changed, 14 insertions(+), 6 deletions(-)
[-- Attachment #2: 0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch --]
[-- Type: text/x-patch; name="0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch", Size: 1719 bytes --]
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 29ec011..3bd6ac5 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -50,6 +50,7 @@ void pci_setup(void)
uint32_t devfn, bar_reg, bar_sz;
} *bars = (struct bars *)scratch_start;
unsigned int i, nr_bars = 0;
+ unsigned long pci_mem_start_pg;
/* Program PCI-ISA bridge with appropriate link routes. */
isa_irq = 0;
@@ -185,17 +186,24 @@ void pci_setup(void)
((pci_mem_start << 1) != 0) )
pci_mem_start <<= 1;
- while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+ /* Relocate RAM that overlaps (in 64K chunks) */
+ pci_mem_start_pg = (pci_mem_start >> PAGE_SHIFT);
+ while (pci_mem_start_pg < hvm_info->low_mem_pgend)
{
struct xen_add_to_physmap xatp;
- if ( hvm_info->high_mem_pgend == 0 )
- hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
+ unsigned int size = hvm_info->low_mem_pgend - pci_mem_start_pg;
xatp.domid = DOMID_SELF;
- xatp.space = XENMAPSPACE_gmfn;
- xatp.idx = --hvm_info->low_mem_pgend;
- xatp.gpfn = hvm_info->high_mem_pgend++;
+ xatp.space = XENMAPSPACE_gmfn_range;
+ xatp.idx = pci_mem_start_pg;
+ xatp.gpfn = hvm_info->high_mem_pgend;
+ size = size > ((1 << 16) - 1) ? ((1 << 16) - 1) : size;
+ xatp.size = size;
+
if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
BUG();
+ pci_mem_start_pg += size;
+ hvm_info->high_mem_pgend += size;
+ hvm_info->low_mem_pgend = pci_mem_start_pg;
}
mem_resource.base = pci_mem_start;
* [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole.
2011-11-08 20:04 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
@ 2011-11-08 20:04 ` Jean Guyader
0 siblings, 0 replies; 34+ messages in thread
From: Jean Guyader @ 2011-11-08 20:04 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
[-- Attachment #1: Type: text/plain, Size: 715 bytes --]
Change the way we relocate memory pages when they overlap with the PCI hole:
use the new map space (XENMAPSPACE_gmfn_range) to move the loop into Xen.
This code usually gets triggered when a device is passed through to a guest
and the PCI hole has to be extended to have enough room to map the device BARs.
The PCI hole then starts lower and might overlap with some RAM that has been
allocated for the guest; that usually happens if the guest has more than 4G of RAM.
We have to relocate those pages to high memory, otherwise they won't be accessible.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
tools/firmware/hvmloader/pci.c | 20 ++++++++++++++------
1 files changed, 14 insertions(+), 6 deletions(-)
[-- Attachment #2: 0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch --]
[-- Type: text/x-patch; name="0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch", Size: 1719 bytes --]
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 29ec011..3bd6ac5 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -50,6 +50,7 @@ void pci_setup(void)
uint32_t devfn, bar_reg, bar_sz;
} *bars = (struct bars *)scratch_start;
unsigned int i, nr_bars = 0;
+ unsigned long pci_mem_start_pg;
/* Program PCI-ISA bridge with appropriate link routes. */
isa_irq = 0;
@@ -185,17 +186,24 @@ void pci_setup(void)
((pci_mem_start << 1) != 0) )
pci_mem_start <<= 1;
- while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+ /* Relocate RAM that overlaps (in 64K chunks) */
+ pci_mem_start_pg = (pci_mem_start >> PAGE_SHIFT);
+ while (pci_mem_start_pg < hvm_info->low_mem_pgend)
{
struct xen_add_to_physmap xatp;
- if ( hvm_info->high_mem_pgend == 0 )
- hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
+ unsigned int size = hvm_info->low_mem_pgend - pci_mem_start_pg;
xatp.domid = DOMID_SELF;
- xatp.space = XENMAPSPACE_gmfn;
- xatp.idx = --hvm_info->low_mem_pgend;
- xatp.gpfn = hvm_info->high_mem_pgend++;
+ xatp.space = XENMAPSPACE_gmfn_range;
+ xatp.idx = pci_mem_start_pg;
+ xatp.gpfn = hvm_info->high_mem_pgend;
+ size = size > ((1 << 16) - 1) ? ((1 << 16) - 1) : size;
+ xatp.size = size;
+
if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
BUG();
+ pci_mem_start_pg += size;
+ hvm_info->high_mem_pgend += size;
+ hvm_info->low_mem_pgend = pci_mem_start_pg;
}
mem_resource.base = pci_mem_start;
* [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole.
2011-11-10 8:44 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
@ 2011-11-10 8:44 ` Jean Guyader
0 siblings, 0 replies; 34+ messages in thread
From: Jean Guyader @ 2011-11-10 8:44 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
[-- Attachment #1: Type: text/plain, Size: 715 bytes --]
Change the way we relocate memory pages when they overlap with the PCI hole:
use the new map space (XENMAPSPACE_gmfn_range) to move the loop into Xen.
This code usually gets triggered when a device is passed through to a guest
and the PCI hole has to be extended to have enough room to map the device BARs.
The PCI hole then starts lower and might overlap with some RAM that has been
allocated for the guest; that usually happens if the guest has more than 4G of RAM.
We have to relocate those pages to high memory, otherwise they won't be accessible.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
tools/firmware/hvmloader/pci.c | 20 ++++++++++++++------
1 files changed, 14 insertions(+), 6 deletions(-)
[-- Attachment #2: 0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch --]
[-- Type: text/x-patch; name="0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch", Size: 1719 bytes --]
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 29ec011..3bd6ac5 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -50,6 +50,7 @@ void pci_setup(void)
uint32_t devfn, bar_reg, bar_sz;
} *bars = (struct bars *)scratch_start;
unsigned int i, nr_bars = 0;
+ unsigned long pci_mem_start_pg;
/* Program PCI-ISA bridge with appropriate link routes. */
isa_irq = 0;
@@ -185,17 +186,24 @@ void pci_setup(void)
((pci_mem_start << 1) != 0) )
pci_mem_start <<= 1;
- while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+ /* Relocate RAM that overlaps (in 64K chunks) */
+ pci_mem_start_pg = (pci_mem_start >> PAGE_SHIFT);
+ while (pci_mem_start_pg < hvm_info->low_mem_pgend)
{
struct xen_add_to_physmap xatp;
- if ( hvm_info->high_mem_pgend == 0 )
- hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
+ unsigned int size = hvm_info->low_mem_pgend - pci_mem_start_pg;
xatp.domid = DOMID_SELF;
- xatp.space = XENMAPSPACE_gmfn;
- xatp.idx = --hvm_info->low_mem_pgend;
- xatp.gpfn = hvm_info->high_mem_pgend++;
+ xatp.space = XENMAPSPACE_gmfn_range;
+ xatp.idx = pci_mem_start_pg;
+ xatp.gpfn = hvm_info->high_mem_pgend;
+ size = size > ((1 << 16) - 1) ? ((1 << 16) - 1) : size;
+ xatp.size = size;
+
if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
BUG();
+ pci_mem_start_pg += size;
+ hvm_info->high_mem_pgend += size;
+ hvm_info->low_mem_pgend = pci_mem_start_pg;
}
mem_resource.base = pci_mem_start;
* [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole
2011-11-13 17:40 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
@ 2011-11-13 17:40 ` Jean Guyader
0 siblings, 0 replies; 34+ messages in thread
From: Jean Guyader @ 2011-11-13 17:40 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
[-- Attachment #1: Type: text/plain, Size: 715 bytes --]
Change the way we relocate memory pages when they overlap with the PCI hole:
use the new map space (XENMAPSPACE_gmfn_range) to move the loop into Xen.
This code usually gets triggered when a device is passed through to a guest
and the PCI hole has to be extended to have enough room to map the device BARs.
The PCI hole then starts lower and might overlap with some RAM that has been
allocated for the guest; that usually happens if the guest has more than 4G of RAM.
We have to relocate those pages to high memory, otherwise they won't be accessible.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
tools/firmware/hvmloader/pci.c | 20 ++++++++++++++------
1 files changed, 14 insertions(+), 6 deletions(-)
[-- Attachment #2: 0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch --]
[-- Type: text/x-patch; name="0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch", Size: 1763 bytes --]
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 29ec011..62aeff2 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -50,6 +50,7 @@ void pci_setup(void)
uint32_t devfn, bar_reg, bar_sz;
} *bars = (struct bars *)scratch_start;
unsigned int i, nr_bars = 0;
+ unsigned long pci_mem_reloc_pg;
/* Program PCI-ISA bridge with appropriate link routes. */
isa_irq = 0;
@@ -185,18 +186,25 @@ void pci_setup(void)
((pci_mem_start << 1) != 0) )
pci_mem_start <<= 1;
- while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+ /* Relocate RAM that overlaps (in 64K chunks) */
+ pci_mem_reloc_pg = (pci_mem_start >> PAGE_SHIFT);
+ while (pci_mem_reloc_pg < hvm_info->low_mem_pgend)
{
struct xen_add_to_physmap xatp;
- if ( hvm_info->high_mem_pgend == 0 )
- hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
+ unsigned int size = hvm_info->low_mem_pgend - pci_mem_reloc_pg;
xatp.domid = DOMID_SELF;
- xatp.space = XENMAPSPACE_gmfn;
- xatp.idx = --hvm_info->low_mem_pgend;
- xatp.gpfn = hvm_info->high_mem_pgend++;
+ xatp.space = XENMAPSPACE_gmfn_range;
+ xatp.idx = pci_mem_reloc_pg;
+ xatp.gpfn = hvm_info->high_mem_pgend;
+ size = size > ((1 << 16) - 1) ? ((1 << 16) - 1) : size;
+ xatp.size = size;
+
if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
BUG();
+ pci_mem_reloc_pg += size;
+ hvm_info->high_mem_pgend += size;
}
+ hvm_info->low_mem_pgend = pci_mem_start >> PAGE_SHIFT;
mem_resource.base = pci_mem_start;
mem_resource.max = pci_mem_end;
* [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8)
@ 2011-11-16 19:25 Jean Guyader
2011-11-16 19:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-17 9:20 ` [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8) Keir Fraser
0 siblings, 2 replies; 34+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
[-- Attachment #1: Type: text/plain, Size: 2807 bytes --]
In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM.
Allen suggested that I go for Plan B, but after a discussion with Tim
we agreed that Plan B was way too disruptive in terms of code changes.
This patch series implements Plan A.
http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html
Changes between v7 and v8:
- Rebase on new add_to_physmap from Andres Lagar-Cavilla
- Only do copy_to_guest on the new XENMAPSPACE (XENMAPSPACE_gmfn_range).
- Convert the xatp argument to a pointer in xenmem_add_to_physmap.
- Modify xenmem_add_to_physmap so it doesn't touch the input (xatp is const now).
Changes between v6 and v7:
- xenmem_add_to_physmap_once can't take a pointer to xatp because
it modifies .idx and .gpfn.
- Cancel hypercall continuation in the compat layer if the copy_to_guest
failed.
Changes between v5 and v6:
- Rework the patch queue to make it more readable.
- Modify xatp in place in xenmem_add_to_physmap
- Only check for preemption if we are not at the last iteration
- Copy xatp back through the guest handle only in case of a continuation
- Add continuation only when dealing with the new xenmem space
(XENMAPSPACE_gmfn_range).
Changes between v4 and v5:
- Fix hypercall continuation for add_to_physmap in compat mode.
Changes between v3 and v4:
- Move the loop for gmfn_range from arch_memory_op to xenmem_add_to_physmap.
- Add a comment to explain the purpose of iommu_dont_flush_iotlb.
Changes between v2 and v3:
- Check for the presence iotlb_flush_all callback before calling it.
Changes between v1 and v2:
- Move size in struct xen_add_to_physmap in padding between .domid and .space.
- Store iommu_dont_flush per cpu
- Change the code in hvmloader to relocate in batches of 64K; .size is now 16 bits.
Jean Guyader (6):
vtd: Refactor iotlb flush code
iommu: Introduce iommu_flush and iommu_flush_all.
add_to_physmap: Move the code for XENMEM_add_to_physmap
mm: New XENMEM space, XENMAPSPACE_gmfn_range
hvmloader: Change memory relocation loop when overlap with PCI hole
Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary
iotlb flush
tools/firmware/hvmloader/pci.c | 20 +++-
xen/arch/x86/mm.c | 212 +++++++++++++++++++++++------------
xen/arch/x86/x86_64/compat/mm.c | 18 +++
xen/drivers/passthrough/iommu.c | 25 ++++
xen/drivers/passthrough/vtd/iommu.c | 100 ++++++++++-------
xen/include/public/memory.h | 4 +
xen/include/xen/iommu.h | 17 +++
7 files changed, 277 insertions(+), 119 deletions(-)
* [PATCH 1/6] vtd: Refactor iotlb flush code
2011-11-16 19:25 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8) Jean Guyader
@ 2011-11-16 19:25 ` Jean Guyader
2011-11-16 19:25 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-17 9:20 ` [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8) Keir Fraser
1 sibling, 1 reply; 34+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
[-- Attachment #1: Type: text/plain, Size: 279 bytes --]
Factor the iotlb flush code out of map_page and unmap_page into
its own function.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/vtd/iommu.c | 86 +++++++++++++++++-----------------
1 files changed, 43 insertions(+), 43 deletions(-)
[-- Attachment #2: 0001-vtd-Refactor-iotlb-flush-code.patch --]
[-- Type: text/x-patch; name="0001-vtd-Refactor-iotlb-flush-code.patch", Size: 4122 bytes --]
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d125b93..97c7ffa 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -579,16 +579,53 @@ static void iommu_flush_all(void)
}
}
+static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+ int dma_old_pte_present, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+ struct acpi_drhd_unit *drhd;
+ struct iommu *iommu;
+ int flush_dev_iotlb;
+ int iommu_domid;
+
+ /*
+ * No need pcideves_lock here because we have flush
+ * when assign/deassign device
+ */
+ for_each_drhd_unit ( drhd )
+ {
+ iommu = drhd->iommu;
+
+ if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+ continue;
+
+ flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+ iommu_domid= domain_iommu_domid(d, iommu);
+ if ( iommu_domid == -1 )
+ continue;
+
+ if ( page_count > 1 || gfn == -1 )
+ {
+ if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
+ 0, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ else
+ {
+ if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+ (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+ !dma_old_pte_present, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ }
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
struct hvm_iommu *hd = domain_hvm_iommu(domain);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL;
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
struct mapped_rmrr *mrmrr;
spin_lock(&hd->mapping_lock);
@@ -614,21 +651,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
spin_unlock(&hd->mapping_lock);
iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
- /* No need pcidevs_lock here since do that on assign/deassign device*/
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
- if ( test_bit(iommu->index, &hd->iommu_bitmap) )
- {
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(domain, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid, addr,
- 0, 0, flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
- }
+ __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
unmap_vtd_domain_page(page);
@@ -1678,12 +1701,8 @@ static int intel_iommu_map_page(
unsigned int flags)
{
struct hvm_iommu *hd = domain_hvm_iommu(d);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
/* Do nothing if VT-d shares EPT page table */
if ( iommu_use_hap_pt(d) )
@@ -1725,26 +1744,7 @@ static int intel_iommu_map_page(
spin_unlock(&hd->mapping_lock);
unmap_vtd_domain_page(page);
- /*
- * No need pcideves_lock here because we have flush
- * when assign/deassign device
- */
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
-
- if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
- continue;
-
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(d, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
- (paddr_t)gfn << PAGE_SHIFT_4K, 0,
- !dma_pte_present(old), flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
+ __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
return 0;
}
* [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all.
2011-11-16 19:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
@ 2011-11-16 19:25 ` Jean Guyader
2011-11-16 19:25 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
0 siblings, 1 reply; 34+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
[-- Attachment #1: Type: text/plain, Size: 286 bytes --]
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/iommu.c | 20 ++++++++++++++++++++
xen/drivers/passthrough/vtd/iommu.c | 12 ++++++++++++
xen/include/xen/iommu.h | 5 +++++
3 files changed, 37 insertions(+), 0 deletions(-)
[-- Attachment #2: 0002-iommu-Introduce-iommu_flush-and-iommu_flush_all.patch --]
[-- Type: text/x-patch; name="0002-iommu-Introduce-iommu_flush-and-iommu_flush_all.patch", Size: 2780 bytes --]
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index cd6174d..ca7b37b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -301,6 +301,26 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
return hd->platform_ops->unmap_page(d, gfn);
}
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
+ return;
+
+ hd->platform_ops->iotlb_flush(d, gfn, page_count);
+}
+
+void iommu_iotlb_flush_all(struct domain *d)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
+ return;
+
+ hd->platform_ops->iotlb_flush_all(d);
+}
+
/* caller should hold the pcidevs_lock */
int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
{
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 97c7ffa..f5d48ec 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -620,6 +620,16 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
}
}
+static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+}
+
+static void intel_iommu_iotlb_flush_all(struct domain *d)
+{
+ __intel_iommu_iotlb_flush(d, 0, 0, 0);
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
@@ -2331,6 +2341,8 @@ const struct iommu_ops intel_iommu_ops = {
.resume = vtd_resume,
.share_p2m = iommu_set_pgd,
.crash_shutdown = vtd_crash_shutdown,
+ .iotlb_flush = intel_iommu_iotlb_flush,
+ .iotlb_flush_all = intel_iommu_iotlb_flush_all,
};
/*
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 837e60d..a1034df 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -139,6 +139,8 @@ struct iommu_ops {
void (*resume)(void);
void (*share_p2m)(struct domain *d);
void (*crash_shutdown)(void);
+ void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+ void (*iotlb_flush_all)(struct domain *d);
};
void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
@@ -155,4 +157,7 @@ void iommu_share_p2m_table(struct domain *d);
int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
+void iommu_iotlb_flush_all(struct domain *d);
+
#endif /* _IOMMU_H_ */
* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-16 19:25 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-16 19:25 ` Jean Guyader
2011-11-16 19:25 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
2011-11-19 21:58 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Olaf Hering
0 siblings, 2 replies; 34+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
[-- Attachment #1: Type: text/plain, Size: 292 bytes --]
Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/arch/x86/mm.c | 161 ++++++++++++++++++++++++++++------------------------
1 files changed, 87 insertions(+), 74 deletions(-)
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7450 bytes --]
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a41a1d6..f093e93 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4677,37 +4677,18 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
return 0;
}
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d,
+ const struct xen_add_to_physmap *xatp)
{
struct page_info *page = NULL;
unsigned long gfn = 0; /* gcc ... */
+ unsigned long prev_mfn, mfn = 0, gpfn, idx;
int rc;
- switch ( op )
+ switch ( xatp->space )
{
- case XENMEM_add_to_physmap:
- {
- struct xen_add_to_physmap xatp;
- unsigned long prev_mfn, mfn = 0, gpfn;
- struct domain *d;
-
- if ( copy_from_guest(&xatp, arg, 1) )
- return -EFAULT;
-
- rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
- if ( rc != 0 )
- return rc;
-
- if ( xsm_add_to_physmap(current->domain, d) )
- {
- rcu_unlock_domain(d);
- return -EPERM;
- }
-
- switch ( xatp.space )
- {
case XENMAPSPACE_shared_info:
- if ( xatp.idx == 0 )
+ if ( xatp->idx == 0 )
mfn = virt_to_mfn(d->shared_info);
break;
case XENMAPSPACE_grant_table:
@@ -4716,21 +4697,22 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
if ( d->grant_table->gt_version == 0 )
d->grant_table->gt_version = 1;
+ idx = xatp->idx;
if ( d->grant_table->gt_version == 2 &&
- (xatp.idx & XENMAPIDX_grant_table_status) )
+ (xatp->idx & XENMAPIDX_grant_table_status) )
{
- xatp.idx &= ~XENMAPIDX_grant_table_status;
- if ( xatp.idx < nr_status_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+ idx &= ~XENMAPIDX_grant_table_status;
+ if ( xatp->idx < nr_status_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->status[idx]);
}
else
{
- if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
- (xatp.idx < max_nr_grant_frames) )
- gnttab_grow_table(d, xatp.idx + 1);
+ if ( (idx >= nr_grant_frames(d->grant_table)) &&
+ (idx < max_nr_grant_frames) )
+ gnttab_grow_table(d, idx + 1);
- if ( xatp.idx < nr_grant_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
+ if ( idx < nr_grant_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->shared_raw[idx]);
}
spin_unlock(&d->grant_table->lock);
@@ -4738,9 +4720,9 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
case XENMAPSPACE_gmfn:
{
p2m_type_t p2mt;
- gfn = xatp.idx;
+ gfn = xatp->idx;
- xatp.idx = mfn_x(get_gfn_unshare(d, xatp.idx, &p2mt));
+ idx = mfn_x(get_gfn_unshare(d, xatp->idx, &p2mt));
/* If the page is still shared, exit early */
if ( p2m_is_shared(p2mt) )
{
@@ -4748,58 +4730,89 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
rcu_unlock_domain(d);
return -ENOMEM;
}
- if ( !get_page_from_pagenr(xatp.idx, d) )
+ if ( !get_page_from_pagenr(idx, d) )
break;
- mfn = xatp.idx;
+ mfn = idx;
page = mfn_to_page(mfn);
break;
}
default:
break;
- }
-
- if ( !paging_mode_translate(d) || (mfn == 0) )
- {
- if ( page )
- put_page(page);
- if ( xatp.space == XENMAPSPACE_gmfn )
- put_gfn(d, gfn);
- rcu_unlock_domain(d);
- return -EINVAL;
- }
-
- domain_lock(d);
+ }
+ if ( !paging_mode_translate(d) || (mfn == 0) )
+ {
if ( page )
put_page(page);
+ if ( xatp->space == XENMAPSPACE_gmfn )
+ put_gfn(d, gfn);
+ rcu_unlock_domain(d);
+ return -EINVAL;
+ }
- /* Remove previously mapped page if it was present. */
- prev_mfn = get_gfn_untyped(d, xatp.gpfn);
- if ( mfn_valid(prev_mfn) )
- {
- if ( is_xen_heap_mfn(prev_mfn) )
- /* Xen heap frames are simply unhooked from this phys slot. */
- guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, PAGE_ORDER_4K);
- else
- /* Normal domain memory is freed, to avoid leaking memory. */
- guest_remove_page(d, xatp.gpfn);
- }
- /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
- put_gfn(d, xatp.gpfn);
+ domain_lock(d);
+
+ if ( page )
+ put_page(page);
+
+ /* Remove previously mapped page if it was present. */
+ prev_mfn = get_gfn_untyped(d, xatp->gpfn);
+ if ( mfn_valid(prev_mfn) )
+ {
+ if ( is_xen_heap_mfn(prev_mfn) )
+ /* Xen heap frames are simply unhooked from this phys slot. */
+ guest_physmap_remove_page(d, xatp->gpfn, prev_mfn, PAGE_ORDER_4K);
+ else
+ /* Normal domain memory is freed, to avoid leaking memory. */
+ guest_remove_page(d, xatp->gpfn);
+ }
+ /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
+ put_gfn(d, xatp->gpfn);
- /* Unmap from old location, if any. */
- gpfn = get_gpfn_from_mfn(mfn);
- ASSERT( gpfn != SHARED_M2P_ENTRY );
- if ( gpfn != INVALID_M2P_ENTRY )
- guest_physmap_remove_page(d, gpfn, mfn, PAGE_ORDER_4K);
+ /* Unmap from old location, if any. */
+ gpfn = get_gpfn_from_mfn(mfn);
+ ASSERT( gpfn != SHARED_M2P_ENTRY );
+ if ( gpfn != INVALID_M2P_ENTRY )
+ guest_physmap_remove_page(d, gpfn, mfn, PAGE_ORDER_4K);
- /* Map at new location. */
- rc = guest_physmap_add_page(d, xatp.gpfn, mfn, PAGE_ORDER_4K);
+ /* Map at new location. */
+ rc = guest_physmap_add_page(d, xatp->gpfn, mfn, PAGE_ORDER_4K);
- /* In the XENMAPSPACE_gmfn, we took a ref and locked the p2m at the top */
- if ( xatp.space == XENMAPSPACE_gmfn )
- put_gfn(d, gfn);
- domain_unlock(d);
+ /* In the XENMAPSPACE_gmfn, we took a ref and locked the p2m at the top */
+ if ( xatp->space == XENMAPSPACE_gmfn )
+ put_gfn(d, gfn);
+ domain_unlock(d);
+
+ rcu_unlock_domain(d);
+
+ return rc;
+}
+
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+ int rc;
+
+ switch ( op )
+ {
+ case XENMEM_add_to_physmap:
+ {
+ struct xen_add_to_physmap xatp;
+ struct domain *d;
+
+ if ( copy_from_guest(&xatp, arg, 1) )
+ return -EFAULT;
+
+ rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+ if ( rc != 0 )
+ return rc;
+
+ if ( xsm_add_to_physmap(current->domain, d) )
+ {
+ rcu_unlock_domain(d);
+ return -EPERM;
+ }
+
+ rc = xenmem_add_to_physmap(d, &xatp);
rcu_unlock_domain(d);
* [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range
2011-11-16 19:25 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
@ 2011-11-16 19:25 ` Jean Guyader
2011-11-16 19:25 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
2012-08-01 17:55 ` Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"? Stefano Stabellini
2011-11-19 21:58 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Olaf Hering
1 sibling, 2 replies; 34+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
[-- Attachment #1: Type: text/plain, Size: 571 bytes --]
XENMAPSPACE_gmfn_range is like XENMAPSPACE_gmfn but operates on
a range of pages. The size of the range is given by a new field.
This new field, .size, is placed in the 16 bits of padding between .domid
and .space in struct xen_add_to_physmap, so the layout stays compatible
with older versions.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/arch/x86/mm.c | 55 ++++++++++++++++++++++++++++++++++++---
xen/arch/x86/x86_64/compat/mm.c | 15 ++++++++++
xen/include/public/memory.h | 4 +++
3 files changed, 70 insertions(+), 4 deletions(-)
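For reference, a sketch of the resulting layout (field comments abridged; see
xen/include/public/memory.h for the authoritative definition). domid_t is 16
bits wide and .space is a 32-bit field, so the new 16-bit .size occupies what
used to be implicit padding and the offsets of the existing fields do not change:

    struct xen_add_to_physmap {
        /* Which domain to change the mapping for. */
        domid_t domid;

        /* Number of pages to go through for gmfn_range. */
        uint16_t size;

        /* Source mapping space (XENMAPSPACE_*). */
        unsigned int space;

        /* Index into source mapping space. */
        xen_ulong_t idx;

        /* GPFN where the source mapping page should appear. */
        xen_pfn_t gpfn;
    };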
[-- Attachment #2: 0004-mm-New-XENMEM-space-XENMAPSPACE_gmfn_range.patch --]
[-- Type: text/x-patch; name="0004-mm-New-XENMEM-space-XENMAPSPACE_gmfn_range.patch", Size: 4635 bytes --]
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f093e93..44a444e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4677,8 +4677,8 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
return 0;
}
-static int xenmem_add_to_physmap(struct domain *d,
- const struct xen_add_to_physmap *xatp)
+static int xenmem_add_to_physmap_once(struct domain *d,
+ const struct xen_add_to_physmap *xatp)
{
struct page_info *page = NULL;
unsigned long gfn = 0; /* gcc ... */
@@ -4717,6 +4717,7 @@ static int xenmem_add_to_physmap(struct domain *d,
spin_unlock(&d->grant_table->lock);
break;
+ case XENMAPSPACE_gmfn_range:
case XENMAPSPACE_gmfn:
{
p2m_type_t p2mt;
@@ -4744,7 +4745,8 @@ static int xenmem_add_to_physmap(struct domain *d,
{
if ( page )
put_page(page);
- if ( xatp->space == XENMAPSPACE_gmfn )
+ if ( xatp->space == XENMAPSPACE_gmfn ||
+ xatp->space == XENMAPSPACE_gmfn_range )
put_gfn(d, gfn);
rcu_unlock_domain(d);
return -EINVAL;
@@ -4779,7 +4781,8 @@ static int xenmem_add_to_physmap(struct domain *d,
rc = guest_physmap_add_page(d, xatp->gpfn, mfn, PAGE_ORDER_4K);
/* In the XENMAPSPACE_gmfn, we took a ref and locked the p2m at the top */
- if ( xatp->space == XENMAPSPACE_gmfn )
+ if ( xatp->space == XENMAPSPACE_gmfn ||
+ xatp->space == XENMAPSPACE_gmfn_range )
put_gfn(d, gfn);
domain_unlock(d);
@@ -4788,6 +4791,37 @@ static int xenmem_add_to_physmap(struct domain *d,
return rc;
}
+static int xenmem_add_to_physmap(struct domain *d,
+ struct xen_add_to_physmap *xatp)
+{
+ int rc = 0;
+
+ if ( xatp->space == XENMAPSPACE_gmfn_range )
+ {
+ while ( xatp->size > 0 )
+ {
+ rc = xenmem_add_to_physmap_once(d, xatp);
+ if ( rc < 0 )
+ return rc;
+
+ xatp->idx++;
+ xatp->gpfn++;
+ xatp->size--;
+
+ /* Check for continuation if it's not the last interation */
+ if ( xatp->size > 0 && hypercall_preempt_check() )
+ {
+ rc = -EAGAIN;
+ break;
+ }
+ }
+
+ return rc;
+ }
+
+ return xenmem_add_to_physmap_once(d, xatp);
+}
+
long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
{
int rc;
@@ -4816,6 +4850,19 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
rcu_unlock_domain(d);
+ if ( xatp.space == XENMAPSPACE_gmfn_range )
+ {
+ if ( rc )
+ {
+ if ( copy_to_guest(arg, &xatp, 1) )
+ return -EFAULT;
+ }
+
+ if ( rc == -EAGAIN )
+ rc = hypercall_create_continuation(
+ __HYPERVISOR_memory_op, "ih", op, arg);
+ }
+
return rc;
}
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 3ef08a5..bea94fe 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -64,6 +64,21 @@ int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
XLAT_add_to_physmap(nat, &cmp);
rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
+ if ( cmp.space == XENMAPSPACE_gmfn_range )
+ {
+ if ( rc )
+ {
+ XLAT_add_to_physmap(&cmp, nat);
+ if ( copy_to_guest(arg, &cmp, 1) )
+ {
+ hypercall_cancel_continuation();
+ return -EFAULT;
+ }
+ }
+ if ( rc == __HYPERVISOR_memory_op )
+ hypercall_xlat_continuation(NULL, 0x2, nat, arg);
+ }
+
break;
}
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 08355e3..c5b78a8 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -208,10 +208,14 @@ struct xen_add_to_physmap {
/* Which domain to change the mapping for. */
domid_t domid;
+ /* Number of pages to go through for gmfn_range */
+ uint16_t size;
+
/* Source mapping space. */
#define XENMAPSPACE_shared_info 0 /* shared info page */
#define XENMAPSPACE_grant_table 1 /* grant table page */
#define XENMAPSPACE_gmfn 2 /* GMFN */
+#define XENMAPSPACE_gmfn_range 3 /* GMFN range */
unsigned int space;
#define XENMAPIDX_grant_table_status 0x80000000
* [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole
2011-11-16 19:25 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
@ 2011-11-16 19:25 ` Jean Guyader
2011-11-16 19:25 ` [PATCH 6/6] Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush Jean Guyader
2012-08-01 17:55 ` Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"? Stefano Stabellini
1 sibling, 1 reply; 34+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
[-- Attachment #1: Type: text/plain, Size: 715 bytes --]
Change the way we relocate memory pages when they overlap with the PCI hole:
use the new map space (XENMAPSPACE_gmfn_range) to move the loop into Xen.
This code usually gets triggered when a device is passed through to a guest
and the PCI hole has to be extended to have enough room to map the device BARs.
The PCI hole then starts lower and might overlap with some RAM that has been
allocated for the guest; that usually happens if the guest has more than 4G of RAM.
We have to relocate those pages to high memory, otherwise they won't be accessible.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
tools/firmware/hvmloader/pci.c | 20 ++++++++++++++------
1 files changed, 14 insertions(+), 6 deletions(-)
[-- Attachment #2: 0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch --]
[-- Type: text/x-patch; name="0005-hvmloader-Change-memory-relocation-loop-when-overlap.patch", Size: 1763 bytes --]
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 29ec011..62aeff2 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -50,6 +50,7 @@ void pci_setup(void)
uint32_t devfn, bar_reg, bar_sz;
} *bars = (struct bars *)scratch_start;
unsigned int i, nr_bars = 0;
+ unsigned long pci_mem_reloc_pg;
/* Program PCI-ISA bridge with appropriate link routes. */
isa_irq = 0;
@@ -185,18 +186,25 @@ void pci_setup(void)
((pci_mem_start << 1) != 0) )
pci_mem_start <<= 1;
- while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+ /* Relocate RAM that overlaps (in 64K chunks) */
+ pci_mem_reloc_pg = (pci_mem_start >> PAGE_SHIFT);
+ while (pci_mem_reloc_pg < hvm_info->low_mem_pgend)
{
struct xen_add_to_physmap xatp;
- if ( hvm_info->high_mem_pgend == 0 )
- hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
+ unsigned int size = hvm_info->low_mem_pgend - pci_mem_reloc_pg;
xatp.domid = DOMID_SELF;
- xatp.space = XENMAPSPACE_gmfn;
- xatp.idx = --hvm_info->low_mem_pgend;
- xatp.gpfn = hvm_info->high_mem_pgend++;
+ xatp.space = XENMAPSPACE_gmfn_range;
+ xatp.idx = pci_mem_reloc_pg;
+ xatp.gpfn = hvm_info->high_mem_pgend;
+ size = size > ((1 << 16) - 1) ? ((1 << 16) - 1) : size;
+ xatp.size = size;
+
if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
BUG();
+ pci_mem_reloc_pg += size;
+ hvm_info->high_mem_pgend += size;
}
+ hvm_info->low_mem_pgend = pci_mem_start >> PAGE_SHIFT;
mem_resource.base = pci_mem_start;
mem_resource.max = pci_mem_end;
* [PATCH 6/6] Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush
2011-11-16 19:25 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
@ 2011-11-16 19:25 ` Jean Guyader
0 siblings, 0 replies; 34+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
[-- Attachment #1: Type: text/plain, Size: 461 bytes --]
Add a per-cpu flag that is checked by the low-level IOMMU code
to skip iotlb flushes; iommu_iotlb_flush must then be called explicitly.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/arch/x86/mm.c | 12 ++++++++++++
xen/drivers/passthrough/iommu.c | 5 +++++
xen/drivers/passthrough/vtd/iommu.c | 6 ++++--
xen/include/xen/iommu.h | 12 ++++++++++++
4 files changed, 33 insertions(+), 2 deletions(-)
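The calling pattern this flag enables, as a rough sketch (the map_batch()
helper is invented for illustration and is not part of the patch; error
handling omitted):

    /* Map a batch of pages with a single IOTLB flush at the end instead of
     * one flush per page. */
    static int map_batch(struct domain *d, unsigned long gfn,
                         unsigned long mfn, unsigned int nr, unsigned int flags)
    {
        unsigned int i;

        this_cpu(iommu_dont_flush_iotlb) = 1;    /* low-level code skips its flushes */

        for ( i = 0; i < nr; i++ )
            iommu_map_page(d, gfn + i, mfn + i, flags);

        this_cpu(iommu_dont_flush_iotlb) = 0;
        iommu_iotlb_flush(d, gfn, nr);           /* one explicit flush for the whole batch */

        return 0;
    }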
[-- Attachment #2: 0006-Introduce-per-cpu-flag-iommu_dont_flush_iotlb-to-avo.patch --]
[-- Type: text/x-patch; name="0006-Introduce-per-cpu-flag-iommu_dont_flush_iotlb-to-avo.patch", Size: 3849 bytes --]
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 44a444e..23afda9 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4794,10 +4794,15 @@ static int xenmem_add_to_physmap_once(struct domain *d,
static int xenmem_add_to_physmap(struct domain *d,
struct xen_add_to_physmap *xatp)
{
+ struct xen_add_to_physmap start_xatp;
int rc = 0;
if ( xatp->space == XENMAPSPACE_gmfn_range )
{
+ if ( need_iommu(d) )
+ this_cpu(iommu_dont_flush_iotlb) = 1;
+
+ start_xatp = *xatp;
while ( xatp->size > 0 )
{
rc = xenmem_add_to_physmap_once(d, xatp);
@@ -4816,6 +4821,13 @@ static int xenmem_add_to_physmap(struct domain *d,
}
}
+ if ( need_iommu(d) )
+ {
+ this_cpu(iommu_dont_flush_iotlb) = 0;
+ iommu_iotlb_flush(d, start_xatp.idx, start_xatp.size - xatp->size);
+ iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.size - xatp->size);
+ }
+
return rc;
}
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index ca7b37b..bacca11 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -52,6 +52,8 @@ bool_t __read_mostly iommu_hap_pt_share = 1;
bool_t __read_mostly iommu_debug;
bool_t __read_mostly amd_iommu_perdev_intremap;
+DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+
static void __init parse_iommu_param(char *s)
{
char *ss;
@@ -227,6 +229,7 @@ static int iommu_populate_page_table(struct domain *d)
spin_lock(&d->page_alloc_lock);
+ this_cpu(iommu_dont_flush_iotlb) = 1;
page_list_for_each ( page, &d->page_list )
{
if ( is_hvm_domain(d) ||
@@ -244,6 +247,8 @@ static int iommu_populate_page_table(struct domain *d)
}
}
}
+ this_cpu(iommu_dont_flush_iotlb) = 0;
+ iommu_iotlb_flush_all(d);
spin_unlock(&d->page_alloc_lock);
return 0;
}
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index f5d48ec..d66db18 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -661,7 +661,8 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
spin_unlock(&hd->mapping_lock);
iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
- __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
+ if ( !this_cpu(iommu_dont_flush_iotlb) )
+ __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
unmap_vtd_domain_page(page);
@@ -1754,7 +1755,8 @@ static int intel_iommu_map_page(
spin_unlock(&hd->mapping_lock);
unmap_vtd_domain_page(page);
- __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
+ if ( !this_cpu(iommu_dont_flush_iotlb) )
+ __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
return 0;
}
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index a1034df..6f7fbf7 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -160,4 +160,16 @@ int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
void iommu_iotlb_flush_all(struct domain *d);
+/*
+ * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
+ * avoid unecessary iotlb_flush in the low level IOMMU code.
+ *
+ * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
+ * this operation can be really expensive. This flag will be set by the
+ * caller to notify the low level IOMMU code to avoid the iotlb flushes.
+ * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by
+ * the caller.
+ */
+DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+
#endif /* _IOMMU_H_ */
* Re: [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8)
2011-11-16 19:25 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8) Jean Guyader
2011-11-16 19:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
@ 2011-11-17 9:20 ` Keir Fraser
2011-11-17 9:42 ` Jan Beulich
1 sibling, 1 reply; 34+ messages in thread
From: Keir Fraser @ 2011-11-17 9:20 UTC (permalink / raw)
To: Jean Guyader, xen-devel; +Cc: allen.m.kay, tim, JBeulich
On 16/11/2011 19:25, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
>
> In one of my previous emails I detailed a bug I was seeing when passing
> through an Intel GPU to a guest that has more than 4G of RAM.
>
> Allen suggested that I go for Plan B, but after a discussion with Tim
> we agreed that Plan B was way too disruptive in terms of code changes.
>
> This patch series implements Plan A.
I'll check these in when they get an Ack from Jan.
-- Keir
> http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td495
> 2866.html
>
> Changes between v7 and v8:
> - Rebase on new add_to_physmap from Andres Lagar-Cavilla
> - Only do copy_to_guest on the new XENMAPSPACE
> (XENMAPSPACE_gmfn_range).
> - Convert the xatp argument to a pointer in xenmem_add_to_physmap.
> - Modify xenmem_add_to_physmap so it doesn't touch the input (xatp is
> const now).
>
> Changes between v6 and v7:
> - xenmem_add_to_physmap_once can't take a pointer to xatp because
> it modifies .idx and .gpfn.
> - Cancel hypercall continuation in the compat layer if the
> copy_to_guest
> failed.
>
> Changes between v5 and v6:
> - Rework the patch queue to make it more readable.
> - Modify xatp in place in xenmem_add_to_physmap
> - Only check for preemption if we are not at the last iteration
> - Copy xatp back through the guest handle only in case of a continuation
> - Add continuation only when dealing with the new xenmem space
> (XENMAPSPACE_gmfn_range).
>
> Changes between v4 and v5:
> - Fix hypercall continuation for add_to_physmap in compat mode.
>
> Changes between v3 and v4:
> - Move the loop for gmfn_range from arch_memory_op to
> xenmem_add_to_physmap.
> - Add a comment to explain the purpose of
> iommu_dont_flush_iotlb.
>
> Changes between v2 and v3:
> - Check for the presence iotlb_flush_all callback before calling it.
>
> Changes between v1 and v2:
> - Move size in struct xen_add_to_physmap in padding between .domid and
> .space.
> - Store iommu_dont_flush per cpu
> - Change the code in hvmloader to relocate in batches of 64K; .size is now
> 16 bits.
>
> Jean Guyader (6):
> vtd: Refactor iotlb flush code
> iommu: Introduce iommu_flush and iommu_flush_all.
> add_to_physmap: Move the code for XENMEM_add_to_physmap
> mm: New XENMEM space, XENMAPSPACE_gmfn_range
> hvmloader: Change memory relocation loop when overlap with PCI hole
> Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary
> iotlb flush
>
> tools/firmware/hvmloader/pci.c | 20 +++-
> xen/arch/x86/mm.c | 212
> +++++++++++++++++++++++------------
> xen/arch/x86/x86_64/compat/mm.c | 18 +++
> xen/drivers/passthrough/iommu.c | 25 ++++
> xen/drivers/passthrough/vtd/iommu.c | 100 ++++++++++-------
> xen/include/public/memory.h | 4 +
> xen/include/xen/iommu.h | 17 +++
> 7 files changed, 277 insertions(+), 119 deletions(-)
>
* Re: [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8)
2011-11-17 9:20 ` [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8) Keir Fraser
@ 2011-11-17 9:42 ` Jan Beulich
0 siblings, 0 replies; 34+ messages in thread
From: Jan Beulich @ 2011-11-17 9:42 UTC (permalink / raw)
To: Jean Guyader, Keir Fraser; +Cc: tim, xen-devel, allen.m.kay
>>> On 17.11.11 at 10:20, Keir Fraser <keir@xen.org> wrote:
> On 16/11/2011 19:25, "Jean Guyader" <jean.guyader@eu.citrix.com> wrote:
>
>>
>> In one of my previous emails I detailed a bug I was seeing when passing
>> through an Intel GPU on a guest that has more than 4G of RAM.
>>
>> Allen suggested that I go for Plan B, but after a discussion with Tim
>> we agreed that Plan B was way too disruptive in terms of code change.
>>
>> This patch series implements Plan A.
>
> I'll check these in when they get an Ack from Jan.
While they now look fine to me, I didn't really plan on acking these; I
would have rather expected Tim to ack the mm parts, and I think
Allen already did so for the passthrough changes.
Jan
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-16 19:25 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-16 19:25 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
@ 2011-11-19 21:58 ` Olaf Hering
2011-11-19 22:14 ` Keir Fraser
1 sibling, 1 reply; 34+ messages in thread
From: Olaf Hering @ 2011-11-19 21:58 UTC (permalink / raw)
To: Jean Guyader; +Cc: allen.m.kay, keir, xen-devel, tim, JBeulich
On Wed, Nov 16, Jean Guyader wrote:
> Move the code for the XENMEM_add_to_physmap case into its own
> function (xenmem_add_to_physmap).
This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite failures.
(XEN) Assertion '!in_atomic()' failed at softirq.c:61
preempt_count is like fffffc52 or fffffc00 in my testing.
Olaf
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-19 21:58 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Olaf Hering
@ 2011-11-19 22:14 ` Keir Fraser
2011-11-19 22:37 ` Jean Guyader
2011-11-20 13:25 ` Olaf Hering
0 siblings, 2 replies; 34+ messages in thread
From: Keir Fraser @ 2011-11-19 22:14 UTC (permalink / raw)
To: Olaf Hering, Jean Guyader; +Cc: allen.m.kay, xen-devel, tim, JBeulich
On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
> On Wed, Nov 16, Jean Guyader wrote:
>
>> Move the code for the XENMEM_add_to_physmap case into its own
>> function (xenmem_add_to_physmap).
>
> This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
> failures.
> (XEN) Assertion '!in_atomic()' failed at softirq.c:61
>
> preempt_count is like fffffc52 or fffffc00 in my testing.
Thanks, hopefully fixed by c/s 24167.
-- keir
> Olaf
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-19 22:14 ` Keir Fraser
@ 2011-11-19 22:37 ` Jean Guyader
2011-11-20 13:25 ` Olaf Hering
1 sibling, 0 replies; 34+ messages in thread
From: Jean Guyader @ 2011-11-19 22:37 UTC (permalink / raw)
To: Keir Fraser
Cc: Olaf Hering, xen-devel, allen.m.kay, tim, Jean Guyader, JBeulich
On 19 November 2011 14:14, Keir Fraser <keir.xen@gmail.com> wrote:
> On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
>
>> On Wed, Nov 16, Jean Guyader wrote:
>>
>>> Move the code for the XENMEM_add_to_physmap case into its own
>>> function (xenmem_add_to_physmap).
>>
>> This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
>> failures.
>> (XEN) Assertion '!in_atomic()' failed at softirq.c:61
>>
>> preempt_count is like fffffc52 or fffffc00 in my testing.
>
> Thanks, hopefully fixed by c/s 24167.
>
Thanks, sorry about that.
Jean
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-19 22:14 ` Keir Fraser
2011-11-19 22:37 ` Jean Guyader
@ 2011-11-20 13:25 ` Olaf Hering
2011-11-20 19:59 ` Keir Fraser
1 sibling, 1 reply; 34+ messages in thread
From: Olaf Hering @ 2011-11-20 13:25 UTC (permalink / raw)
To: Keir Fraser; +Cc: allen.m.kay, xen-devel, tim, Jean Guyader, JBeulich
On Sat, Nov 19, Keir Fraser wrote:
> On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
>
> > On Wed, Nov 16, Jean Guyader wrote:
> >
> >> Move the code for the XENMEM_add_to_physmap case into its own
> >> function (xenmem_add_to_physmap).
> >
> > This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
> > failures.
> > (XEN) Assertion '!in_atomic()' failed at softirq.c:61
> >
> > preempt_count is like fffffc52 or fffffc00 in my testing.
>
> Thanks, hopefully fixed by c/s 24167.
Yes, the ASSERT does not trigger anymore.
The remaining issue is this:
Nov 20 06:21:11.744519 (XEN) hvm.c:2312:d1 guest attempted write to read-only memory page. gfn=0xc0, mfn=0x201979
See
http://www.chiark.greenend.org.uk/~xensrcts/logs/9893/test-amd64-i386-rhel6hvm-amd/serial-potato-beetle.log
Olaf
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-20 13:25 ` Olaf Hering
@ 2011-11-20 19:59 ` Keir Fraser
2011-11-21 8:39 ` Olaf Hering
0 siblings, 1 reply; 34+ messages in thread
From: Keir Fraser @ 2011-11-20 19:59 UTC (permalink / raw)
To: Olaf Hering; +Cc: allen.m.kay, xen-devel, tim, Jean Guyader, JBeulich
On 20/11/2011 13:25, "Olaf Hering" <olaf@aepfle.de> wrote:
> On Sat, Nov 19, Keir Fraser wrote:
>
>> On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
>>
>>> On Wed, Nov 16, Jean Guyader wrote:
>>>
>>>> Move the code for the XENMEM_add_to_physmap case into its own
>>>> function (xenmem_add_to_physmap).
>>>
>>> This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
>>> failures.
>>> (XEN) Assertion '!in_atomic()' failed at softirq.c:61
>>>
>>> preempt_count is like fffffc52 or fffffc00 in my testing.
>>
>> Thanks, hopefully fixed by c/s 24167.
>
> Yes, the ASSERT does not trigger anymore.
>
> The remaining issue is this:
>
> Nov 20 06:21:11.744519 (XEN) hvm.c:2312:d1 guest attempted write to read-only
> memory page. gfn=0xc0, mfn=0x201979
Is that new behaviour? It may be unrelated to whatever HVM test failure
we're seeing, or else be a mere symptom of a guest gone haywire for other
reasons (we write-protect that memory range because it is supposed to be
ROM).
-- Keir
> See
> http://www.chiark.greenend.org.uk/~xensrcts/logs/9893/test-amd64-i386-rhel6hvm-amd/serial-potato-beetle.log
>
> Olaf
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-20 19:59 ` Keir Fraser
@ 2011-11-21 8:39 ` Olaf Hering
0 siblings, 0 replies; 34+ messages in thread
From: Olaf Hering @ 2011-11-21 8:39 UTC (permalink / raw)
To: Keir Fraser; +Cc: allen.m.kay, xen-devel, tim, Jean Guyader, JBeulich
On Sun, Nov 20, Keir Fraser wrote:
> > Nov 20 06:21:11.744519 (XEN) hvm.c:2312:d1 guest attempted write to read-only
> > memory page. gfn=0xc0, mfn=0x201979
>
> Is that new behaviour? It may be unrelated to whatever HVM test failure
> we're seeing, or else be a mere symptom of a guest gone haywire for other
> reasons (we write-protect that memory range because it is supposed to be
> ROM).
The message does not trigger with changeset 24162, but it does with
24167.
Olaf
^ permalink raw reply [flat|nested] 34+ messages in thread
* Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2011-11-16 19:25 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
2011-11-16 19:25 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
@ 2012-08-01 17:55 ` Stefano Stabellini
2012-08-01 18:06 ` Keir Fraser
` (2 more replies)
1 sibling, 3 replies; 34+ messages in thread
From: Stefano Stabellini @ 2012-08-01 17:55 UTC (permalink / raw)
To: Jean Guyader
Cc: xen-devel@lists.xensource.com, Keir (Xen.org),
Konrad Rzeszutek Wilk, Tim (Xen.org), allen.m.kay@intel.com,
Jean Guyader, JBeulich@suse.com, Attilio Rao
On Wed, 16 Nov 2011, Jean Guyader wrote:
>
> XENMAPSPACE_gmfn_range is like XENMAPSPACE_gmfn but it runs on
> a range of pages. The size of the range is defined in a new field.
>
> This new field .size is located in the 16 bits padding between .domid
> and .space in struct xen_add_to_physmap to stay compatible with older
> versions.
>
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
Hi all,
I was reading more about this commit because this patch breaks the ABI
on ARM, when I realized that on x86 there is no standard that specifies
the alignment of fields in a struct.
As a consequence I don't think we can really be sure that between .domid
and .space we always have 16 bits of padding.
I am afraid that if a user compiles Linux or another guest kernel with a
compiler other than gcc, this hypercall might break. In fact it already
happened just switching from x86 to ARM.
Also, considering that the memory.h interface is supposed to be ANSI C,
isn't it wrong to assume compiler specific artifacts anyway?
Considering that we haven't made any releases yet with the change in
memory.h, shouldn't we revert the commit before it is too late?
- Stefano
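For reference, a sketch of the structure under discussion, reconstructed from the thread; the authoritative definition lives in xen/include/public/memory.h and may differ in exact types and comments:

    struct xen_add_to_physmap {
        /* Which domain to change the mapping for. */
        domid_t domid;      /* 16 bits */

        /* Number of pages to process; new with XENMAPSPACE_gmfn_range.
         * It occupies what, on x86, used to be 16 bits of padding. */
        uint16_t size;

        /* Source mapping space, e.g. XENMAPSPACE_gmfn or
         * XENMAPSPACE_gmfn_range. */
        unsigned int space; /* 4-byte aligned on x86, hence the old padding */

        /* Index into source mapping space. */
        unsigned long idx;

        /* GPFN where the source mapping page should appear. */
        xen_pfn_t gpfn;
    };

Stefano's question is whether every compiler, on every architecture, is guaranteed to have left those 16 bits of padding between .domid and .space in the pre-patch layout.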
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-01 17:55 ` Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"? Stefano Stabellini
@ 2012-08-01 18:06 ` Keir Fraser
2012-08-02 9:25 ` Jan Beulich
2012-08-01 18:08 ` Tim Deegan
2012-08-02 9:23 ` Jan Beulich
2 siblings, 1 reply; 34+ messages in thread
From: Keir Fraser @ 2012-08-01 18:06 UTC (permalink / raw)
To: Stefano Stabellini, Jean Guyader
Cc: xen-devel@lists.xensource.com, Konrad Rzeszutek Wilk,
Tim (Xen.org), allen.m.kay@intel.com, Jean Guyader,
JBeulich@suse.com, Attilio Rao
On 01/08/2012 18:55, "Stefano Stabellini" <Stefano.Stabellini@eu.citrix.com>
wrote:
> On Wed, 16 Nov 2011, Jean Guyader wrote:
>>
>> XENMAPSPACE_gmfn_range is like XENMAPSPACE_gmfn but it runs on
>> a range of pages. The size of the range is defined in a new field.
>>
>> This new field .size is located in the 16 bits padding between .domid
>> and .space in struct xen_add_to_physmap to stay compatible with older
>> versions.
>>
>> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
>
> Hi all,
> I was reading more about this commit because this patch breaks the ABI
> on ARM, when I realized that on x86 there is no standard that specifies
> the alignment of fields in a struct.
> As a consequence I don't think we can really be sure that between .domid
> and .space we always have 16 bits of padding.
Well, on x86 there *is* 16 bits of padding between the .domid and .space
fields. That's our ABI. Regardless of whether we rely on not-really-existent
padding rules in the compiler. If someone compiles in an environment that
aligns stuff differently, we have to rewrite our headers. :)
We don't have a supported ABI on ARM until 4.2.0 is released at the
earliest.
Should we be changing our rules on public headers, allowing
compiler-specific extensions to precisely lay out our structures? Quite
possibly. We used to do that, but it got shot down by the ppc and ia64
arches, who wanted an easier life and to just rely on compiler default layout
for a particular platform. Of course those maintainers aren't actually
voting any more. ;)
-- Keir
> I am afraid that if a user compiles Linux or another guest kernel with a
> compiler other than gcc, this hypercall might break. In fact it already
> happened just switching from x86 to ARM.
> Also, considering that the memory.h interface is supposed to be ANSI C,
> isn't it wrong to assume compiler specific artifacts anyway?
> Considering that we haven't made any releases yet with the change in
> memory.h, shouldn't we revert the commit before it is too late?
>
> - Stefano
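As an illustration of the "compiler-specific extensions" Keir mentions (and which the Xen public headers deliberately do not use), a GCC-style attribute could pin the layout explicitly, at the cost of tying the header to compilers that understand the extension. The structure name and exact field types here are illustrative only:

    struct xen_add_to_physmap_pinned {
        uint16_t domid;
        uint16_t size;   /* explicit field where the padding would otherwise be */
        uint32_t space;
        uint64_t idx;
        uint64_t gpfn;
    } __attribute__((packed, aligned(8)));

As Keir recalls, this style was dropped from the public headers so that each platform could rely on its default layout rules and the headers could stay compiler-neutral.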
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-01 17:55 ` Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"? Stefano Stabellini
2012-08-01 18:06 ` Keir Fraser
@ 2012-08-01 18:08 ` Tim Deegan
2012-08-02 9:23 ` Jan Beulich
2 siblings, 0 replies; 34+ messages in thread
From: Tim Deegan @ 2012-08-01 18:08 UTC (permalink / raw)
To: Stefano Stabellini
Cc: xen-devel@lists.xensource.com, Keir (Xen.org),
Konrad Rzeszutek Wilk, allen.m.kay@intel.com, Jean Guyader,
Jean Guyader, JBeulich@suse.com, Attilio Rao
At 18:55 +0100 on 01 Aug (1343847341), Stefano Stabellini wrote:
> I was reading more about this commit because this patch breaks the ABI
> on ARM, when I realized that on x86 there is no standard that specifies
> the alignment of fields in a struct.
> As a consequence I don't think we can really be sure that between .domid
> and .space we always have 16 bits of padding.
> I am afraid that if a user compiles Linux or another guest kernel with a
> compiler other than gcc, this hypercall might break. In fact it already
> happened just switching from x86 to ARM.
AIUI, reverting this patch won't solve that problem - either a compiler adds
16 bits of padding (as GCC and clang do), in which case we can use that
area for the new argument, or it doesn't, in which case it's already not
compatible with a GCC-built xen.
> Also, considering that the memory.h interface is supposed to be ANSI C,
> isn't it wrong to assume compiler specific artifacts anyway?
Yes - people writing Windows drivers have this sort of alignment/padding
issue already in a number of places. But since, as you say, there's no
standard describing this, the best we can do is keep any _new_ interfaces
size-aligned and explicitly sized.
Tim.
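A minimal sketch of Tim's rule applied to a hypothetical new interface: every field has an explicit width and falls on a naturally aligned offset, so no conforming compiler has to invent padding. The structure name is made up for illustration:

    struct xen_example_new_op {
        uint16_t domid;   /* offset  0 */
        uint16_t size;    /* offset  2 */
        uint32_t space;   /* offset  4 */
        uint64_t idx;     /* offset  8 */
        uint64_t gpfn;    /* offset 16 */
    };                    /* 24 bytes on any ABI honouring natural alignment */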
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-01 17:55 ` Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"? Stefano Stabellini
2012-08-01 18:06 ` Keir Fraser
2012-08-01 18:08 ` Tim Deegan
@ 2012-08-02 9:23 ` Jan Beulich
2012-08-02 9:45 ` Attilio Rao
2012-08-02 13:13 ` Stefano Stabellini
2 siblings, 2 replies; 34+ messages in thread
From: Jan Beulich @ 2012-08-02 9:23 UTC (permalink / raw)
To: Stefano Stabellini
Cc: xen-devel@lists.xensource.com, Keir (Xen.org),
Konrad Rzeszutek Wilk, Tim (Xen.org), allen.m.kay@intel.com,
Jean Guyader, Jean Guyader, Attilio Rao
>>> On 01.08.12 at 19:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> I was reading more about this commit because this patch breaks the ABI
> on ARM, when I realized that on x86 there is no standard that specifies
> the alignment of fields in a struct.
There is - the psABI supplements to the SVR4 ABI.
> As a consequence I don't think we can really be sure that between .domid
> and .space we always have 16 bits of padding.
It would be very strange for a modern ABI (other than perhaps
ones targeting exclusively embedded environments, where space
matters) to allow structure fields at mis-aligned offsets. Is that
really the case for ARM? This would make the compiled code
accessing such fields pretty ugly, since I seem to recall that loads
and stores are required to be aligned there.
> I am afraid that if a user compiles Linux or another guest kernel with a
> compiler other than gcc, this hypercall might break. In fact it already
> happened just switching from x86 to ARM.
> Also, considering that the memory.h interface is supposed to be ANSI C,
> isn't it wrong to assume compiler specific artifacts anyway?
This is not compiler specific, but platform defined. Compilers
merely need to conform to that specification; if they don't they
can't be used for building Xen interfacing code without manual
tweaking (perhaps re-creation) of the interface headers.
Jan
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-01 18:06 ` Keir Fraser
@ 2012-08-02 9:25 ` Jan Beulich
0 siblings, 0 replies; 34+ messages in thread
From: Jan Beulich @ 2012-08-02 9:25 UTC (permalink / raw)
To: Jean Guyader, Stefano Stabellini, Keir Fraser
Cc: xen-devel@lists.xensource.com, Konrad Rzeszutek Wilk,
allen.m.kay@intel.com, Tim (Xen.org), Jean Guyader, Attilio Rao
>>> On 01.08.12 at 20:06, Keir Fraser <keir@xen.org> wrote:
> Should we be changing our rules on public headers, allowing
> compiler-specific extensions to precisely lay out our structures? Quite
> possibly. We used to do that, but it got shot down by the ppc and ia64
> arches who wanted an easier life and just rely on compiler default layout
> for a particular platform. Of course those maintainers aren't actually
> voting any more. ;)
In the expectation that other architectures might get ported,
keeping the headers as generic as possible is likely the best
thing to do. After all it was (hopefully!) for a reason that the
PPC and/or IA64 folks wanted the non-standard stuff to be
removed.
Jan
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-02 9:23 ` Jan Beulich
@ 2012-08-02 9:45 ` Attilio Rao
2012-08-02 10:22 ` Jan Beulich
2012-08-02 13:13 ` Stefano Stabellini
1 sibling, 1 reply; 34+ messages in thread
From: Attilio Rao @ 2012-08-02 9:45 UTC (permalink / raw)
To: Jan Beulich
Cc: xen-devel@lists.xensource.com, Keir (Xen.org),
Konrad Rzeszutek Wilk, Tim (Xen.org), Stefano Stabellini,
allen.m.kay@intel.com, Jean Guyader, Jean Guyader (3P)
On 02/08/12 10:23, Jan Beulich wrote:
>>>> On 01.08.12 at 19:55, Stefano Stabellini<stefano.stabellini@eu.citrix.com> wrote:
>>>>
>> I was reading more about this commit because this patch breaks the ABI
>> on ARM, when I realized that on x86 there is no standard that specifies
>> the alignment of fields in a struct.
>>
> There is - the psABI supplements to the SVR4 ABI.
>
>
This is a completely different issue.
The problem here is the padding gcc (or whatever compiler) adds to the
struct in order to align its members to the word boundary. The
difference is that this is not enforced in the ARM case (apparently,
from Stefano's report) while it happens in the x86 case.
This is why it is a good rule to organize the members of a struct from
the biggest to the smallest when compiling with gcc, and this is not
the case for the struct in question.
In the end it is a compiler decision, not something decided by
the ABI.
Attilio
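To illustrate the ordering rule Attilio refers to, here is how a typical word-aligning ABI lays out the same three fields in two different orders (illustrative types and names, not a Xen structure):

    struct smaller_first {
        uint16_t domid;   /* 2 bytes                                  */
                          /* 2 bytes of padding so 'space' is aligned */
        uint32_t space;   /* 4 bytes                                  */
        uint64_t idx;     /* 8 bytes                                  */
    };                    /* sizeof == 16, with internal padding      */

    struct bigger_first {
        uint64_t idx;     /* biggest member first ...                 */
        uint32_t space;
        uint16_t domid;   /* ... smallest last                        */
    };                    /* sizeof == 16 as well, but the only padding is
                             tail padding, rounding up to the struct's
                             own alignment                            */

Whether that internal padding appears at all is, as Jan argues below, specified by the platform ABI rather than left to each compiler's whim.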
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-02 9:45 ` Attilio Rao
@ 2012-08-02 10:22 ` Jan Beulich
2012-08-02 10:35 ` Attilio Rao
0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2012-08-02 10:22 UTC (permalink / raw)
To: Attilio Rao
Cc: xen-devel@lists.xensource.com, Keir (Xen.org),
Konrad Rzeszutek Wilk, Tim (Xen.org), Stefano Stabellini,
allen.m.kay@intel.com, Jean Guyader, Jean Guyader (3P)
>>> On 02.08.12 at 11:45, Attilio Rao <attilio.rao@citrix.com> wrote:
> On 02/08/12 10:23, Jan Beulich wrote:
>>>>> On 01.08.12 at 19:55, Stefano Stabellini<stefano.stabellini@eu.citrix.com>
> wrote:
>>>>>
>>> I was reading more about this commit because this patch breaks the ABI
>>> on ARM, when I realized that on x86 there is no standard that specifies
>>> the alignment of fields in a struct.
>>>
>> There is - the psABI supplements to the SVR4 ABI.
>>
>>
>
> This is a completely different issue.
> The problem here is the padding gcc (or whatever compiler) adds to the
> struct in order to align its members to the word boundary. The
> difference is that this is not enforced in the ARM case (apparently,
> from Stefano's report) while it happens in the x86 case.
>
> This is why it is a good rule to organize the members of a struct from
> the biggest to the smallest when compiling with gcc, and this is not
> the case for the struct in question.
>
> In the end it is a compiler decision, not something decided by
> the ABI.
No, definitely not. Otherwise inter-operation between code
compiled with different compilers would be impossible. To
allow this is what the various ABI specifications exist for (and
their absence had, e.g. on DOS, led to a complete mess).
As to the ARM issue - mind pointing out where mis-aligned
structure fields are specified as being the standard?
Jan
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-02 10:22 ` Jan Beulich
@ 2012-08-02 10:35 ` Attilio Rao
2012-08-02 12:57 ` Attilio Rao
0 siblings, 1 reply; 34+ messages in thread
From: Attilio Rao @ 2012-08-02 10:35 UTC (permalink / raw)
To: Jan Beulich
Cc: xen-devel@lists.xensource.com, Keir (Xen.org),
Konrad Rzeszutek Wilk, Tim (Xen.org), Stefano Stabellini,
allen.m.kay@intel.com, Jean Guyader, Jean Guyader (3P)
On 02/08/12 11:22, Jan Beulich wrote:
>>>> On 02.08.12 at 11:45, Attilio Rao<attilio.rao@citrix.com> wrote:
>>>>
>> On 02/08/12 10:23, Jan Beulich wrote:
>>
>>>>>> On 01.08.12 at 19:55, Stefano Stabellini<stefano.stabellini@eu.citrix.com>
>>>>>>
>> wrote:
>>
>>>>>>
>>>>>>
>>>> I was reading more about this commit because this patch breaks the ABI
>>>> on ARM, when I realized that on x86 there is no standard that specifies
>>>> the alignment of fields in a struct.
>>>>
>>>>
>>> There is - the psABI supplements to the SVR4 ABI.
>>>
>>>
>>>
>> This is a completely different issue.
>> The problem here is the padding gcc (or whatever compiler) adds to the
>> struct in order to align its members to the word boundary. The
>> difference is that this is not enforced in the ARM case (apparently,
>> from Stefano's report) while it happens in the x86 case.
>>
>> This is why it is a good rule to organize the members of a struct from
>> the biggest to the smallest when compiling with gcc, and this is not
>> the case for the struct in question.
>>
>> In the end it is a compiler decision, not something decided by
>> the ABI.
>>
> No, definitely not. Otherwise inter-operation between code
> compiled with different compilers would be impossible. To
> allow this is what the various ABI specifications exist for (and
> their absence had, e.g. on DOS, led to a complete mess).
>
>
Look, I'm speaking about the problem Stefano is trying to solve, which
has nothing to do with your discussion of the ABI.
> As to the ARM issue - mind pointing out where mis-aligned
> structure fields are specified as being the standard?
>
>
I think that alignment is important; in fact I'm more surprised by the
ARM side than by the x86 one. Of course, that is because this is a
compiler-dependent behaviour (the fact that not only gcc does that
doesn't mean it is "standardized", just like it is not standardized
anywhere that the stack on x86 must be word aligned, even if it is so
common that it is taken for granted now).
I was wondering, maybe ARM is compiled with -fpack-struct (even if I
would be surprised)?
Attilio
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-02 10:35 ` Attilio Rao
@ 2012-08-02 12:57 ` Attilio Rao
0 siblings, 0 replies; 34+ messages in thread
From: Attilio Rao @ 2012-08-02 12:57 UTC (permalink / raw)
To: xen-devel
On 02/08/12 11:35, Attilio Rao wrote:
> On 02/08/12 11:22, Jan Beulich wrote:
>
>>>>> On 02.08.12 at 11:45, Attilio Rao<attilio.rao@citrix.com> wrote:
>>>>>
>>>>>
>>> On 02/08/12 10:23, Jan Beulich wrote:
>>>
>>>
>>>>>>> On 01.08.12 at 19:55, Stefano Stabellini<stefano.stabellini@eu.citrix.com>
>>>>>>>
>>>>>>>
>>> wrote:
>>>
>>>
>>>>>>>
>>>>>>>
>>>>> I was reading more about this commit because this patch breaks the ABI
>>>>> on ARM, when I realized that on x86 there is no standard that specifies
>>>>> the alignment of fields in a struct.
>>>>>
>>>>>
>>>>>
>>>> There is - the psABI supplements to the SVR4 ABI.
>>>>
>>>>
>>>>
>>>>
>>> This is a completely different issue.
>>> The problem here is the padding gcc (or whatever compiler) adds to the
>>> struct in order to align its members to the word boundary. The
>>> difference is that this is not enforced in the ARM case (apparently,
>>> from Stefano's report) while it happens in the x86 case.
>>>
>>> This is why it is a good rule to organize the members of a struct from
>>> the biggest to the smallest when compiling with gcc, and this is not
>>> the case for the struct in question.
>>>
>>> In the end it is a compiler decision, not something decided by
>>> the ABI.
>>>
>>>
>> No, definitely not. Otherwise inter-operation between code
>> compiled with different compilers would be impossible. To
>> allow this is what the various ABI specifications exist for (and
>> their absence had, e.g. on DOS, led to a complete mess).
>>
>>
>>
> Look, I'm speaking about the problem Stefano is trying to crunch which
> has nothing to do with your discussion on ABI.
>
>
>> As to the ARM issue - mind pointing out where mis-aligned
>> structure fields are specified as being the standard?
>>
>>
>>
> I think that alignment is important; in fact I'm more surprised by the
> ARM side than by the x86 one. Of course, that is because this is a
> compiler-dependent behaviour (the fact that not only gcc does that
> doesn't mean it is "standardized", just like it is not standardized
> anywhere that the stack on x86 must be word aligned, even if it is so
> common that it is taken for granted now).
>
>
It seems that I was missing something -- the x86 psABI explicitly
mentions the padding of internal members of structures and unions
(Chapter 3, Paragraph 3.1.2, subsection "Aggregates and Unions"), so
this behaviour really comes from the SVR4 ABI, as Jan pointed out.
Attilio
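The psABI rule quoted here can be checked mechanically. The following is a small, self-contained illustration (not part of the Xen tree; type and struct names are made up) that encodes, as a C89-style compile-time assertion, the layout assumption the new .size field relies on:

    #include <stddef.h>
    #include <stdint.h>

    struct atp_prefix {       /* mirrors the start of xen_add_to_physmap */
        uint16_t domid;
        uint16_t size;
        uint32_t space;
    };

    /* On an SVR4/psABI-conforming compiler, 'space' starts at offset 4,
     * i.e. there are exactly 16 bits between 'domid' and 'space'. */
    typedef char space_offset_check[(offsetof(struct atp_prefix, space) == 4) ? 1 : -1];
    typedef char prefix_size_check[(sizeof(struct atp_prefix) == 8) ? 1 : -1];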
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-02 9:23 ` Jan Beulich
2012-08-02 9:45 ` Attilio Rao
@ 2012-08-02 13:13 ` Stefano Stabellini
2012-08-02 13:36 ` Jan Beulich
1 sibling, 1 reply; 34+ messages in thread
From: Stefano Stabellini @ 2012-08-02 13:13 UTC (permalink / raw)
To: Jan Beulich
Cc: xen-devel@lists.xensource.com, Keir (Xen.org), Stefano Stabellini,
Tim (Xen.org), Konrad Rzeszutek Wilk, allen.m.kay@intel.com,
Jean Guyader, Jean Guyader (3P), Attilio Rao
On Thu, 2 Aug 2012, Jan Beulich wrote:
> >>> On 01.08.12 at 19:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > I was reading more about this commit because this patch breaks the ABI
> > on ARM, when I realized that on x86 there is no standard that specifies
> > the alignment of fields in a struct.
>
> There is - the psABI supplements to the SVR4 ABI.
Thank you very much, that document was exactly what I was looking for.
Also it explains where my confusion was coming from: Jean's patch doesn't
break the ABI on ARM or x86, but I am carrying a patch in my patch queue
that does (unless Jean's patch is applied):
http://marc.info/?l=xen-devel&m=134305777903771
As you can see this patch splits .space into two shorts, and as a side
effect changes the offset of .space, removing the padding.
Thus it led me to think that Jean's patch was breaking the ABI, when
actually, with "arm: initial XENMAPSPACE_gmfn_foreign" applied, it
becomes required to keep the binary interface compatible.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-02 13:13 ` Stefano Stabellini
@ 2012-08-02 13:36 ` Jan Beulich
2012-08-02 13:38 ` Stefano Stabellini
0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2012-08-02 13:36 UTC (permalink / raw)
To: Stefano Stabellini
Cc: xen-devel@lists.xensource.com, Keir (Xen.org),
Konrad Rzeszutek Wilk, Tim (Xen.org), allen.m.kay@intel.com,
Jean Guyader, Jean Guyader (3P), Attilio Rao
>>> On 02.08.12 at 15:13, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Thu, 2 Aug 2012, Jan Beulich wrote:
>> >>> On 01.08.12 at 19:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
>> > I was reading more about this commit because this patch breaks the ABI
>> > on ARM, when I realized that on x86 there is no standard that specifies
>> > the alignment of fields in a struct.
>>
>> There is - the psABI supplements to the SVR4 ABI.
>
> Thank you very much, that document was exactly what I was looking for.
>
> Also it explains where my confusion was coming from: Jean's patch doesn't
> break the ABI on ARM or x86, but I am carrying a patch in my patch queue
> that does (unless Jean's patch is applied):
>
> http://marc.info/?l=xen-devel&m=134305777903771
>
> As you can see this patch splits .space into two shorts, and as a side
> effect changes the offset of .space, removing the padding.
> Thus it led me to think that Jean's patch was breaking the ABI when actually
> with "arm: initial XENMAPSPACE_gmfn_foreign" applied, it becomes
> required to keep the binary interface compatible.
And then you wouldn't need to split 'space' and break the ABI
at all, you could simply put 'size' and 'foreign_domid' into a union.
Jan
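A sketch of Jan's suggestion (the union and field names are illustrative, not the header text that was eventually committed): keep .space at its existing offset and let the 16-bit slot carry either meaning, since any given call uses only one of them:

    struct xen_add_to_physmap {
        domid_t domid;
        union {
            uint16_t size;           /* XENMAPSPACE_gmfn_range               */
            domid_t  foreign_domid;  /* XENMAPSPACE_gmfn_foreign (ARM patch) */
        } u;
        unsigned int space;
        unsigned long idx;
        xen_pfn_t gpfn;
    };

Both union members occupy the same 16 bits, so the offsets of .space, .idx and .gpfn are unchanged and the existing binary interface is preserved.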
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
2012-08-02 13:36 ` Jan Beulich
@ 2012-08-02 13:38 ` Stefano Stabellini
0 siblings, 0 replies; 34+ messages in thread
From: Stefano Stabellini @ 2012-08-02 13:38 UTC (permalink / raw)
To: Jan Beulich
Cc: xen-devel@lists.xensource.com, Keir (Xen.org), Stefano Stabellini,
Tim (Xen.org), Konrad Rzeszutek Wilk, allen.m.kay@intel.com,
Jean Guyader, Jean Guyader (3P), Attilio Rao
On Thu, 2 Aug 2012, Jan Beulich wrote:
> >>> On 02.08.12 at 15:13, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Thu, 2 Aug 2012, Jan Beulich wrote:
> >> >>> On 01.08.12 at 19:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > wrote:
> >> > I was reading more about this commit because this patch breaks the ABI
> >> > on ARM, when I realized that on x86 there is no standard that specifies
> >> > the alignment of fields in a struct.
> >>
> >> There is - the psABI supplements to the SVR4 ABI.
> >
> > Thank you very much, that document was exactly what I was looking for.
> >
> > Also it explains where my confusion was coming from: Jean's patch doesn't
> > break the ABI on ARM or x86, but I am carrying a patch in my patch queue
> > that does (unless Jean's patch is applied):
> >
> > http://marc.info/?l=xen-devel&m=134305777903771
> >
> > As you can see this patch splits .space into two shorts, and as a side
> > effect changes the offset of .space, removing the padding.
> > Thus it led me to think that Jean's patch was breaking the ABI when actually
> > with "arm: initial XENMAPSPACE_gmfn_foreign" applied, it becomes
> > required to keep the binary interface compatible.
>
> And then you wouldn't need to split 'space' and break the ABI
> at all, you could simply put 'size' and 'foreign_domid' into a union.
Yes, that's a good suggestion.
^ permalink raw reply [flat|nested] 34+ messages in thread
end of thread (latest message: 2012-08-02 13:38 UTC)
Thread overview: 34+ messages
2011-11-16 19:25 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8) Jean Guyader
2011-11-16 19:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-16 19:25 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-16 19:25 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-16 19:25 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
2011-11-16 19:25 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
2011-11-16 19:25 ` [PATCH 6/6] Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush Jean Guyader
2012-08-01 17:55 ` Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"? Stefano Stabellini
2012-08-01 18:06 ` Keir Fraser
2012-08-02 9:25 ` Jan Beulich
2012-08-01 18:08 ` Tim Deegan
2012-08-02 9:23 ` Jan Beulich
2012-08-02 9:45 ` Attilio Rao
2012-08-02 10:22 ` Jan Beulich
2012-08-02 10:35 ` Attilio Rao
2012-08-02 12:57 ` Attilio Rao
2012-08-02 13:13 ` Stefano Stabellini
2012-08-02 13:36 ` Jan Beulich
2012-08-02 13:38 ` Stefano Stabellini
2011-11-19 21:58 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Olaf Hering
2011-11-19 22:14 ` Keir Fraser
2011-11-19 22:37 ` Jean Guyader
2011-11-20 13:25 ` Olaf Hering
2011-11-20 19:59 ` Keir Fraser
2011-11-21 8:39 ` Olaf Hering
2011-11-17 9:20 ` [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8) Keir Fraser
2011-11-17 9:42 ` Jan Beulich
-- strict thread matches above, loose matches on Subject: below --
2011-11-13 17:40 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v7) Jean Guyader
2011-11-13 17:40 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-13 17:40 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-13 17:40 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-13 17:40 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
2011-11-13 17:40 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
2011-11-10 8:43 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v5) Jean Guyader
2011-11-10 8:43 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-10 8:44 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-10 8:44 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-10 8:44 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
2011-11-10 8:44 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
2011-11-08 20:04 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v4) Jean Guyader
2011-11-08 20:04 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-08 20:04 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-08 20:04 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-08 20:04 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
2011-11-08 20:04 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
2011-11-07 18:25 IOMMU, vtd and iotlb flush rework (v3) Jean Guyader
2011-11-07 18:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-07 18:25 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-07 18:25 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-07 18:25 ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jean Guyader
2011-11-07 18:25 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
2011-11-07 15:16 IOMMU, vtd and iotlb flush rework (v2) Jean Guyader
2011-11-07 15:16 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-07 15:16 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-07 15:16 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-07 15:16 ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jean Guyader
2011-11-07 15:16 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
2011-11-04 10:38 [PATCH 0/6] IOMMU, vtd and iotlb flush rework Jean Guyader
2011-11-04 10:38 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-04 10:38 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-04 10:38 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-04 10:38 ` [PATCH 4/6] mm: Add new map space for add_to_physmap, XENMAPSPACE_gmfn_range Jean Guyader
2011-11-04 10:38 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
2011-11-04 14:52 ` Keir Fraser