xen-devel.lists.xenproject.org archive mirror
* [PATCH][1/4] Enable 1GB for Xen HVM host page
@ 2010-02-22 17:17 Wei Huang
  2010-02-23  7:56 ` Keir Fraser
  0 siblings, 1 reply; 5+ messages in thread
From: Wei Huang @ 2010-02-22 17:17 UTC (permalink / raw)
  To: 'xen-devel@lists.xensource.com', Keir Fraser,
	Xu, Dongxiao

[-- Attachment #1: Type: text/plain, Size: 188 bytes --]

To support 1GB host pages, this patch moves the MMIO starting address 
to a 1GB boundary.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>


[-- Attachment #2: 1-xen-hap-change-mmio-space.patch --]
[-- Type: text/x-patch, Size: 777 bytes --]

# HG changeset patch
# User huangwei@huangwei.amd.com
# Date 1266853444 21600
# Node ID 1d166c5703256ab97225c6ae46ac87dd5bd07e89
# Parent  94e009ef5a58c02d4fe78fcc4c85627b469ee937
change the beginning address of MMIO space to 1GB boundary

diff -r 94e009ef5a58 -r 1d166c570325 xen/include/public/hvm/e820.h
--- a/xen/include/public/hvm/e820.h	Mon Feb 22 10:08:10 2010 +0000
+++ b/xen/include/public/hvm/e820.h	Mon Feb 22 09:44:04 2010 -0600
@@ -27,7 +27,7 @@
 #define HVM_E820_NR_OFFSET   0x000001E8
 #define HVM_E820_OFFSET      0x000002D0
 
-#define HVM_BELOW_4G_RAM_END        0xF0000000
+#define HVM_BELOW_4G_RAM_END        0xC0000000
 #define HVM_BELOW_4G_MMIO_START     HVM_BELOW_4G_RAM_END
 #define HVM_BELOW_4G_MMIO_LENGTH    ((1ULL << 32) - HVM_BELOW_4G_MMIO_START)
 

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel


* Re: [PATCH][1/4] Enable 1GB for Xen HVM host page
  2010-02-22 17:17 [PATCH][1/4] Enable 1GB for Xen HVM host page Wei Huang
@ 2010-02-23  7:56 ` Keir Fraser
  2010-02-23 15:03   ` Huang2, Wei
  0 siblings, 1 reply; 5+ messages in thread
From: Keir Fraser @ 2010-02-23  7:56 UTC (permalink / raw)
  To: Wei Huang, 'xen-devel@lists.xensource.com', Xu, Dongxiao

What's the performance gain from 1GB mappings in HAP tables like? What about
just the (possible) extra 1GB mapping from this particular patch? Is it
worth making guest-visible efforts like this (admittedly small) one? After
all, the probably most frequently accessed 1GB starting at address 0x0
cannot be done with a 1GB mapping and we live with it.

 -- Keir

On 22/02/2010 17:17, "Wei Huang" <wei.huang2@amd.com> wrote:

> To support 1GB host page, this patch changes the MMIO starting address
> to 1GB boundary.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> Signed-off-by: Wei Huang <wei.huang2@amd.com>
> 


* RE: [PATCH][1/4] Enable 1GB for Xen HVM host page
  2010-02-23  7:56 ` Keir Fraser
@ 2010-02-23 15:03   ` Huang2, Wei
  2010-02-24  4:25     ` Xu, Dongxiao
  0 siblings, 1 reply; 5+ messages in thread
From: Huang2, Wei @ 2010-02-23 15:03 UTC (permalink / raw)
  To: Keir Fraser, 'xen-devel@lists.xensource.com',
	Xu, Dongxiao

The performance gain depends on the application. We have seen a 5% performance improvement on certain benchmarks, while others show no gain over 2MB pages (and no degradation either). That is why I added an option to turn 1GB pages off in case people don't want them.

I couldn't get my guest to access QEMU until applying this patch from Dongxiao. Dongxiao might have other comments.

-Wei

-----Original Message-----
From: Keir Fraser [mailto:keir.fraser@eu.citrix.com] 
Sent: Tuesday, February 23, 2010 1:57 AM
To: Huang2, Wei; 'xen-devel@lists.xensource.com'; Xu, Dongxiao
Subject: Re: [PATCH][1/4] Enable 1GB for Xen HVM host page 

What's the performance gain from 1GB mappings in HAP tables like? What about
just the (possible) extra 1GB mapping from this particular patch? Is it
worth making guest-visible efforts like this (admittedly small) one? After
all, the probably most frequently accessed 1GB starting at address 0x0
cannot be done with a 1GB mapping and we live with it.

 -- Keir

On 22/02/2010 17:17, "Wei Huang" <wei.huang2@amd.com> wrote:

> To support 1GB host page, this patch changes the MMIO starting address
> to 1GB boundary.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> Signed-off-by: Wei Huang <wei.huang2@amd.com>
> 


* RE: [PATCH][1/4] Enable 1GB for Xen HVM host page
  2010-02-23 15:03   ` Huang2, Wei
@ 2010-02-24  4:25     ` Xu, Dongxiao
  2010-02-24  9:12       ` Keir Fraser
  0 siblings, 1 reply; 5+ messages in thread
From: Xu, Dongxiao @ 2010-02-24  4:25 UTC (permalink / raw)
  To: Keir Fraser, 'xen-devel@lists.xensource.com'; +Cc: Huang2, Wei

Huang2, Wei wrote:
> The performance gain depends on applications. We have seen 5%
> performance improvement for certain benchmarks. But some have no
> perf. gains over 2MB (no performance degradation either). That is why
> I add an option to turn it off in case people don't want 1GB.   
> 
> I didn't have access to my guest QEMU until applying this patch from
> Dongxiao. Dongxiao might have other comments. 

When allocating guest memory without this patch, Xend treats 3G-4G
as normal memory, ignoring the 3.75G-4G MMIO hole. This patch makes
the MMIO start address 1GB-aligned, which fixes that issue.

Admittedly, the patch does affect purely 32-bit guests, reducing
their maximum memory from 3.75G to 3G.

Anyway, we can change it to allocate 2MB pages between 3G and 4G if
you think this particular patch is improper.

Thanks!
Dongxiao


> 
> -Wei
> 
> -----Original Message-----
> From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
> Sent: Tuesday, February 23, 2010 1:57 AM
> To: Huang2, Wei; 'xen-devel@lists.xensource.com'; Xu, Dongxiao
> Subject: Re: [PATCH][1/4] Enable 1GB for Xen HVM host page
> 
> What's the performance gain from 1GB mappings in HAP tables like?
> What about 
> just the (possible) extra 1GB mapping from this particular patch? Is
> it 
> worth making guest-visible efforts like this (admittedly small) one?
> After 
> all, the probably most frequently accessed 1GB starting at address 0x0
> cannot be done with a 1GB mapping and we live with it.
> 
>  -- Keir
> 
> On 22/02/2010 17:17, "Wei Huang" <wei.huang2@amd.com> wrote:
> 
>> To support 1GB host page, this patch changes the MMIO starting
>> address to 1GB boundary. 
>> 
>> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
>> Signed-off-by: Wei Huang <wei.huang2@amd.com>


* Re: [PATCH][1/4] Enable 1GB for Xen HVM host page
  2010-02-24  4:25     ` Xu, Dongxiao
@ 2010-02-24  9:12       ` Keir Fraser
  0 siblings, 0 replies; 5+ messages in thread
From: Keir Fraser @ 2010-02-24  9:12 UTC (permalink / raw)
  To: Xu, Dongxiao, 'xen-devel@lists.xensource.com'; +Cc: Huang2, Wei

On 24/02/2010 04:25, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:

> When allocating guest memory without the patch, Xend will treat 3G-4G
> as normal memory, ignoring the 3.75-4G MMIO hole. This patch modifies
> MMIO start address to be 1GB aligned and could fix the issue.
> 
> Indeed, this patch may have some influence to pure 32bit guest,
> changing its max memory from 3.75G to 3G.
> 
> Anyway, we can change it to alloc 2M pages between 3G and 4G if you
> think this particular patch is improper.

I would expect it to fall back to 2MB extents, and from there back to 4kB
extents as necessary, for any range where: (a) the range to be allocated for
is not 1GB-sized and -aligned; or (b) no 1GB extents are available from
Xen's free pool.

 -- Keir


end of thread, other threads:[~2010-02-24  9:12 UTC | newest]

Thread overview: 5+ messages
2010-02-22 17:17 [PATCH][1/4] Enable 1GB for Xen HVM host page Wei Huang
2010-02-23  7:56 ` Keir Fraser
2010-02-23 15:03   ` Huang2, Wei
2010-02-24  4:25     ` Xu, Dongxiao
2010-02-24  9:12       ` Keir Fraser
