* [PATCH v5 0/1] Add mmio_hole_size (was Add pci_hole_min_size)
@ 2014-09-11 16:20 Don Slutz
2014-09-11 16:20 ` [PATCH v5 1/1] Add mmio_hole_size Don Slutz
From: Don Slutz @ 2014-09-11 16:20 UTC
To: xen-devel
Cc: Ian Campbell, Stefano Stabellini, Ian Jackson, Don Slutz,
Jan Beulich, Boris Ostrovsky
Changes from v4 to v5:
Add LIBXL_HAVE_BUILDINFO_HVM_MMIO_HOLE_SIZE.
Changes from v3 to v4:
Jan Beulich:
s/pci_hole_min_size/mmio_hole_size/
s/HVM_BELOW_4G_RAM_END/HVM_BELOW_4G_MMIO_START/
Add ignore to hvmloader message
Adjust commit message
Dropped min; stopped mmio_hole_size changing in hvmloader
Open question: Should I switch this to max_ram_below_4g or stay with
pci_hole_min_size?
This may help or fix http://bugs.xenproject.org/xen/bug/28
Changes from v2 to v3:
Dropped RFC since the QEMU change will be in the QEMU 2.1.0 release.
Dropped "if (dom_info.debug)" in parse_config_data()
since dom_info is no longer passed in.
Lots more bounds checking. Since QEMU does min(qemu_limit,
max_ram_below_4g), adjusted the code to do the same.
Boris Ostrovsky:
More in commit message.
Changes from v1 to v2:
Boris Ostrovsky:
Need to init pci_hole_min_size
Changed the qemu name from pci_hole_min_size to
pc-memory-layout.max-ram-below-4g.
Did not change the Xen version for now.
Added quick note to xl.cfg.pod.5
Added a check for a too big value. (Most likely in the wrong
place.)
If you add enough PCI devices then not all of their MMIO regions may
fit below 4G, which may not be the layout the user wanted. This option
lets you increase the address space below 4G that PCI devices can use,
so that in more cases no MMIO ends up above 4G.
There are real PCI cards that do not support MMIO above 4G, so if you
want to emulate them precisely you may also need to increase the
space below 4G for them. Some drivers for these cards also do not
work if their MMIO space is mapped above 4G.
This was originally posted as an RFC because it needs the upstream
version of QEMU with all 4 patches:
http://marc.info/?l=qemu-devel&m=139455360016654&w=2
This allows growing the PCI hole to the size needed.
This may help with PCI passthrough to HVM guests.
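For illustration, a minimal guest config fragment using the new option
could look like this (the 3GiB value is just an example; it has to lie
between HVM_BELOW_4G_MMIO_LENGTH and HVM_BELOW_4G_MMIO_START, i.e.
256MiB and 3.75GiB):

    builder = "hvm"
    device_model_version = "qemu-xen"
    memory = 8192
    mmio_hole_size = 3221225472    # 3GiB MMIO hole below 4G

libxl then passes the complement to QEMU as
"-machine xenfv,max-ram-below-4g=1073741824" (2^32 - 3GiB = 1GiB).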
Don Slutz (1):
Add mmio_hole_size
docs/man/xl.cfg.pod.5 | 12 +++++++
tools/firmware/hvmloader/config.h | 1 -
tools/firmware/hvmloader/pci.c | 71 +++++++++++++++++++++++++++------------
tools/libxc/xc_hvm_build_x86.c | 30 +++++++++++++++--
tools/libxc/xenguest.h | 11 ++++++
tools/libxl/libxl.h | 5 +++
tools/libxl/libxl_create.c | 29 ++++++++++++----
tools/libxl/libxl_dm.c | 24 +++++++++++--
tools/libxl/libxl_dom.c | 3 +-
tools/libxl/libxl_types.idl | 1 +
tools/libxl/xl_cmdimpl.c | 13 +++++++
11 files changed, 166 insertions(+), 34 deletions(-)
--
1.8.4
* [PATCH v5 1/1] Add mmio_hole_size
2014-09-11 16:20 [PATCH v5 0/1] Add mmio_hole_size (was Add pci_hole_min_size) Don Slutz
@ 2014-09-11 16:20 ` Don Slutz
2014-09-30 1:27 ` Boris Ostrovsky
2014-09-30 13:15 ` George Dunlap
From: Don Slutz @ 2014-09-11 16:20 UTC
To: xen-devel
Cc: Ian Campbell, Stefano Stabellini, Ian Jackson, Don Slutz,
Jan Beulich, Boris Ostrovsky
If you add enough PCI devices then not all of their MMIO regions may
fit below 4G, which may not be the layout the user wanted. This option
lets you increase the address space below 4G that PCI devices can use,
so that in more cases no MMIO ends up above 4G.
There are real PCI cards that do not support MMIO above 4G, so if you
want to emulate them precisely you may also need to increase the
space below 4G for them. Some drivers for these cards also do not
work if their MMIO space is mapped above 4G.
This allows growing the MMIO hole to the size needed.
This may help with PCI passthrough to HVM guests.
Signed-off-by: Don Slutz <dslutz@verizon.com>
---
v5:
Add LIBXL_HAVE_BUILDINFO_HVM_MMIO_HOLE_SIZE.
v4:
s/pci_hole_min_size/mmio_hole_size/
s/HVM_BELOW_4G_RAM_END/HVM_BELOW_4G_MMIO_START/
Add ignore to hvmloader message
Adjust commit message
Dropped min; stopped mmio_hole_size changing in hvmloader
docs/man/xl.cfg.pod.5 | 12 +++++++
tools/firmware/hvmloader/config.h | 1 -
tools/firmware/hvmloader/pci.c | 71 +++++++++++++++++++++++++++------------
tools/libxc/xc_hvm_build_x86.c | 30 +++++++++++++++--
tools/libxc/xenguest.h | 11 ++++++
tools/libxl/libxl.h | 5 +++
tools/libxl/libxl_create.c | 29 ++++++++++++----
tools/libxl/libxl_dm.c | 24 +++++++++++--
tools/libxl/libxl_dom.c | 3 +-
tools/libxl/libxl_types.idl | 1 +
tools/libxl/xl_cmdimpl.c | 13 +++++++
11 files changed, 166 insertions(+), 34 deletions(-)
diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 517ae2f..45f7857 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -1111,6 +1111,18 @@ wallclock (i.e., real) time.
=back
+=head3 Memory layout
+
+=over 4
+
+=item B<mmio_hole_size=BYTES>
+
+Specifies the size the MMIO hole below 4GiB will be. Only valid for
+device_model_version = "qemu-xen". Note: this requires QEMU 2.1.0
+or later.
+
+=back
+
=head3 Support for Paravirtualisation of HVM Guests
The following options allow Paravirtualised features (such as devices)
diff --git a/tools/firmware/hvmloader/config.h b/tools/firmware/hvmloader/config.h
index 80bea46..b838cf9 100644
--- a/tools/firmware/hvmloader/config.h
+++ b/tools/firmware/hvmloader/config.h
@@ -53,7 +53,6 @@ extern struct bios_config ovmf_config;
#define PCI_ISA_IRQ_MASK 0x0c20U /* ISA IRQs 5,10,11 are PCI connected */
/* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
-#define PCI_MEM_START 0xf0000000
#define PCI_MEM_END 0xfc000000
extern unsigned long pci_mem_start, pci_mem_end;
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 3712988..e5cd2e3 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -28,9 +28,10 @@
#include <xen/memory.h>
#include <xen/hvm/ioreq.h>
#include <xen/hvm/hvm_xs_strings.h>
+#include <xen/hvm/e820.h>
#include <stdbool.h>
-unsigned long pci_mem_start = PCI_MEM_START;
+unsigned long pci_mem_start = HVM_BELOW_4G_MMIO_START;
unsigned long pci_mem_end = PCI_MEM_END;
uint64_t pci_hi_mem_start = 0, pci_hi_mem_end = 0;
@@ -59,6 +60,7 @@ void pci_setup(void)
uint64_t bar_sz;
} *bars = (struct bars *)scratch_start;
unsigned int i, nr_bars = 0;
+ uint64_t mmio_hole_size = 0;
const char *s;
/*
@@ -86,6 +88,10 @@ void pci_setup(void)
printf("Relocating guest memory for lowmem MMIO space %s\n",
allow_memory_relocate?"enabled":"disabled");
+ s = xenstore_read("platform/mmio_hole_size", NULL);
+ if ( s )
+ mmio_hole_size = strtoll(s, NULL, 0);
+
/* Program PCI-ISA bridge with appropriate link routes. */
isa_irq = 0;
for ( link = 0; link < 4; link++ )
@@ -237,26 +243,49 @@ void pci_setup(void)
pci_writew(devfn, PCI_COMMAND, cmd);
}
- /*
- * At the moment qemu-xen can't deal with relocated memory regions.
- * It's too close to the release to make a proper fix; for now,
- * only allow the MMIO hole to grow large enough to move guest memory
- * if we're running qemu-traditional. Items that don't fit will be
- * relocated into the 64-bit address space.
- *
- * This loop now does the following:
- * - If allow_memory_relocate, increase the MMIO hole until it's
- * big enough, or until it's 2GiB
- * - If !allow_memory_relocate, increase the MMIO hole until it's
- * big enough, or until it's 2GiB, or until it overlaps guest
- * memory
- */
- while ( (mmio_total > (pci_mem_end - pci_mem_start))
- && ((pci_mem_start << 1) != 0)
- && (allow_memory_relocate
- || (((pci_mem_start << 1) >> PAGE_SHIFT)
- >= hvm_info->low_mem_pgend)) )
- pci_mem_start <<= 1;
+ if ( mmio_hole_size )
+ {
+ uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
+
+ if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
+ {
+ printf("max_ram_below_4g=0x"PRIllx
+ " too big for mmio_hole_size=0x"PRIllx
+ " has been ignored.\n",
+ PRIllx_arg(max_ram_below_4g),
+ PRIllx_arg(mmio_hole_size));
+ }
+ else
+ {
+ pci_mem_start = max_ram_below_4g;
+ printf("pci_mem_start=0x%lx (was 0x%x) for mmio_hole_size=%lu\n",
+ pci_mem_start, HVM_BELOW_4G_MMIO_START,
+ (long)mmio_hole_size);
+ }
+ }
+ else
+ {
+ /*
+ * At the moment qemu-xen can't deal with relocated memory regions.
+ * It's too close to the release to make a proper fix; for now,
+ * only allow the MMIO hole to grow large enough to move guest memory
+ * if we're running qemu-traditional. Items that don't fit will be
+ * relocated into the 64-bit address space.
+ *
+ * This loop now does the following:
+ * - If allow_memory_relocate, increase the MMIO hole until it's
+ * big enough, or until it's 2GiB
+ * - If !allow_memory_relocate, increase the MMIO hole until it's
+ * big enough, or until it's 2GiB, or until it overlaps guest
+ * memory
+ */
+ while ( (mmio_total > (pci_mem_end - pci_mem_start))
+ && ((pci_mem_start << 1) != 0)
+ && (allow_memory_relocate
+ || (((pci_mem_start << 1) >> PAGE_SHIFT)
+ >= hvm_info->low_mem_pgend)) )
+ pci_mem_start <<= 1;
+ }
if ( mmio_total > (pci_mem_end - pci_mem_start) )
{
diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index c81a25b..94da7fa 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -628,14 +628,40 @@ int xc_hvm_build_target_mem(xc_interface *xch,
int target,
const char *image_name)
{
+ return xc_hvm_build_target_mem_with_hole(xch, domid, memsize, target,
+ image_name, 0);
+}
+
+int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
+ struct xc_hvm_build_args *args,
+ uint64_t mmio_hole_size)
+{
+ if (mmio_hole_size)
+ {
+ uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
+
+ if (max_ram_below_4g < HVM_BELOW_4G_MMIO_START)
+ {
+ args->mmio_size = mmio_hole_size;
+ }
+ }
+ return xc_hvm_build(xch, domid, args);
+}
+
+int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
+ uint32_t domid,
+ int memsize,
+ int target,
+ const char *image_name,
+ uint64_t mmio_hole_size)
+{
struct xc_hvm_build_args args = {};
memset(&args, 0, sizeof(struct xc_hvm_build_args));
args.mem_size = (uint64_t)memsize << 20;
args.mem_target = (uint64_t)target << 20;
args.image_file_name = image_name;
-
- return xc_hvm_build(xch, domid, &args);
+ return xc_hvm_build_with_hole(xch, domid, &args, mmio_hole_size);
}
/*
diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
index 40bbac8..d35f38d 100644
--- a/tools/libxc/xenguest.h
+++ b/tools/libxc/xenguest.h
@@ -244,12 +244,23 @@ struct xc_hvm_build_args {
int xc_hvm_build(xc_interface *xch, uint32_t domid,
struct xc_hvm_build_args *hvm_args);
+int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
+ struct xc_hvm_build_args *args,
+ uint64_t mmio_hole_size);
+
int xc_hvm_build_target_mem(xc_interface *xch,
uint32_t domid,
int memsize,
int target,
const char *image_name);
+int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
+ uint32_t domid,
+ int memsize,
+ int target,
+ const char *image_name,
+ uint64_t mmio_hole_size);
+
/*
* Sets *lockfd to -1.
* Has deallocated everything even on error.
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index dab3a67..e43d2d2 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -138,6 +138,11 @@
#define LIBXL_HAVE_BUILDINFO_IOMEM_START_GFN 1
/*
+ * libxl_domain_build_info has the u.hvm.mmio_hole_size field.
+ */
+#define LIBXL_HAVE_BUILDINFO_HVM_MMIO_HOLE_SIZE 1
+
+/*
* libxl ABI compatibility
*
* The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index b36c719..7383053 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -23,6 +23,7 @@
#include <xc_dom.h>
#include <xenguest.h>
#include <xen/hvm/hvm_info_table.h>
+#include <xen/hvm/e820.h>
int libxl__domain_create_info_setdefault(libxl__gc *gc,
libxl_domain_create_info *c_info)
@@ -428,13 +429,26 @@ int libxl__domain_build(libxl__gc *gc,
vments[4] = "start_time";
vments[5] = libxl__sprintf(gc, "%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
- localents = libxl__calloc(gc, 7, sizeof(char *));
- localents[0] = "platform/acpi";
- localents[1] = libxl_defbool_val(info->u.hvm.acpi) ? "1" : "0";
- localents[2] = "platform/acpi_s3";
- localents[3] = libxl_defbool_val(info->u.hvm.acpi_s3) ? "1" : "0";
- localents[4] = "platform/acpi_s4";
- localents[5] = libxl_defbool_val(info->u.hvm.acpi_s4) ? "1" : "0";
+ localents = libxl__calloc(gc, 9, sizeof(char *));
+ i = 0;
+ localents[i++] = "platform/acpi";
+ localents[i++] = libxl_defbool_val(info->u.hvm.acpi) ? "1" : "0";
+ localents[i++] = "platform/acpi_s3";
+ localents[i++] = libxl_defbool_val(info->u.hvm.acpi_s3) ? "1" : "0";
+ localents[i++] = "platform/acpi_s4";
+ localents[i++] = libxl_defbool_val(info->u.hvm.acpi_s4) ? "1" : "0";
+ if (info->u.hvm.mmio_hole_size) {
+ uint64_t max_ram_below_4g =
+ (1ULL << 32) - info->u.hvm.mmio_hole_size;
+
+ if (max_ram_below_4g <= HVM_BELOW_4G_MMIO_START) {
+ localents[i++] = "platform/mmio_hole_size";
+ localents[i++] =
+ libxl__sprintf(gc, "%"PRIu64,
+ info->u.hvm.mmio_hole_size);
+ }
+ }
+ assert(i < 9);
break;
case LIBXL_DOMAIN_TYPE_PV:
@@ -460,6 +474,7 @@ int libxl__domain_build(libxl__gc *gc,
vments[i++] = "image/cmdline";
vments[i++] = (char *) state->pv_cmdline;
}
+ assert(i < 11);
break;
default:
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 103cbca..099a960 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -18,6 +18,7 @@
#include "libxl_osdeps.h" /* must come before any other headers */
#include "libxl_internal.h"
+#include <xen/hvm/e820.h>
static const char *libxl_tapif_script(libxl__gc *gc)
{
@@ -415,6 +416,7 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
const libxl_sdl_info *sdl = dm_sdl(guest_config);
const char *keymap = dm_keymap(guest_config);
+ char *machinearg;
flexarray_t *dm_args;
int i;
uint64_t ram_size;
@@ -696,10 +698,28 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
/* Switching here to the machine "pc" which does not add
* the xen-platform device instead of the default "xenfv" machine.
*/
- flexarray_append(dm_args, "pc,accel=xen");
+ machinearg = libxl__sprintf(gc, "pc,accel=xen");
} else {
- flexarray_append(dm_args, "xenfv");
+ machinearg = libxl__sprintf(gc, "xenfv");
}
+ if (b_info->u.hvm.mmio_hole_size) {
+ uint64_t max_ram_below_4g = (1ULL << 32) -
+ b_info->u.hvm.mmio_hole_size;
+
+ if (max_ram_below_4g > HVM_BELOW_4G_MMIO_START) {
+ LIBXL__LOG(ctx, LIBXL__LOG_WARNING,
+ "max-ram-below-4g=%"PRIu64
+ " (mmio_hole_size=%"PRIu64
+ ") too big > %u ignored.\n",
+ max_ram_below_4g,
+ b_info->u.hvm.mmio_hole_size,
+ HVM_BELOW_4G_MMIO_START);
+ } else {
+ machinearg = libxl__sprintf(gc, "%s,max-ram-below-4g=%"PRIu64,
+ machinearg, max_ram_below_4g);
+ }
+ }
+ flexarray_append(dm_args, machinearg);
for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
flexarray_append(dm_args, b_info->extra_hvm[i]);
break;
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 0dfdb08..a3a56f5 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -728,7 +728,8 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
goto out;
}
- ret = xc_hvm_build(ctx->xch, domid, &args);
+ ret = xc_hvm_build_with_hole(ctx->xch, domid, &args,
+ info->u.hvm.mmio_hole_size);
if (ret) {
LOGEV(ERROR, ret, "hvm building failed");
goto out;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index f1fcbc3..c912b18 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -378,6 +378,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
("timeoffset", string),
("hpet", libxl_defbool),
("vpt_align", libxl_defbool),
+ ("mmio_hole_size", UInt(64, init_val = 0)),
("timer_mode", libxl_timer_mode),
("nested_hvm", libxl_defbool),
("smbios_firmware", string),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 8a38077..6c74fa9 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -34,6 +34,7 @@
#include <ctype.h>
#include <inttypes.h>
#include <limits.h>
+#include <xen/hvm/e820.h>
#include "libxl.h"
#include "libxl_utils.h"
@@ -1036,6 +1037,18 @@ static void parse_config_data(const char *config_source,
xlu_cfg_get_defbool(config, "hpet", &b_info->u.hvm.hpet, 0);
xlu_cfg_get_defbool(config, "vpt_align", &b_info->u.hvm.vpt_align, 0);
+ if (!xlu_cfg_get_long(config, "mmio_hole_size", &l, 0)) {
+ b_info->u.hvm.mmio_hole_size = (uint64_t) l;
+ if (b_info->u.hvm.mmio_hole_size < HVM_BELOW_4G_MMIO_LENGTH ||
+ b_info->u.hvm.mmio_hole_size > HVM_BELOW_4G_MMIO_START) {
+ fprintf(stderr, "ERROR: invalid value 0x%"PRIx64" (%"PRIu64
+ ") for \"mmio_hole_size\"\n",
+ b_info->u.hvm.mmio_hole_size,
+ b_info->u.hvm.mmio_hole_size);
+ exit (1);
+ }
+ }
+
if (!xlu_cfg_get_long(config, "timer_mode", &l, 1)) {
const char *s = libxl_timer_mode_to_string(l);
fprintf(stderr, "WARNING: specifying \"timer_mode\" as an integer is deprecated. "
--
1.8.4
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-09-11 16:20 ` [PATCH v5 1/1] Add mmio_hole_size Don Slutz
@ 2014-09-30 1:27 ` Boris Ostrovsky
2014-09-30 13:22 ` George Dunlap
2014-10-01 9:39 ` Slutz, Donald Christopher
2014-09-30 13:15 ` George Dunlap
From: Boris Ostrovsky @ 2014-09-30 1:27 UTC
To: Don Slutz, xen-devel
Cc: Ian Jackson, Ian Campbell, Jan Beulich, Stefano Stabellini
On 09/11/2014 12:20 PM, Don Slutz wrote:
> If you add enough PCI devices then not all of their MMIO regions may
> fit below 4G, which may not be the layout the user wanted. This option
> lets you increase the address space below 4G that PCI devices can use,
> so that in more cases no MMIO ends up above 4G.
>
> There are real PCI cards that do not support MMIO above 4G, so if you
> want to emulate them precisely you may also need to increase the
> space below 4G for them. Some drivers for these cards also do not
> work if their MMIO space is mapped above 4G.
>
> This allows growing the MMIO hole to the size needed.
>
> This may help with PCI passthrough to HVM guests.
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
> v5:
> Add LIBXL_HAVE_BUILDINFO_HVM_MMIO_HOLE_SIZE.
> v4:
> s/pci_hole_min_size/mmio_hole_size/
> s/HVM_BELOW_4G_RAM_END/HVM_BELOW_4G_MMIO_START/
> Add ignore to hvmloader message
> Adjust commit message
> Dropped min; stopped mmio_hole_size changing in hvmloader
>
> docs/man/xl.cfg.pod.5 | 12 +++++++
> tools/firmware/hvmloader/config.h | 1 -
>
> tools/firmware/hvmloader/pci.c | 71 +++++++++++++++++++++++++++------------
> tools/libxc/xc_hvm_build_x86.c | 30 +++++++++++++++--
> tools/libxc/xenguest.h | 11 ++++++
> tools/libxl/libxl.h | 5 +++
> tools/libxl/libxl_create.c | 29 ++++++++++++----
> tools/libxl/libxl_dm.c | 24 +++++++++++--
> tools/libxl/libxl_dom.c | 3 +-
> tools/libxl/libxl_types.idl | 1 +
> tools/libxl/xl_cmdimpl.c | 13 +++++++
> 11 files changed, 166 insertions(+), 34 deletions(-)
>
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index 517ae2f..45f7857 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -1111,6 +1111,18 @@ wallclock (i.e., real) time.
>
> =back
>
> +=head3 Memory layout
> +
> +=over 4
> +
> +=item B<mmio_hole_size=BYTES>
> +
> +Specifies the size the MMIO hole below 4GiB will be. Only valid for
> +device_model_version = "qemu-xen". Note: this requires QEMU 2.1.0
> +or later.
Can you add text describing restrictions on hole size? Not smaller than
256MB, I think.
> +
> +=back
> +
> =head3 Support for Paravirtualisation of HVM Guests
>
> The following options allow Paravirtualised features (such as devices)
> diff --git a/tools/firmware/hvmloader/config.h b/tools/firmware/hvmloader/config.h
> index 80bea46..b838cf9 100644
> --- a/tools/firmware/hvmloader/config.h
> +++ b/tools/firmware/hvmloader/config.h
> @@ -53,7 +53,6 @@ extern struct bios_config ovmf_config;
> #define PCI_ISA_IRQ_MASK 0x0c20U /* ISA IRQs 5,10,11 are PCI connected */
>
> /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
> -#define PCI_MEM_START 0xf0000000
> #define PCI_MEM_END 0xfc000000
>
> extern unsigned long pci_mem_start, pci_mem_end;
> diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
> index 3712988..e5cd2e3 100644
> --- a/tools/firmware/hvmloader/pci.c
> +++ b/tools/firmware/hvmloader/pci.c
> @@ -28,9 +28,10 @@
> #include <xen/memory.h>
> #include <xen/hvm/ioreq.h>
> #include <xen/hvm/hvm_xs_strings.h>
> +#include <xen/hvm/e820.h>
> #include <stdbool.h>
>
> -unsigned long pci_mem_start = PCI_MEM_START;
> +unsigned long pci_mem_start = HVM_BELOW_4G_MMIO_START;
> unsigned long pci_mem_end = PCI_MEM_END;
> uint64_t pci_hi_mem_start = 0, pci_hi_mem_end = 0;
>
> @@ -59,6 +60,7 @@ void pci_setup(void)
> uint64_t bar_sz;
> } *bars = (struct bars *)scratch_start;
> unsigned int i, nr_bars = 0;
> + uint64_t mmio_hole_size = 0;
>
> const char *s;
> /*
> @@ -86,6 +88,10 @@ void pci_setup(void)
> printf("Relocating guest memory for lowmem MMIO space %s\n",
> allow_memory_relocate?"enabled":"disabled");
>
> + s = xenstore_read("platform/mmio_hole_size", NULL);
> + if ( s )
> + mmio_hole_size = strtoll(s, NULL, 0);
> +
> /* Program PCI-ISA bridge with appropriate link routes. */
> isa_irq = 0;
> for ( link = 0; link < 4; link++ )
> @@ -237,26 +243,49 @@ void pci_setup(void)
> pci_writew(devfn, PCI_COMMAND, cmd);
> }
>
> - /*
> - * At the moment qemu-xen can't deal with relocated memory regions.
> - * It's too close to the release to make a proper fix; for now,
> - * only allow the MMIO hole to grow large enough to move guest memory
> - * if we're running qemu-traditional. Items that don't fit will be
> - * relocated into the 64-bit address space.
> - *
> - * This loop now does the following:
> - * - If allow_memory_relocate, increase the MMIO hole until it's
> - * big enough, or until it's 2GiB
> - * - If !allow_memory_relocate, increase the MMIO hole until it's
> - * big enough, or until it's 2GiB, or until it overlaps guest
> - * memory
> - */
> - while ( (mmio_total > (pci_mem_end - pci_mem_start))
> - && ((pci_mem_start << 1) != 0)
> - && (allow_memory_relocate
> - || (((pci_mem_start << 1) >> PAGE_SHIFT)
> - >= hvm_info->low_mem_pgend)) )
> - pci_mem_start <<= 1;
> + if ( mmio_hole_size )
> + {
> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
> +
> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
> + {
> + printf("max_ram_below_4g=0x"PRIllx
> + " too big for mmio_hole_size=0x"PRIllx
> + " has been ignored.\n",
> + PRIllx_arg(max_ram_below_4g),
> + PRIllx_arg(mmio_hole_size));
> + }
Do you need to check whether the hole is too large?
Here and in the toolstack.
> + else
> + {
> + pci_mem_start = max_ram_below_4g;
> + printf("pci_mem_start=0x%lx (was 0x%x) for mmio_hole_size=%lu\n",
> + pci_mem_start, HVM_BELOW_4G_MMIO_START,
> + (long)mmio_hole_size);
> + }
> + }
> + else
> + {
> + /*
> + * At the moment qemu-xen can't deal with relocated memory regions.
> + * It's too close to the release to make a proper fix; for now,
> + * only allow the MMIO hole to grow large enough to move guest memory
> + * if we're running qemu-traditional. Items that don't fit will be
> + * relocated into the 64-bit address space.
> + *
> + * This loop now does the following:
> + * - If allow_memory_relocate, increase the MMIO hole until it's
> + * big enough, or until it's 2GiB
> + * - If !allow_memory_relocate, increase the MMIO hole until it's
> + * big enough, or until it's 2GiB, or until it overlaps guest
> + * memory
> + */
> + while ( (mmio_total > (pci_mem_end - pci_mem_start))
> + && ((pci_mem_start << 1) != 0)
> + && (allow_memory_relocate
> + || (((pci_mem_start << 1) >> PAGE_SHIFT)
> + >= hvm_info->low_mem_pgend)) )
> + pci_mem_start <<= 1;
> + }
>
> if ( mmio_total > (pci_mem_end - pci_mem_start) )
> {
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index c81a25b..94da7fa 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -628,14 +628,40 @@ int xc_hvm_build_target_mem(xc_interface *xch,
> int target,
> const char *image_name)
> {
> + return xc_hvm_build_target_mem_with_hole(xch, domid, memsize, target,
> + image_name, 0);
> +}
> +
> +int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
> + struct xc_hvm_build_args *args,
> + uint64_t mmio_hole_size)
> +{
> + if (mmio_hole_size)
> + {
> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
> +
> + if (max_ram_below_4g < HVM_BELOW_4G_MMIO_START)
"<=", to be consistent with the check in
libxl__domain_create_info_setdefault ().
> + {
> + args->mmio_size = mmio_hole_size;
> + }
> + }
> + return xc_hvm_build(xch, domid, args);
> +}
> +
> +int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
> + uint32_t domid,
> + int memsize,
> + int target,
> + const char *image_name,
> + uint64_t mmio_hole_size)
> +{
> struct xc_hvm_build_args args = {};
>
> memset(&args, 0, sizeof(struct xc_hvm_build_args));
> args.mem_size = (uint64_t)memsize << 20;
> args.mem_target = (uint64_t)target << 20;
> args.image_file_name = image_name;
> -
> - return xc_hvm_build(xch, domid, &args);
> + return xc_hvm_build_with_hole(xch, domid, &args, mmio_hole_size);
> }
>
> /*
> diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
> index 40bbac8..d35f38d 100644
> --- a/tools/libxc/xenguest.h
> +++ b/tools/libxc/xenguest.h
> @@ -244,12 +244,23 @@ struct xc_hvm_build_args {
> int xc_hvm_build(xc_interface *xch, uint32_t domid,
> struct xc_hvm_build_args *hvm_args);
>
> +int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
> + struct xc_hvm_build_args *args,
> + uint64_t mmio_hole_size);
> +
> int xc_hvm_build_target_mem(xc_interface *xch,
> uint32_t domid,
> int memsize,
> int target,
> const char *image_name);
>
> +int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
> + uint32_t domid,
> + int memsize,
> + int target,
> + const char *image_name,
> + uint64_t mmio_hole_size);
> +
> /*
> * Sets *lockfd to -1.
> * Has deallocated everything even on error.
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index dab3a67..e43d2d2 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -138,6 +138,11 @@
> #define LIBXL_HAVE_BUILDINFO_IOMEM_START_GFN 1
>
> /*
> + * libxl_domain_build_info has the u.hvm.mmio_hole_size field.
> + */
> +#define LIBXL_HAVE_BUILDINFO_HVM_MMIO_HOLE_SIZE 1
> +
> +/*
> * libxl ABI compatibility
> *
> * The only guarantee which libxl makes regarding ABI compatibility
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index b36c719..7383053 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -23,6 +23,7 @@
> #include <xc_dom.h>
> #include <xenguest.h>
> #include <xen/hvm/hvm_info_table.h>
> +#include <xen/hvm/e820.h>
>
> int libxl__domain_create_info_setdefault(libxl__gc *gc,
> libxl_domain_create_info *c_info)
> @@ -428,13 +429,26 @@ int libxl__domain_build(libxl__gc *gc,
> vments[4] = "start_time";
> vments[5] = libxl__sprintf(gc, "%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
>
> - localents = libxl__calloc(gc, 7, sizeof(char *));
> - localents[0] = "platform/acpi";
> - localents[1] = libxl_defbool_val(info->u.hvm.acpi) ? "1" : "0";
> - localents[2] = "platform/acpi_s3";
> - localents[3] = libxl_defbool_val(info->u.hvm.acpi_s3) ? "1" : "0";
> - localents[4] = "platform/acpi_s4";
> - localents[5] = libxl_defbool_val(info->u.hvm.acpi_s4) ? "1" : "0";
> + localents = libxl__calloc(gc, 9, sizeof(char *));
> + i = 0;
> + localents[i++] = "platform/acpi";
> + localents[i++] = libxl_defbool_val(info->u.hvm.acpi) ? "1" : "0";
> + localents[i++] = "platform/acpi_s3";
> + localents[i++] = libxl_defbool_val(info->u.hvm.acpi_s3) ? "1" : "0";
> + localents[i++] = "platform/acpi_s4";
> + localents[i++] = libxl_defbool_val(info->u.hvm.acpi_s4) ? "1" : "0";
> + if (info->u.hvm.mmio_hole_size) {
> + uint64_t max_ram_below_4g =
> + (1ULL << 32) - info->u.hvm.mmio_hole_size;
> +
> + if (max_ram_below_4g <= HVM_BELOW_4G_MMIO_START) {
You seem to be making various checks on hole size in multiple places ---
here, in libxl__build_device_model_args_new() and in
parse_config_data(). (And, to some degree, in xc_hvm_build_with_hole()).
Are they all necessary?
-boris
> + localents[i++] = "platform/mmio_hole_size";
> + localents[i++] =
> + libxl__sprintf(gc, "%"PRIu64,
> + info->u.hvm.mmio_hole_size);
> + }
> + }
> + assert(i < 9);
>
> break;
> case LIBXL_DOMAIN_TYPE_PV:
> @@ -460,6 +474,7 @@ int libxl__domain_build(libxl__gc *gc,
> vments[i++] = "image/cmdline";
> vments[i++] = (char *) state->pv_cmdline;
> }
> + assert(i < 11);
>
> break;
> default:
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index 103cbca..099a960 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -18,6 +18,7 @@
> #include "libxl_osdeps.h" /* must come before any other headers */
>
> #include "libxl_internal.h"
> +#include <xen/hvm/e820.h>
>
> static const char *libxl_tapif_script(libxl__gc *gc)
> {
> @@ -415,6 +416,7 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
> const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
> const libxl_sdl_info *sdl = dm_sdl(guest_config);
> const char *keymap = dm_keymap(guest_config);
> + char *machinearg;
> flexarray_t *dm_args;
> int i;
> uint64_t ram_size;
> @@ -696,10 +698,28 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
> /* Switching here to the machine "pc" which does not add
> * the xen-platform device instead of the default "xenfv" machine.
> */
> - flexarray_append(dm_args, "pc,accel=xen");
> + machinearg = libxl__sprintf(gc, "pc,accel=xen");
> } else {
> - flexarray_append(dm_args, "xenfv");
> + machinearg = libxl__sprintf(gc, "xenfv");
> }
> + if (b_info->u.hvm.mmio_hole_size) {
> + uint64_t max_ram_below_4g = (1ULL << 32) -
> + b_info->u.hvm.mmio_hole_size;
> +
> + if (max_ram_below_4g > HVM_BELOW_4G_MMIO_START) {
> + LIBXL__LOG(ctx, LIBXL__LOG_WARNING,
> + "max-ram-below-4g=%"PRIu64
> + " (mmio_hole_size=%"PRIu64
> + ") too big > %u ignored.\n",
> + max_ram_below_4g,
> + b_info->u.hvm.mmio_hole_size,
> + HVM_BELOW_4G_MMIO_START);
> + } else {
> + machinearg = libxl__sprintf(gc, "%s,max-ram-below-4g=%"PRIu64,
> + machinearg, max_ram_below_4g);
> + }
> + }
> + flexarray_append(dm_args, machinearg);
> for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
> flexarray_append(dm_args, b_info->extra_hvm[i]);
> break;
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 0dfdb08..a3a56f5 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -728,7 +728,8 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
> goto out;
> }
>
> - ret = xc_hvm_build(ctx->xch, domid, &args);
> + ret = xc_hvm_build_with_hole(ctx->xch, domid, &args,
> + info->u.hvm.mmio_hole_size);
> if (ret) {
> LOGEV(ERROR, ret, "hvm building failed");
> goto out;
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index f1fcbc3..c912b18 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -378,6 +378,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
> ("timeoffset", string),
> ("hpet", libxl_defbool),
> ("vpt_align", libxl_defbool),
> + ("mmio_hole_size", UInt(64, init_val = 0)),
> ("timer_mode", libxl_timer_mode),
> ("nested_hvm", libxl_defbool),
> ("smbios_firmware", string),
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 8a38077..6c74fa9 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -34,6 +34,7 @@
> #include <ctype.h>
> #include <inttypes.h>
> #include <limits.h>
> +#include <xen/hvm/e820.h>
>
> #include "libxl.h"
> #include "libxl_utils.h"
> @@ -1036,6 +1037,18 @@ static void parse_config_data(const char *config_source,
> xlu_cfg_get_defbool(config, "hpet", &b_info->u.hvm.hpet, 0);
> xlu_cfg_get_defbool(config, "vpt_align", &b_info->u.hvm.vpt_align, 0);
>
> + if (!xlu_cfg_get_long(config, "mmio_hole_size", &l, 0)) {
> + b_info->u.hvm.mmio_hole_size = (uint64_t) l;
> + if (b_info->u.hvm.mmio_hole_size < HVM_BELOW_4G_MMIO_LENGTH ||
> + b_info->u.hvm.mmio_hole_size > HVM_BELOW_4G_MMIO_START) {
> + fprintf(stderr, "ERROR: invalid value 0x%"PRIx64" (%"PRIu64
> + ") for \"mmio_hole_size\"\n",
> + b_info->u.hvm.mmio_hole_size,
> + b_info->u.hvm.mmio_hole_size);
> + exit (1);
> + }
> + }
> +
> if (!xlu_cfg_get_long(config, "timer_mode", &l, 1)) {
> const char *s = libxl_timer_mode_to_string(l);
> fprintf(stderr, "WARNING: specifying \"timer_mode\" as an integer is deprecated. "
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-09-11 16:20 ` [PATCH v5 1/1] Add mmio_hole_size Don Slutz
2014-09-30 1:27 ` Boris Ostrovsky
@ 2014-09-30 13:15 ` George Dunlap
2014-10-01 16:33 ` Don Slutz
From: George Dunlap @ 2014-09-30 13:15 UTC
To: Don Slutz
Cc: Ian Campbell, Stefano Stabellini, Ian Jackson,
xen-devel@lists.xen.org, Jan Beulich, Boris Ostrovsky
On Thu, Sep 11, 2014 at 5:20 PM, Don Slutz <dslutz@verizon.com> wrote:
> If you add enough PCI devices then not all of their MMIO regions may
> fit below 4G, which may not be the layout the user wanted. This option
> lets you increase the address space below 4G that PCI devices can use,
> so that in more cases no MMIO ends up above 4G.
>
> There are real PCI cards that do not support MMIO above 4G, so if you
> want to emulate them precisely you may also need to increase the
> space below 4G for them. Some drivers for these cards also do not
> work if their MMIO space is mapped above 4G.
>
> This allows growing the MMIO hole to the size needed.
>
> This may help with PCI passthrough to HVM guests.
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>
FYI you'll have to rebase this since it doesn't apply cleanly anymore.
Also, I've been meaning to look at this for months; sorry it's taken so long.
Overall looks fine, just a few comments:
> @@ -237,26 +243,49 @@ void pci_setup(void)
> pci_writew(devfn, PCI_COMMAND, cmd);
> }
>
> - /*
> - * At the moment qemu-xen can't deal with relocated memory regions.
> - * It's too close to the release to make a proper fix; for now,
> - * only allow the MMIO hole to grow large enough to move guest memory
> - * if we're running qemu-traditional. Items that don't fit will be
> - * relocated into the 64-bit address space.
> - *
> - * This loop now does the following:
> - * - If allow_memory_relocate, increase the MMIO hole until it's
> - * big enough, or until it's 2GiB
> - * - If !allow_memory_relocate, increase the MMIO hole until it's
> - * big enough, or until it's 2GiB, or until it overlaps guest
> - * memory
> - */
> - while ( (mmio_total > (pci_mem_end - pci_mem_start))
> - && ((pci_mem_start << 1) != 0)
> - && (allow_memory_relocate
> - || (((pci_mem_start << 1) >> PAGE_SHIFT)
> - >= hvm_info->low_mem_pgend)) )
> - pci_mem_start <<= 1;
> + if ( mmio_hole_size )
> + {
> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
> +
> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
> + {
> + printf("max_ram_below_4g=0x"PRIllx
> + " too big for mmio_hole_size=0x"PRIllx
> + " has been ignored.\n",
> + PRIllx_arg(max_ram_below_4g),
> + PRIllx_arg(mmio_hole_size));
> + }
> + else
> + {
> + pci_mem_start = max_ram_below_4g;
> + printf("pci_mem_start=0x%lx (was 0x%x) for mmio_hole_size=%lu\n",
> + pci_mem_start, HVM_BELOW_4G_MMIO_START,
> + (long)mmio_hole_size);
> + }
> + }
> + else
> + {
> + /*
> + * At the moment qemu-xen can't deal with relocated memory regions.
> + * It's too close to the release to make a proper fix; for now,
> + * only allow the MMIO hole to grow large enough to move guest memory
> + * if we're running qemu-traditional. Items that don't fit will be
> + * relocated into the 64-bit address space.
> + *
> + * This loop now does the following:
> + * - If allow_memory_relocate, increase the MMIO hole until it's
> + * big enough, or until it's 2GiB
> + * - If !allow_memory_relocate, increase the MMIO hole until it's
> + * big enough, or until it's 2GiB, or until it overlaps guest
> + * memory
> + */
> + while ( (mmio_total > (pci_mem_end - pci_mem_start))
> + && ((pci_mem_start << 1) != 0)
> + && (allow_memory_relocate
> + || (((pci_mem_start << 1) >> PAGE_SHIFT)
> + >= hvm_info->low_mem_pgend)) )
> + pci_mem_start <<= 1;
> + }
I don't think these need to be disjoint. There's no reason you
couldn't set the size for the default, and then allow the code to make
it bigger for guests which allow that.
But if this needs to be rushed to get in, this is fine with me.
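A rough sketch of that non-disjoint variant, for concreteness: seed
pci_mem_start from the configured hole, then keep the existing growth
loop. (This is not part of the patch, just an illustration of the
suggestion:)

    if ( mmio_hole_size )
    {
        uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;

        /* Seed the hole from the config when the value is sane. */
        if ( max_ram_below_4g <= HVM_BELOW_4G_MMIO_START )
            pci_mem_start = max_ram_below_4g;
    }
    /* Existing loop: grow the hole further if the BARs still don't
     * fit and the relocation rules permit it. */
    while ( (mmio_total > (pci_mem_end - pci_mem_start))
            && ((pci_mem_start << 1) != 0)
            && (allow_memory_relocate
                || (((pci_mem_start << 1) >> PAGE_SHIFT)
                    >= hvm_info->low_mem_pgend)) )
        pci_mem_start <<= 1;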
>
> if ( mmio_total > (pci_mem_end - pci_mem_start) )
> {
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index c81a25b..94da7fa 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -628,14 +628,40 @@ int xc_hvm_build_target_mem(xc_interface *xch,
> int target,
> const char *image_name)
> {
> + return xc_hvm_build_target_mem_with_hole(xch, domid, memsize, target,
> + image_name, 0);
> +}
> +
> +int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
> + struct xc_hvm_build_args *args,
> + uint64_t mmio_hole_size)
> +{
> + if (mmio_hole_size)
> + {
> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
> +
> + if (max_ram_below_4g < HVM_BELOW_4G_MMIO_START)
> + {
> + args->mmio_size = mmio_hole_size;
> + }
> + }
> + return xc_hvm_build(xch, domid, args);
> +}
> +
> +int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
> + uint32_t domid,
> + int memsize,
> + int target,
> + const char *image_name,
> + uint64_t mmio_hole_size)
> +{
> struct xc_hvm_build_args args = {};
>
> memset(&args, 0, sizeof(struct xc_hvm_build_args));
> args.mem_size = (uint64_t)memsize << 20;
> args.mem_target = (uint64_t)target << 20;
> args.image_file_name = image_name;
> -
> - return xc_hvm_build(xch, domid, &args);
> + return xc_hvm_build_with_hole(xch, domid, &args, mmio_hole_size);
> }
>
> /*
> diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
> index 40bbac8..d35f38d 100644
> --- a/tools/libxc/xenguest.h
> +++ b/tools/libxc/xenguest.h
> @@ -244,12 +244,23 @@ struct xc_hvm_build_args {
> int xc_hvm_build(xc_interface *xch, uint32_t domid,
> struct xc_hvm_build_args *hvm_args);
>
> +int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
> + struct xc_hvm_build_args *args,
> + uint64_t mmio_hole_size);
> +
> int xc_hvm_build_target_mem(xc_interface *xch,
> uint32_t domid,
> int memsize,
> int target,
> const char *image_name);
>
> +int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
> + uint32_t domid,
> + int memsize,
> + int target,
> + const char *image_name,
> + uint64_t mmio_hole_size);
Why on earth do we need all of these extra functions? Particularly
ones like xc_hvm_build_target_mem_with_hole(), which isn't even called
by anyone, AFAICT?
libxc isn't a stable interface -- can't we just have the callers set
mmio_size in hvm_args and have them call xc_hvm_build()?
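(A minimal sketch of that simplification in libxl__build_hvm(),
assuming the bounds are validated at the libxl/xl layers as discussed
elsewhere in this thread: set mmio_size on the args already built up
there, and call plain xc_hvm_build():)

    if (info->u.hvm.mmio_hole_size) {
        uint64_t max_ram_below_4g =
            (1ULL << 32) - info->u.hvm.mmio_hole_size;

        /* Only honour values that leave RAM ending at or below the
         * default MMIO start. */
        if (max_ram_below_4g <= HVM_BELOW_4G_MMIO_START)
            args.mmio_size = info->u.hvm.mmio_hole_size;
    }
    ret = xc_hvm_build(ctx->xch, domid, &args);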
> @@ -696,10 +698,28 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
> /* Switching here to the machine "pc" which does not add
> * the xen-platform device instead of the default "xenfv" machine.
> */
> - flexarray_append(dm_args, "pc,accel=xen");
> + machinearg = libxl__sprintf(gc, "pc,accel=xen");
> } else {
> - flexarray_append(dm_args, "xenfv");
> + machinearg = libxl__sprintf(gc, "xenfv");
> }
> + if (b_info->u.hvm.mmio_hole_size) {
> + uint64_t max_ram_below_4g = (1ULL << 32) -
> + b_info->u.hvm.mmio_hole_size;
> +
> + if (max_ram_below_4g > HVM_BELOW_4G_MMIO_START) {
> + LIBXL__LOG(ctx, LIBXL__LOG_WARNING,
> + "max-ram-below-4g=%"PRIu64
> + " (mmio_hole_size=%"PRIu64
> + ") too big > %u ignored.\n",
> + max_ram_below_4g,
> + b_info->u.hvm.mmio_hole_size,
> + HVM_BELOW_4G_MMIO_START);
> + } else {
> + machinearg = libxl__sprintf(gc, "%s,max-ram-below-4g=%"PRIu64,
> + machinearg, max_ram_below_4g);
> + }
> + }
What happens if the version of qemu that's installed doesn't support
max-ram-below-4g?
-George
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-09-30 1:27 ` Boris Ostrovsky
@ 2014-09-30 13:22 ` George Dunlap
2014-09-30 15:36 ` Boris Ostrovsky
2014-10-01 9:11 ` Slutz, Donald Christopher
2014-10-01 9:39 ` Slutz, Donald Christopher
From: George Dunlap @ 2014-09-30 13:22 UTC
To: Boris Ostrovsky
Cc: Ian Campbell, Stefano Stabellini, Ian Jackson, Don Slutz,
xen-devel@lists.xen.org, Jan Beulich
On Tue, Sep 30, 2014 at 2:27 AM, Boris Ostrovsky
<boris.ostrovsky@oracle.com> wrote:
>> + if ( mmio_hole_size )
>> + {
>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>> +
>> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>> + {
>> + printf("max_ram_below_4g=0x"PRIllx
>> + " too big for mmio_hole_size=0x"PRIllx
>> + " has been ignored.\n",
>> + PRIllx_arg(max_ram_below_4g),
>> + PRIllx_arg(mmio_hole_size));
>> + }
>
>
> Do you need to check whether the hole is too large?
>
> Here and in the toolstack.
How large is too large? I've seen real machines with a 3GiB memory hole...
>> "0";
>> + if (info->u.hvm.mmio_hole_size) {
>> + uint64_t max_ram_below_4g =
>> + (1ULL << 32) - info->u.hvm.mmio_hole_size;
>> +
>> + if (max_ram_below_4g <= HVM_BELOW_4G_MMIO_START) {
>
>
> You seem to be making various checks on hole size in multiple places ---
> here, in libxl__build_device_model_args_new() and in parse_config_data().
> (And, to some degree, in xc_hvm_build_with_hole()). Are they all necessary?
I think we should get rid of xc_hvm_build_with_hole() entirely, but I
think we want a check everywhere else. The key question is when the
failure is going to happen ideally. You absolutely have to check it
at the bottom level (hvmloader). But you don't want to go through all
the hassle of setting up the domain and starting to boot it only to
find that some silly parameter is messed up, so it should also be
checked in libxl (since many people will be going through only libxl).
But if you're using xl and get an error at the libxl layer, the error
message is often a bit cryptic. So it makes sense to have it at the
parsing level too.
-George
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-09-30 13:22 ` George Dunlap
@ 2014-09-30 15:36 ` Boris Ostrovsky
2014-09-30 16:06 ` George Dunlap
2014-10-01 9:11 ` Slutz, Donald Christopher
From: Boris Ostrovsky @ 2014-09-30 15:36 UTC
To: George Dunlap
Cc: Ian Campbell, Stefano Stabellini, Ian Jackson, Don Slutz,
xen-devel@lists.xen.org, Jan Beulich
On 09/30/2014 09:22 AM, George Dunlap wrote:
> On Tue, Sep 30, 2014 at 2:27 AM, Boris Ostrovsky
> <boris.ostrovsky@oracle.com> wrote:
>>> + if ( mmio_hole_size )
>>> + {
>>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>>> +
>>> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>>> + {
>>> + printf("max_ram_below_4g=0x"PRIllx
>>> + " too big for mmio_hole_size=0x"PRIllx
>>> + " has been ignored.\n",
>>> + PRIllx_arg(max_ram_below_4g),
>>> + PRIllx_arg(mmio_hole_size));
>>> + }
>>
>> Do you need to check whether the hole is too large?
>>
>> Here and in the toolstack.
> How large is too large? I've seen real machines with a 3GiB memory hole...
But if mmio_hole_size is set to, say, 4GB we can't expect anything good,
right? I don't know what the reasonable upper limit on it should be but
it seems to me we need one. Especially given that it's a uint64_t.
-boris
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-09-30 15:36 ` Boris Ostrovsky
@ 2014-09-30 16:06 ` George Dunlap
2014-10-01 9:31 ` Slutz, Donald Christopher
From: George Dunlap @ 2014-09-30 16:06 UTC
To: Boris Ostrovsky
Cc: Ian Campbell, Stefano Stabellini, Ian Jackson, Don Slutz,
xen-devel@lists.xen.org, Jan Beulich
On Tue, Sep 30, 2014 at 4:36 PM, Boris Ostrovsky
<boris.ostrovsky@oracle.com> wrote:
>
> On 09/30/2014 09:22 AM, George Dunlap wrote:
>>
>> On Tue, Sep 30, 2014 at 2:27 AM, Boris Ostrovsky
>> <boris.ostrovsky@oracle.com> wrote:
>>>>
>>>> + if ( mmio_hole_size )
>>>> + {
>>>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>>>> +
>>>> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>>>> + {
>>>> + printf("max_ram_below_4g=0x"PRIllx
>>>> + " too big for mmio_hole_size=0x"PRIllx
>>>> + " has been ignored.\n",
>>>> + PRIllx_arg(max_ram_below_4g),
>>>> + PRIllx_arg(mmio_hole_size));
>>>> + }
>>>
>>>
>>> Do you need to check whether the hole is too large?
>>>
>>> Here and in the toolstack.
>>
>> How large is too large? I've seen real machines with a 3GiB memory
>> hole...
>
>
>
> But if mmio_hole_size is set to, say, 4GB we can't expect anything good,
> right? I don't know what the reasonable upper limit on it should be but it
> seems to me we need one. Especially given that it's a uint64_t.
Well the worst it can do is cause the guest to crash, I think, isn't it? :-)
If mmio_hole_size > 4G, then in the check above, max_ram_below_4g will
wrap around and end up higher than HVM_BELOW_4G_MMIO_START, and
nothing will be done.
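A worked example of that wrap-around, using an out-of-range 5GiB hole
(sketch only, with the constants from the patch):

    uint64_t mmio_hole_size   = 5ULL << 30;           /* 0x140000000 */
    uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
    /* 0x100000000 - 0x140000000 wraps to 0xFFFFFFFFC0000000, which
     * is > HVM_BELOW_4G_MMIO_START (0xF0000000), so hvmloader prints
     * the "too big ... ignored" message and keeps the default. */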
-George
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-09-30 13:22 ` George Dunlap
2014-09-30 15:36 ` Boris Ostrovsky
@ 2014-10-01 9:11 ` Slutz, Donald Christopher
2014-10-01 9:45 ` George Dunlap
From: Slutz, Donald Christopher @ 2014-10-01 9:11 UTC
To: George Dunlap
Cc: Ian Campbell, Stefano Stabellini, Ian Jackson,
xen-devel@lists.xen.org, Jan Beulich, Boris Ostrovsky
On 09/30/14 09:22, George Dunlap wrote:
> On Tue, Sep 30, 2014 at 2:27 AM, Boris Ostrovsky
> <boris.ostrovsky@oracle.com> wrote:
>>> + if ( mmio_hole_size )
>>> + {
>>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>>> +
>>> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>>> + {
>>> + printf("max_ram_below_4g=0x"PRIllx
>>> + " too big for mmio_hole_size=0x"PRIllx
>>> + " has been ignored.\n",
>>> + PRIllx_arg(max_ram_below_4g),
>>> + PRIllx_arg(mmio_hole_size));
>>> + }
>>
>> Do you need to check whether the hole is too large?
>>
>> Here and in the toolstack.
> How large is too large? I've seen real machines with a 3GiB memory hole...
>
>>> "0";
>>> + if (info->u.hvm.mmio_hole_size) {
>>> + uint64_t max_ram_below_4g =
>>> + (1ULL << 32) - info->u.hvm.mmio_hole_size;
>>> +
>>> + if (max_ram_below_4g <= HVM_BELOW_4G_MMIO_START) {
>>
>> You seem to be making various checks on hole size in multiple places ---
>> here, in libxl__build_device_model_args_new() and in parse_config_data().
>> (And, to some degree, in xc_hvm_build_with_hole()). Are they all necessary?
> I think we should get rid of xc_hvm_build_with_hole() entirely, but I
> think we want a check everywhere else. The key question is when the
> failure is going to happen ideally. You absolutely have to check it
> at the bottom level (hvmloader). But you don't want to go through all
> the hassle of setting up the domain and starting to boot it only to
> find that some silly parameter is messed up, so it should also be
> checked in libxl (since many people will be going through only libxl).
> But if you're using xl and get an error at the libxl layer, the error
> message is often a bit cryptic. So it makes sense to have it at the
> parsing level too.
>
> -George
Ok, I will look into removing xc_hvm_build_with_hole() entirely.
The rest of the statement looks like stuff I should have had in the
commit message. Are you fine with me adjusting and adding it?
-Don Slutz
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-09-30 16:06 ` George Dunlap
@ 2014-10-01 9:31 ` Slutz, Donald Christopher
From: Slutz, Donald Christopher @ 2014-10-01 9:31 UTC
To: George Dunlap
Cc: Ian Campbell, Stefano Stabellini, Ian Jackson,
xen-devel@lists.xen.org, Jan Beulich, Boris Ostrovsky
On 09/30/14 12:06, George Dunlap wrote:
> On Tue, Sep 30, 2014 at 4:36 PM, Boris Ostrovsky
> <boris.ostrovsky@oracle.com> wrote:
>> On 09/30/2014 09:22 AM, George Dunlap wrote:
>>> On Tue, Sep 30, 2014 at 2:27 AM, Boris Ostrovsky
>>> <boris.ostrovsky@oracle.com> wrote:
>>>>> + if ( mmio_hole_size )
>>>>> + {
>>>>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>>>>> +
>>>>> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>>>>> + {
>>>>> + printf("max_ram_below_4g=0x"PRIllx
>>>>> + " too big for mmio_hole_size=0x"PRIllx
>>>>> + " has been ignored.\n",
>>>>> + PRIllx_arg(max_ram_below_4g),
>>>>> + PRIllx_arg(mmio_hole_size));
>>>>> + }
>>>>
>>>> Do you need to check whether the hole is too large?
>>>>
>>>> Here and in the toolstack.
>>> How large is too large? I've seen real machines with a 3GiB memory
>>> hole...
It is hard to answer this question. Clearly, not leaving enough RAM to
load and run the BIOS is bad. Then there is the real-mode size of 16MiB.
Next on the list: how much does the guest expect? Does it change
based on max RAM?
Also, just because Linux does not like a given size, something else
may still work fine.
In my testing, a 3GiB hole works fine. However, I did not test a guest
> 64GiB with the 3GiB hole. Also, small machines (< 8GiB) worked fine
with a 3.75GiB hole under CentOS 5.3...
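For concreteness, the 3GiB case above works out as follows (sketch):

    mmio_hole_size   = 3GiB = 0xC0000000
    max_ram_below_4g = 2^32 - 0xC0000000 = 0x40000000 (1GiB)
    0x40000000 <= HVM_BELOW_4G_MMIO_START (0xF0000000), so hvmloader
    accepts it and sets pci_mem_start = 0x40000000.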
>>
>>
>> But if mmio_hole_size is set to, say, 4GB we can't expect anything good,
>> right? I don't know what the reasonable upper limit on it should be but it
>> seems to me we need one. Especially given that it's a uint64_t.
> Well the worst it can do is cause the guest to crash, I think, isn't it? :-)
It might not even get far enough to boot (grub fails without a
message)... which is a form of crash.
> If mmio_hole_size > 4G, then in the check above, max_ram_below_4g will
> wrap around and end up higher than HVM_BELOW_4G_MMIO_START, and
> nothing will be done.
Yup.
-Don Slutz
> -George
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-09-30 1:27 ` Boris Ostrovsky
2014-09-30 13:22 ` George Dunlap
@ 2014-10-01 9:39 ` Slutz, Donald Christopher
From: Slutz, Donald Christopher @ 2014-10-01 9:39 UTC
To: Boris Ostrovsky
Cc: Stefano Stabellini, Ian Jackson, Ian Campbell, Jan Beulich,
xen-devel@lists.xen.org
On 09/29/14 21:27, Boris Ostrovsky wrote:
>
> On 09/11/2014 12:20 PM, Don Slutz wrote:
>> If you add enough PCI devices then not all of their MMIO regions may
>> fit below 4G, which may not be the layout the user wanted. This option
>> lets you increase the address space below 4G that PCI devices can use,
>> so that in more cases no MMIO ends up above 4G.
>>
>> There are real PCI cards that do not support MMIO above 4G, so if you
>> want to emulate them precisely you may also need to increase the
>> space below 4G for them. Some drivers for these cards also do not
>> work if their MMIO space is mapped above 4G.
>>
>> This allows growing the MMIO hole to the size needed.
>>
>> This may help with PCI passthrough to HVM guests.
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>> ---
>> v5:
>> Add LIBXL_HAVE_BUILDINFO_HVM_MMIO_HOLE_SIZE.
>> v4:
>> s/pci_hole_min_size/mmio_hole_size/
>> s/HVM_BELOW_4G_RAM_END/HVM_BELOW_4G_MMIO_START/
>> Add ignore to hvmloader message
>> Adjust commit message
>> Dropped min; stopped mmio_hole_size changing in hvmloader
>>
>> docs/man/xl.cfg.pod.5 | 12 +++++++
>> tools/firmware/hvmloader/config.h | 1 -
>>
>> tools/firmware/hvmloader/pci.c | 71
>> +++++++++++++++++++++++++++------------
>> tools/libxc/xc_hvm_build_x86.c | 30 +++++++++++++++--
>> tools/libxc/xenguest.h | 11 ++++++
>> tools/libxl/libxl.h | 5 +++
>> tools/libxl/libxl_create.c | 29 ++++++++++++----
>> tools/libxl/libxl_dm.c | 24 +++++++++++--
>> tools/libxl/libxl_dom.c | 3 +-
>> tools/libxl/libxl_types.idl | 1 +
>> tools/libxl/xl_cmdimpl.c | 13 +++++++
>> 11 files changed, 166 insertions(+), 34 deletions(-)
>>
>> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
>> index 517ae2f..45f7857 100644
>> --- a/docs/man/xl.cfg.pod.5
>> +++ b/docs/man/xl.cfg.pod.5
>> @@ -1111,6 +1111,18 @@ wallclock (i.e., real) time.
>> =back
>> +=head3 Memory layout
>> +
>> +=over 4
>> +
>> +=item B<mmio_hole_size=BYTES>
>> +
>> +Specifies the size the MMIO hole below 4GiB will be. Only valid for
>> +device_model_version = "qemu-xen". Note: this requires QEMU 2.1.0
>> +or later.
>
> Can you add text describing restrictions on hole size? Not smaller
> than 256MB, I think.
>
Yes, I can add some things like "not smaller than 256MiB" and "not
larger than 4GiB". There is no good statement on a max value (see the
other thread for more details).
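For reference, assuming the usual definitions in
xen/include/public/hvm/e820.h (HVM_BELOW_4G_MMIO_START = 0xF0000000,
with the MMIO length being the remainder up to 4G), the bounds that
parse_config_data() checks against work out to:

    HVM_BELOW_4G_MMIO_LENGTH = 0x10000000 (256MiB)  -> minimum hole
    HVM_BELOW_4G_MMIO_START  = 0xF0000000 (3.75GiB) -> maximum hole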
>> +
>> +=back
>> +
...
>> - while ( (mmio_total > (pci_mem_end - pci_mem_start))
>> - && ((pci_mem_start << 1) != 0)
>> - && (allow_memory_relocate
>> - || (((pci_mem_start << 1) >> PAGE_SHIFT)
>> - >= hvm_info->low_mem_pgend)) )
>> - pci_mem_start <<= 1;
>> + if ( mmio_hole_size )
>> + {
>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>> +
>> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>> + {
>> + printf("max_ram_below_4g=0x"PRIllx
>> + " too big for mmio_hole_size=0x"PRIllx
>> + " has been ignored.\n",
>> + PRIllx_arg(max_ram_below_4g),
>> + PRIllx_arg(mmio_hole_size));
>> + }
>
> Do you need to check whether the hole is too large?
>
> Here and in the toolstack.
>
Not really. Being unsigned handles most of these cases. And I have
not found a good value to use; just as the memory option says nothing
about its low end.
>
>> + else
>> + {
>> + pci_mem_start = max_ram_below_4g;
>> + printf("pci_mem_start=0x%lx (was 0x%x) for
>> mmio_hole_size=%lu\n",
>> + pci_mem_start, HVM_BELOW_4G_MMIO_START,
>> + (long)mmio_hole_size);
>> + }
>> + }
>> + else
>> + {
>> + /*
>> + * At the moment qemu-xen can't deal with relocated memory
>> regions.
>> + * It's too close to the release to make a proper fix; for now,
>> + * only allow the MMIO hole to grow large enough to move
>> guest memory
>> + * if we're running qemu-traditional. Items that don't fit
>> will be
>> + * relocated into the 64-bit address space.
>> + *
>> + * This loop now does the following:
>> + * - If allow_memory_relocate, increase the MMIO hole until
>> it's
>> + * big enough, or until it's 2GiB
>> + * - If !allow_memory_relocate, increase the MMIO hole until
>> it's
>> + * big enough, or until it's 2GiB, or until it overlaps guest
>> + * memory
>> + */
>> + while ( (mmio_total > (pci_mem_end - pci_mem_start))
>> + && ((pci_mem_start << 1) != 0)
>> + && (allow_memory_relocate
>> + || (((pci_mem_start << 1) >> PAGE_SHIFT)
>> + >= hvm_info->low_mem_pgend)) )
>> + pci_mem_start <<= 1;
>> + }
>> if ( mmio_total > (pci_mem_end - pci_mem_start) )
>> {
>> diff --git a/tools/libxc/xc_hvm_build_x86.c
>> b/tools/libxc/xc_hvm_build_x86.c
>> index c81a25b..94da7fa 100644
>> --- a/tools/libxc/xc_hvm_build_x86.c
>> +++ b/tools/libxc/xc_hvm_build_x86.c
>> @@ -628,14 +628,40 @@ int xc_hvm_build_target_mem(xc_interface *xch,
>> int target,
>> const char *image_name)
>> {
>> + return xc_hvm_build_target_mem_with_hole(xch, domid, memsize,
>> target,
>> + image_name, 0);
>> +}
>> +
>> +int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
>> + struct xc_hvm_build_args *args,
>> + uint64_t mmio_hole_size)
>> +{
>> + if (mmio_hole_size)
>> + {
>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>> +
>> + if (max_ram_below_4g < HVM_BELOW_4G_MMIO_START)
>
> "<=", to be consistent with the check in
> libxl__domain_create_info_setdefault ().
Ok.
>
>> + {
>> + args->mmio_size = mmio_hole_size;
>> + }
>> + }
>> + return xc_hvm_build(xch, domid, args);
>> +}
...
>> + if (info->u.hvm.mmio_hole_size) {
>> + uint64_t max_ram_below_4g =
>> + (1ULL << 32) - info->u.hvm.mmio_hole_size;
>> +
>> + if (max_ram_below_4g <= HVM_BELOW_4G_MMIO_START) {
>
> You seem to be making various checks on hole size in multiple places
> --- here, in libxl__build_device_model_args_new() and in
> parse_config_data(). (And, to some degree, in
> xc_hvm_build_with_hole()). Are they all necessary?
>
No, but helpful.
-Don Slutz
> -boris
>
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-10-01 9:11 ` Slutz, Donald Christopher
@ 2014-10-01 9:45 ` George Dunlap
From: George Dunlap @ 2014-10-01 9:45 UTC
To: Slutz, Donald Christopher
Cc: Ian Campbell, Stefano Stabellini, Ian Jackson,
xen-devel@lists.xen.org, Jan Beulich, Boris Ostrovsky
On Wed, Oct 1, 2014 at 10:11 AM, Slutz, Donald Christopher
<dslutz@verizon.com> wrote:
> On 09/30/14 09:22, George Dunlap wrote:
>> On Tue, Sep 30, 2014 at 2:27 AM, Boris Ostrovsky
>> <boris.ostrovsky@oracle.com> wrote:
>>>> + if ( mmio_hole_size )
>>>> + {
>>>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>>>> +
>>>> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>>>> + {
>>>> + printf("max_ram_below_4g=0x"PRIllx
>>>> + " too big for mmio_hole_size=0x"PRIllx
>>>> + " has been ignored.\n",
>>>> + PRIllx_arg(max_ram_below_4g),
>>>> + PRIllx_arg(mmio_hole_size));
>>>> + }
>>>
>>> Do you need to check whether the hole is too large?
>>>
>>> Here and in the toolstack.
>> How large is too large? I've seen real machines with a 3GiB memory hole...
>>
>>>> "0";
>>>> + if (info->u.hvm.mmio_hole_size) {
>>>> + uint64_t max_ram_below_4g =
>>>> + (1ULL << 32) - info->u.hvm.mmio_hole_size;
>>>> +
>>>> + if (max_ram_below_4g <= HVM_BELOW_4G_MMIO_START) {
>>>
>>> You seem to be making various checks on hole size in multiple places ---
>>> here, in libxl__build_device_model_args_new() and in parse_config_data().
>>> (And, to some degree, in xc_hvm_build_with_hole()). Are they all necessary?
>> I think we should get rid of xc_hvm_build_with_hole() entirely, but I
>> think we want a check everywhere else. The key question is where the
>> failure should ideally happen. You absolutely have to check it
>> at the bottom level (hvmloader). But you don't want to go through all
>> the hassle of setting up the domain and starting to boot it only to
>> find that some silly parameter is messed up, so it should also be
>> checked in libxl (since many people will be going through only libxl).
>> But if you're using xl and get an error at the libxl layer, the error
>> message is often a bit cryptic. So it makes sense to have it at the
>> parsing level too.
>>
>> -George
>
> Ok, I will look into removing xc_hvm_build_with_hole() entirely.
>
> The rest of the statement looks like stuff I should have had in the
> commit message. Are you fine with me adjusting and adding it?
Er, I'm not sure what this question is about. :-) I don't think you
need information about where things are checked in the commit message
-- you mainly just need to give people information about what problem
you're solving and a high-level idea of what it's doing, and
sometimes an indication of which code the patch touches if that's
not clear. I think your
commit message is probably fine as it is.
-George
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-09-30 13:15 ` George Dunlap
@ 2014-10-01 16:33 ` Don Slutz
2014-10-02 7:36 ` Jan Beulich
0 siblings, 1 reply; 14+ messages in thread
From: Don Slutz @ 2014-10-01 16:33 UTC (permalink / raw)
To: George Dunlap, Don Slutz, Ian Campbell
Cc: Boris Ostrovsky, Stefano Stabellini, Ian Jackson, Jan Beulich,
xen-devel@lists.xen.org
On 09/30/14 09:15, George Dunlap wrote:
> On Thu, Sep 11, 2014 at 5:20 PM, Don Slutz <dslutz@verizon.com> wrote:
>> If you add enough PCI devices then all mmio may not fit below 4G
>> which may not be the layout the user wanted. This allows you to
>> increase the below 4G address space that PCI devices can use and
>> therefore in more cases not have any mmio that is above 4G.
>>
>> There are real PCI cards that do not support mmio over 4G, so if you
>> want to emulate them precisely, you may also need to increase the
>> space below 4G for them. There are drivers for these cards that also
>> do not work if they have their mmio space mapped above 4G.
>>
>> This allows growing the MMIO hole to the size needed.
>>
>> This may help with using pci passthru and HVM.
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
> FYI you'll have to rebase this since it doesn't apply cleanly anymore.
OK, I have rebased.
> Also, I've been meaning to look at this for months; sorry it's taken so long.
>
> Overall looks fine, just a few comments:
>
>> @@ -237,26 +243,49 @@ void pci_setup(void)
>> pci_writew(devfn, PCI_COMMAND, cmd);
>> }
>>
>> - /*
>> - * At the moment qemu-xen can't deal with relocated memory regions.
>> - * It's too close to the release to make a proper fix; for now,
>> - * only allow the MMIO hole to grow large enough to move guest memory
>> - * if we're running qemu-traditional. Items that don't fit will be
>> - * relocated into the 64-bit address space.
>> - *
>> - * This loop now does the following:
>> - * - If allow_memory_relocate, increase the MMIO hole until it's
>> - * big enough, or until it's 2GiB
>> - * - If !allow_memory_relocate, increase the MMIO hole until it's
>> - * big enough, or until it's 2GiB, or until it overlaps guest
>> - * memory
>> - */
>> - while ( (mmio_total > (pci_mem_end - pci_mem_start))
>> - && ((pci_mem_start << 1) != 0)
>> - && (allow_memory_relocate
>> - || (((pci_mem_start << 1) >> PAGE_SHIFT)
>> - >= hvm_info->low_mem_pgend)) )
>> - pci_mem_start <<= 1;
>> + if ( mmio_hole_size )
>> + {
>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>> +
>> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>> + {
>> + printf("max_ram_below_4g=0x"PRIllx
>> + " too big for mmio_hole_size=0x"PRIllx
>> + " has been ignored.\n",
>> + PRIllx_arg(max_ram_below_4g),
>> + PRIllx_arg(mmio_hole_size));
>> + }
>> + else
>> + {
>> + pci_mem_start = max_ram_below_4g;
>> + printf("pci_mem_start=0x%lx (was 0x%x) for mmio_hole_size=%lu\n",
>> + pci_mem_start, HVM_BELOW_4G_MMIO_START,
>> + (long)mmio_hole_size);
>> + }
>> + }
>> + else
>> + {
>> + /*
>> + * At the moment qemu-xen can't deal with relocated memory regions.
>> + * It's too close to the release to make a proper fix; for now,
>> + * only allow the MMIO hole to grow large enough to move guest memory
>> + * if we're running qemu-traditional. Items that don't fit will be
>> + * relocated into the 64-bit address space.
>> + *
>> + * This loop now does the following:
>> + * - If allow_memory_relocate, increase the MMIO hole until it's
>> + * big enough, or until it's 2GiB
>> + * - If !allow_memory_relocate, increase the MMIO hole until it's
>> + * big enough, or until it's 2GiB, or until it overlaps guest
>> + * memory
>> + */
>> + while ( (mmio_total > (pci_mem_end - pci_mem_start))
>> + && ((pci_mem_start << 1) != 0)
>> + && (allow_memory_relocate
>> + || (((pci_mem_start << 1) >> PAGE_SHIFT)
>> + >= hvm_info->low_mem_pgend)) )
>> + pci_mem_start <<= 1;
>> + }
> I don't think these need to be disjoint. There's no reason you
> couldn't set the size for the default, and then allow the code to make
> it bigger for guests which allow that.
Support for changing mmio_hole_size is still "missing" from QEMU,
so this code only works for qemu-traditional. I think Jan said
back on v1 or v2 (sorry, e-mail issues) that since this is a
configured value, the auto-resizing code should be disabled.
> But if this needs to be rushed to get in, this is fine with me.
:)
>> if ( mmio_total > (pci_mem_end - pci_mem_start) )
>> {
>> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
>> index c81a25b..94da7fa 100644
>> --- a/tools/libxc/xc_hvm_build_x86.c
>> +++ b/tools/libxc/xc_hvm_build_x86.c
>> @@ -628,14 +628,40 @@ int xc_hvm_build_target_mem(xc_interface *xch,
>> int target,
>> const char *image_name)
>> {
>> + return xc_hvm_build_target_mem_with_hole(xch, domid, memsize, target,
>> + image_name, 0);
>> +}
>> +
>> +int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
>> + struct xc_hvm_build_args *args,
>> + uint64_t mmio_hole_size)
>> +{
>> + if (mmio_hole_size)
>> + {
>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>> +
>> + if (max_ram_below_4g < HVM_BELOW_4G_MMIO_START)
>> + {
>> + args->mmio_size = mmio_hole_size;
>> + }
>> + }
>> + return xc_hvm_build(xch, domid, args);
>> +}
>> +
>> +int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
>> + uint32_t domid,
>> + int memsize,
>> + int target,
>> + const char *image_name,
>> + uint64_t mmio_hole_size)
>> +{
>> struct xc_hvm_build_args args = {};
>>
>> memset(&args, 0, sizeof(struct xc_hvm_build_args));
>> args.mem_size = (uint64_t)memsize << 20;
>> args.mem_target = (uint64_t)target << 20;
>> args.image_file_name = image_name;
>> -
>> - return xc_hvm_build(xch, domid, &args);
>> + return xc_hvm_build_with_hole(xch, domid, &args, mmio_hole_size);
>> }
>>
>> /*
>> diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
>> index 40bbac8..d35f38d 100644
>> --- a/tools/libxc/xenguest.h
>> +++ b/tools/libxc/xenguest.h
>> @@ -244,12 +244,23 @@ struct xc_hvm_build_args {
>> int xc_hvm_build(xc_interface *xch, uint32_t domid,
>> struct xc_hvm_build_args *hvm_args);
>>
>> +int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
>> + struct xc_hvm_build_args *args,
>> + uint64_t mmio_hole_size);
>> +
>> int xc_hvm_build_target_mem(xc_interface *xch,
>> uint32_t domid,
>> int memsize,
>> int target,
>> const char *image_name);
>>
>> +int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
>> + uint32_t domid,
>> + int memsize,
>> + int target,
>> + const char *image_name,
>> + uint64_t mmio_hole_size);
> Why on earth do we need all of these extra functions? Particularly
> ones like xc_hvm_build_target_mem_with_hole(), which isn't even called
> by anyone, AFAICT?
int xc_hvm_build_target_mem(xc_interface *xch,
uint32_t domid,
int memsize,
int target,
const char *image_name)
{
return xc_hvm_build_target_mem_with_hole(xch, domid, memsize, target,
image_name, 0);
}
So it is called...
>
> libxc isn't a stable interface -- can't we just have the callers set
> mmio_size in hvm_args and have them call xc_hvm_build()?
I am looking into this. I was not sure that I could change or
drop xc_hvm_build_target_mem().
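Something along these lines (an untested sketch, not the committed
code; the helper name and the fallback #define are assumptions) is
presumably what that simplification would look like, with the bounds
check folded into the one remaining call path:

#include <stdint.h>
#include "xenguest.h"

#ifndef HVM_BELOW_4G_MMIO_START
#define HVM_BELOW_4G_MMIO_START 0xF0000000ULL /* assumed value */
#endif

static int build_hvm_with_optional_hole(xc_interface *xch, uint32_t domid,
                                        struct xc_hvm_build_args *args,
                                        uint64_t mmio_hole_size)
{
    if (mmio_hole_size) {
        uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;

        /* Only honour hole sizes that keep guest RAM at or below the
         * default MMIO start; otherwise keep the default hole. */
        if (max_ram_below_4g <= HVM_BELOW_4G_MMIO_START)
            args->mmio_size = mmio_hole_size;
    }
    return xc_hvm_build(xch, domid, args);
}

That way libxl calls xc_hvm_build() directly and both _with_hole()
wrappers can be dropped.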
>> @@ -696,10 +698,28 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>> /* Switching here to the machine "pc" which does not add
>> * the xen-platform device instead of the default "xenfv" machine.
>> */
>> - flexarray_append(dm_args, "pc,accel=xen");
>> + machinearg = libxl__sprintf(gc, "pc,accel=xen");
>> } else {
>> - flexarray_append(dm_args, "xenfv");
>> + machinearg = libxl__sprintf(gc, "xenfv");
>> }
>> + if (b_info->u.hvm.mmio_hole_size) {
>> + uint64_t max_ram_below_4g = (1ULL << 32) -
>> + b_info->u.hvm.mmio_hole_size;
>> +
>> + if (max_ram_below_4g > HVM_BELOW_4G_MMIO_START) {
>> + LIBXL__LOG(ctx, LIBXL__LOG_WARNING,
>> + "max-ram-below-4g=%"PRIu64
>> + " (mmio_hole_size=%"PRIu64
>> + ") too big > %u ignored.\n",
>> + max_ram_below_4g,
>> + b_info->u.hvm.mmio_hole_size,
>> + HVM_BELOW_4G_MMIO_START);
>> + } else {
>> + machinearg = libxl__sprintf(gc, "%s,max-ram-below-4g=%"PRIu64,
>> + machinearg, max_ram_below_4g);
>> + }
>> + }
> What happens if the version of qemu that's installed doesn't support
> max-ram-below-4g?
QEMU will report an error and abort, so the guest will not start.
-Don Slutz
> -George
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-10-01 16:33 ` Don Slutz
@ 2014-10-02 7:36 ` Jan Beulich
2014-10-07 11:25 ` George Dunlap
0 siblings, 1 reply; 14+ messages in thread
From: Jan Beulich @ 2014-10-02 7:36 UTC (permalink / raw)
To: Don Slutz
Cc: Ian Campbell, Stefano Stabellini, George Dunlap, Ian Jackson,
xen-devel@lists.xen.org, Boris Ostrovsky
>>> On 01.10.14 at 18:33, <dslutz@verizon.com> wrote:
> On 09/30/14 09:15, George Dunlap wrote:
>> On Thu, Sep 11, 2014 at 5:20 PM, Don Slutz <dslutz@verizon.com> wrote:
>>> @@ -237,26 +243,49 @@ void pci_setup(void)
>>> pci_writew(devfn, PCI_COMMAND, cmd);
>>> }
>>>
>>> - /*
>>> - * At the moment qemu-xen can't deal with relocated memory regions.
>>> - * It's too close to the release to make a proper fix; for now,
>>> - * only allow the MMIO hole to grow large enough to move guest memory
>>> - * if we're running qemu-traditional. Items that don't fit will be
>>> - * relocated into the 64-bit address space.
>>> - *
>>> - * This loop now does the following:
>>> - * - If allow_memory_relocate, increase the MMIO hole until it's
>>> - * big enough, or until it's 2GiB
>>> - * - If !allow_memory_relocate, increase the MMIO hole until it's
>>> - * big enough, or until it's 2GiB, or until it overlaps guest
>>> - * memory
>>> - */
>>> - while ( (mmio_total > (pci_mem_end - pci_mem_start))
>>> - && ((pci_mem_start << 1) != 0)
>>> - && (allow_memory_relocate
>>> - || (((pci_mem_start << 1) >> PAGE_SHIFT)
>>> - >= hvm_info->low_mem_pgend)) )
>>> - pci_mem_start <<= 1;
>>> + if ( mmio_hole_size )
>>> + {
>>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>>> +
>>> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>>> + {
>>> + printf("max_ram_below_4g=0x"PRIllx
>>> + " too big for mmio_hole_size=0x"PRIllx
>>> + " has been ignored.\n",
>>> + PRIllx_arg(max_ram_below_4g),
>>> + PRIllx_arg(mmio_hole_size));
>>> + }
>>> + else
>>> + {
>>> + pci_mem_start = max_ram_below_4g;
>>> + printf("pci_mem_start=0x%lx (was 0x%x) for mmio_hole_size=%lu\n",
>>> + pci_mem_start, HVM_BELOW_4G_MMIO_START,
>>> + (long)mmio_hole_size);
>>> + }
>>> + }
>>> + else
>>> + {
>>> + /*
>>> + * At the moment qemu-xen can't deal with relocated memory regions.
>>> + * It's too close to the release to make a proper fix; for now,
>>> + * only allow the MMIO hole to grow large enough to move guest memory
>>> + * if we're running qemu-traditional. Items that don't fit will be
>>> + * relocated into the 64-bit address space.
>>> + *
>>> + * This loop now does the following:
>>> + * - If allow_memory_relocate, increase the MMIO hole until it's
>>> + * big enough, or until it's 2GiB
>>> + * - If !allow_memory_relocate, increase the MMIO hole until it's
>>> + * big enough, or until it's 2GiB, or until it overlaps guest
>>> + * memory
>>> + */
>>> + while ( (mmio_total > (pci_mem_end - pci_mem_start))
>>> + && ((pci_mem_start << 1) != 0)
>>> + && (allow_memory_relocate
>>> + || (((pci_mem_start << 1) >> PAGE_SHIFT)
>>> + >= hvm_info->low_mem_pgend)) )
>>> + pci_mem_start <<= 1;
>>> + }
>> I don't think these need to be disjoint. There's no reason you
>> couldn't set the size for the default, and then allow the code to make
>> it bigger for guests which allow that.
>
> Support for changing mmio_hole_size is still "missing" from QEMU,
> so this code only works for qemu-traditional. I think Jan said
> back on v1 or v2 (sorry, e-mail issues) that since this is a
> configured value, the auto-resizing code should be disabled.
Because it didn't seem like you would want to properly take care
of both cases together (iirc the fact that the configured hole size
could be other than a power of 2 introduced a conflict with the
current resizing logic). I.e. doing one or the other is a suitable
first step imo, but with room for improvement.
>>> --- a/tools/libxc/xenguest.h
>>> +++ b/tools/libxc/xenguest.h
>>> @@ -244,12 +244,23 @@ struct xc_hvm_build_args {
>>> int xc_hvm_build(xc_interface *xch, uint32_t domid,
>>> struct xc_hvm_build_args *hvm_args);
>>>
>>> +int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
>>> + struct xc_hvm_build_args *args,
>>> + uint64_t mmio_hole_size);
>>> +
>>> int xc_hvm_build_target_mem(xc_interface *xch,
>>> uint32_t domid,
>>> int memsize,
>>> int target,
>>> const char *image_name);
>>>
>>> +int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
>>> + uint32_t domid,
>>> + int memsize,
>>> + int target,
>>> + const char *image_name,
>>> + uint64_t mmio_hole_size);
>> Why on earth do we need all of these extra functions? Particularly
>> ones like xc_hvm_build_target_mem_with_hole(), which isn't even called
>> by anyone, AFAICT?
>
> int xc_hvm_build_target_mem(xc_interface *xch,
> uint32_t domid,
> int memsize,
> int target,
> const char *image_name)
> {
> return xc_hvm_build_target_mem_with_hole(xch, domid, memsize, target,
> image_name, 0);
> }
>
> So it is called...
You're kidding, aren't you? If this is the only caller, we can very well
do without the ..._with_hole() one.
Jan
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v5 1/1] Add mmio_hole_size
2014-10-02 7:36 ` Jan Beulich
@ 2014-10-07 11:25 ` George Dunlap
0 siblings, 0 replies; 14+ messages in thread
From: George Dunlap @ 2014-10-07 11:25 UTC (permalink / raw)
To: Jan Beulich, Don Slutz
Cc: Boris Ostrovsky, xen-devel@lists.xen.org, Ian Jackson,
Ian Campbell, Stefano Stabellini
On 10/02/2014 08:36 AM, Jan Beulich wrote:
>>>> On 01.10.14 at 18:33, <dslutz@verizon.com> wrote:
>> On 09/30/14 09:15, George Dunlap wrote:
>>> On Thu, Sep 11, 2014 at 5:20 PM, Don Slutz <dslutz@verizon.com> wrote:
>>>> @@ -237,26 +243,49 @@ void pci_setup(void)
>>>> pci_writew(devfn, PCI_COMMAND, cmd);
>>>> }
>>>>
>>>> - /*
>>>> - * At the moment qemu-xen can't deal with relocated memory regions.
>>>> - * It's too close to the release to make a proper fix; for now,
>>>> - * only allow the MMIO hole to grow large enough to move guest memory
>>>> - * if we're running qemu-traditional. Items that don't fit will be
>>>> - * relocated into the 64-bit address space.
>>>> - *
>>>> - * This loop now does the following:
>>>> - * - If allow_memory_relocate, increase the MMIO hole until it's
>>>> - * big enough, or until it's 2GiB
>>>> - * - If !allow_memory_relocate, increase the MMIO hole until it's
>>>> - * big enough, or until it's 2GiB, or until it overlaps guest
>>>> - * memory
>>>> - */
>>>> - while ( (mmio_total > (pci_mem_end - pci_mem_start))
>>>> - && ((pci_mem_start << 1) != 0)
>>>> - && (allow_memory_relocate
>>>> - || (((pci_mem_start << 1) >> PAGE_SHIFT)
>>>> - >= hvm_info->low_mem_pgend)) )
>>>> - pci_mem_start <<= 1;
>>>> + if ( mmio_hole_size )
>>>> + {
>>>> + uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>>>> +
>>>> + if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>>>> + {
>>>> + printf("max_ram_below_4g=0x"PRIllx
>>>> + " too big for mmio_hole_size=0x"PRIllx
>>>> + " has been ignored.\n",
>>>> + PRIllx_arg(max_ram_below_4g),
>>>> + PRIllx_arg(mmio_hole_size));
>>>> + }
>>>> + else
>>>> + {
>>>> + pci_mem_start = max_ram_below_4g;
>>>> + printf("pci_mem_start=0x%lx (was 0x%x) for mmio_hole_size=%lu\n",
>>>> + pci_mem_start, HVM_BELOW_4G_MMIO_START,
>>>> + (long)mmio_hole_size);
>>>> + }
>>>> + }
>>>> + else
>>>> + {
>>>> + /*
>>>> + * At the moment qemu-xen can't deal with relocated memory regions.
>>>> + * It's too close to the release to make a proper fix; for now,
>>>> + * only allow the MMIO hole to grow large enough to move guest memory
>>>> + * if we're running qemu-traditional. Items that don't fit will be
>>>> + * relocated into the 64-bit address space.
>>>> + *
>>>> + * This loop now does the following:
>>>> + * - If allow_memory_relocate, increase the MMIO hole until it's
>>>> + * big enough, or until it's 2GiB
>>>> + * - If !allow_memory_relocate, increase the MMIO hole until it's
>>>> + * big enough, or until it's 2GiB, or until it overlaps guest
>>>> + * memory
>>>> + */
>>>> + while ( (mmio_total > (pci_mem_end - pci_mem_start))
>>>> + && ((pci_mem_start << 1) != 0)
>>>> + && (allow_memory_relocate
>>>> + || (((pci_mem_start << 1) >> PAGE_SHIFT)
>>>> + >= hvm_info->low_mem_pgend)) )
>>>> + pci_mem_start <<= 1;
>>>> + }
>>> I don't think these need to be disjoint. There's no reason you
>>> couldn't set the size for the default, and then allow the code to make
>>> it bigger for guests which allow that.
>> Support for changing mmio_hole_size is still "missing" from QEMU,
>> so this code only works for qemu-traditional. I think Jan said
>> back on v1 or v2 (sorry, e-mail issues) that since this is a
>> configured value, the auto-resizing code should be disabled.
> Because it didn't seem like you would want to properly take care
> of both cases together (iirc the fact that the configured hole size
> could be other than a power of 2 introduced a conflict with the
> current resizing logic). I.e. doing one or the other is a suitable
> first step imo, but with room for improvement.
Right -- I hadn't thought about the power of 2 thing for the current
resizing logic. That will take some actual fixing, so it's probably
best if we put that off until later.
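For the record, a quick standalone sketch (pci_mem_start modelled as
uint32_t since hvmloader is built 32-bit; purely illustrative) of how
the doubling loop misbehaves once the start is not of the form
2^32 - 2^k:

#include <stdint.h>
#include <stdio.h>

static void grow(uint32_t start)
{
    printf("start=0x%08x:", (unsigned)start);
    /* Mirrors the loop's "(pci_mem_start << 1) != 0" 2GiB cap. */
    while ((uint32_t)(start << 1) != 0) {
        start <<= 1; /* drops the top bit: halves start, "doubles" hole */
        printf(" -> 0x%08x (hole %u MiB)", (unsigned)start,
               (unsigned)((0u - start) >> 20));
    }
    printf("\n");
}

int main(void)
{
    grow(0xF0000000u); /* default: hole 512M -> 1G -> 2G, as intended */
    grow(0xD0000000u); /* 768MiB hole: 1536M -> 3072M -> 2048M */
    return 0;
}

The non-power-of-2 start overshoots the intended 2GiB cap and then
comes back down, which is one way the conflict shows up.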
>
>>>> --- a/tools/libxc/xenguest.h
>>>> +++ b/tools/libxc/xenguest.h
>>>> @@ -244,12 +244,23 @@ struct xc_hvm_build_args {
>>>> int xc_hvm_build(xc_interface *xch, uint32_t domid,
>>>> struct xc_hvm_build_args *hvm_args);
>>>>
>>>> +int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
>>>> + struct xc_hvm_build_args *args,
>>>> + uint64_t mmio_hole_size);
>>>> +
>>>> int xc_hvm_build_target_mem(xc_interface *xch,
>>>> uint32_t domid,
>>>> int memsize,
>>>> int target,
>>>> const char *image_name);
>>>>
>>>> +int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
>>>> + uint32_t domid,
>>>> + int memsize,
>>>> + int target,
>>>> + const char *image_name,
>>>> + uint64_t mmio_hole_size);
>>> Why on earth do we need all of these extra functions? Particularly
>>> ones like xc_hvm_build_target_mem_with_hole(), which isn't even called
>>> by anyone, AFAICT?
>> int xc_hvm_build_target_mem(xc_interface *xch,
>> uint32_t domid,
>> int memsize,
>> int target,
>> const char *image_name)
>> {
>> return xc_hvm_build_target_mem_with_hole(xch, domid, memsize, target,
>> image_name, 0);
>> }
>>
>> So it is called...
> You're kidding, aren't you? If this is the only caller, we can very well
> do without the ..._with_hole() one.
Indeed, I didn't even get what Don was saying at first. When I looked,
xc_hvm_build* was called exactly twice: xc_hvm_build() was called in
libxl, and xc_hvm_build_target_mem() was called by xend. I think
originally those functions were there for xapi compatibility (I was
working on a XenServer feature when I wrote these; and this was before
libxl was really a thing).
But now that we have libxl, breaking libxc compatibility is a feature to
push people to use the libxl interfaces instead. :-)
-George
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2014-10-07 11:25 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-11 16:20 [PATCH v5 0/1] Add mmio_hole_size (was Add pci_hole_min_size) Don Slutz
2014-09-11 16:20 ` [PATCH v5 1/1] Add mmio_hole_size Don Slutz
2014-09-30 1:27 ` Boris Ostrovsky
2014-09-30 13:22 ` George Dunlap
2014-09-30 15:36 ` Boris Ostrovsky
2014-09-30 16:06 ` George Dunlap
2014-10-01 9:31 ` Slutz, Donald Christopher
2014-10-01 9:11 ` Slutz, Donald Christopher
2014-10-01 9:45 ` George Dunlap
2014-10-01 9:39 ` Slutz, Donald Christopher
2014-09-30 13:15 ` George Dunlap
2014-10-01 16:33 ` Don Slutz
2014-10-02 7:36 ` Jan Beulich
2014-10-07 11:25 ` George Dunlap