From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Chao Gao' <chao.gao@intel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
"Tim (Xen.org)" <tim@xen.org>,
George Dunlap <George.Dunlap@citrix.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
Jan Beulich <jbeulich@suse.com>,
Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages
Date: Fri, 8 Dec 2017 11:06:43 +0000
Message-ID: <646776360aa2466eabd8fb9bdcccd8dc@AMSPEX02CL03.citrite.net>
In-Reply-To: <20171207065629.GA49036@op-computing>
[-- Attachment #1: Type: text/plain, Size: 5514 bytes --]
> -----Original Message-----
> From: Chao Gao [mailto:chao.gao@intel.com]
> Sent: 07 December 2017 06:57
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> <wei.liu2@citrix.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Tim
> (Xen.org) <tim@xen.org>; George Dunlap <George.Dunlap@citrix.com>;
> xen-devel@lists.xen.org; Jan Beulich <jbeulich@suse.com>; Ian Jackson
> <Ian.Jackson@citrix.com>
> Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> pages
>
> On Thu, Dec 07, 2017 at 08:41:14AM +0000, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> >> Of Paul Durrant
> >> Sent: 06 December 2017 16:10
> >> To: 'Chao Gao' <chao.gao@intel.com>
> >> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> >> <wei.liu2@citrix.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Tim
> >> (Xen.org) <tim@xen.org>; George Dunlap <George.Dunlap@citrix.com>;
> >> xen-devel@lists.xen.org; Jan Beulich <jbeulich@suse.com>; Ian Jackson
> >> <Ian.Jackson@citrix.com>
> >> Subject: Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of
> >> IOREQ page to 4 pages
> >>
> >> > -----Original Message-----
> >> > From: Chao Gao [mailto:chao.gao@intel.com]
> >> > Sent: 06 December 2017 09:02
> >> > To: Paul Durrant <Paul.Durrant@citrix.com>
> >> > Cc: xen-devel@lists.xen.org; Tim (Xen.org) <tim@xen.org>; Stefano
> >> > Stabellini <sstabellini@kernel.org>; Konrad Rzeszutek Wilk
> >> > <konrad.wilk@oracle.com>; Jan Beulich <jbeulich@suse.com>; George
> >> > Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> >> > <Andrew.Cooper3@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Ian Jackson
> >> > <Ian.Jackson@citrix.com>
> >> > Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> >> > pages
> >> >
> >> > On Wed, Dec 06, 2017 at 03:04:11PM +0000, Paul Durrant wrote:
> >> > >> -----Original Message-----
> >> > >> From: Chao Gao [mailto:chao.gao@intel.com]
> >> > >> Sent: 06 December 2017 07:50
> >> > >> To: xen-devel@lists.xen.org
> >> > >> Cc: Chao Gao <chao.gao@intel.com>; Paul Durrant
> >> > >> <Paul.Durrant@citrix.com>; Tim (Xen.org) <tim@xen.org>; Stefano
> >> > >> Stabellini <sstabellini@kernel.org>; Konrad Rzeszutek Wilk
> >> > >> <konrad.wilk@oracle.com>; Jan Beulich <jbeulich@suse.com>; George
> >> > >> Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> >> > >> <Andrew.Cooper3@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Ian Jackson
> >> > >> <Ian.Jackson@citrix.com>
> >> > >> Subject: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> >> > >> pages
> >> > >>
> >> > >> One 4K-byte page contains at most 128 'ioreq_t' structures. In order
> >> > >> to remove the vcpu number constraint imposed by a single IOREQ page,
> >> > >> bump the number of IOREQ pages to 4. With this patch, multiple pages
> >> > >> can be used as IOREQ pages.
> >> > >>
> >> > >> Basically, this patch extends the 'ioreq' field in struct
> >> > >> hvm_ioreq_server to an array. All accesses to the 'ioreq' field such
> >> > >> as 's->ioreq' are replaced with the FOR_EACH_IOREQ_PAGE macro.
> >> > >>
> >> > >> In order to access an IOREQ page, QEMU should get the gmfn and map
> >> > >> this gmfn into its virtual address space.
> >> > >
> >> > >No. There's no need to extend the 'legacy' mechanism of using magic
> >> > >page gfns. You should only handle the case where the mfns are allocated
> >> > >on demand (see the call to hvm_ioreq_server_alloc_pages() in
> >> > >hvm_get_ioreq_server_frame()). The number of guest vcpus is known at
> >> > >this point, so the correct number of pages can be allocated. If the
> >> > >creator of the ioreq server attempts to use the legacy
> >> > >hvm_get_ioreq_server_info() and the guest has >128 vcpus then the call
> >> > >should fail.
> >> >
> >> > Great suggestion. I will introduce a new dmop, a variant of
> >> > hvm_get_ioreq_server_frame(), for the creator to get an array of gfns
> >> > and the size of the array. And the legacy interface will report an error
> >> > if more than one IOREQ page is needed.
> >>
> >> You don't need a new dmop for mapping, I think. The mem op to map ioreq
> >> server frames should work. All you should need to do is update
> >> hvm_get_ioreq_server_frame() to deal with an index > 1, and provide some
> >> means for the ioreq server creator to convert the number of guest vcpus
> >> into the correct number of pages to map. (That might need a new dm op.)
> >
> >I realise after saying this that an emulator already knows the size of the
> >ioreq structure and so can easily calculate the correct number of pages to
> >map, given the number of guest vcpus.
>
> How about the patch at the bottom? Is it in the right direction?
Yes, certainly along the right lines. I would probably do away with
MAX_NR_IOREQ_PAGE though. You should just dynamically allocate the correct
number of ioreq pages when the ioreq server is created (since you already
calculate nr_ioreq_page there anyway).
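To illustrate what I mean (a rough sketch only, not actual Xen code -- the
helper name and the nr_ioreq_pages field are made up for illustration): one
4K page holds 4096 / 32 = 128 ioreq_t slots, so at server creation the array
can be sized from the vcpu count directly:

    /* Sketch: size the IOREQ page array from the guest's vcpu count. */
    #define IOREQS_PER_PAGE (PAGE_SIZE / sizeof(ioreq_t)) /* 4096/32 = 128 */

    static int hvm_ioreq_server_alloc_ioreq_pages(struct hvm_ioreq_server *s,
                                                  unsigned int nr_vcpus)
    {
        unsigned int nr_pages = DIV_ROUND_UP(nr_vcpus, IOREQS_PER_PAGE);

        /* 'ioreq' becomes a dynamically sized array; no MAX_NR_IOREQ_PAGE. */
        s->ioreq = xzalloc_array(struct hvm_ioreq_page, nr_pages);
        if ( !s->ioreq )
            return -ENOMEM;

        s->nr_ioreq_pages = nr_pages;
        return 0;
    }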
> Do you have the QEMU patch, which replaces the old method with the new
> method to set up the mapping? I want to integrate that patch and do some
> tests.
Sure. There are a couple of patches attached. I have not tested them with
recent rebases of my series so you may find some issues.
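For reference, the emulator-side calculation I mentioned above amounts to
something like the following sketch, assuming 4K pages and the usual 32-byte
ioreq_t:

    /* Pages needed to cover one ioreq_t slot per guest vcpu. */
    unsigned int nr_frames =
        (nr_vcpus * sizeof(ioreq_t) + XC_PAGE_SIZE - 1) / XC_PAGE_SIZE;

(The attached patches still map a fixed two frames -- one buffered-ioreq page
plus one synchronous ioreq page -- since they predate the multi-page change.)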
Cheers,
Paul
[-- Attachment #2: 0001-Separate-ioreq-server-mapping-code-from-general-init.patch --]
[-- Type: application/octet-stream, Size: 5266 bytes --]
From b162cc6d92bffe28efac38f5b9501a9c28b5be79 Mon Sep 17 00:00:00 2001
From: Paul Durrant <paul.durrant@citrix.com>
Date: Thu, 10 Aug 2017 11:37:22 +0100
Subject: [PATCH 1/2] Separate ioreq server mapping code from general init
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
hw/i386/xen/xen-hvm.c | 81 ++++++++++++++++++++++++++++++---------------------
1 file changed, 48 insertions(+), 33 deletions(-)
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index d9ccd5d0d6..59e3122daf 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -95,7 +95,8 @@ typedef struct XenIOState {
CPUState **cpu_by_vcpu_id;
/* the evtchn port for polling the notification, */
evtchn_port_t *ioreq_local_port;
- /* evtchn local port for buffered io */
+ /* evtchn remote and local ports for buffered io */
+ evtchn_port_t bufioreq_remote_port;
evtchn_port_t bufioreq_local_port;
/* the evtchn fd for polling */
xenevtchn_handle *xce_handle;
@@ -1232,12 +1233,52 @@ static void xen_wakeup_notifier(Notifier *notifier, void *data)
xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 0);
}
-void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
+static int xen_map_ioreq_server(XenIOState *state)
{
- int i, rc;
xen_pfn_t ioreq_pfn;
xen_pfn_t bufioreq_pfn;
evtchn_port_t bufioreq_evtchn;
+ int rc;
+
+ rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
+ &ioreq_pfn, &bufioreq_pfn,
+ &bufioreq_evtchn);
+ if (rc < 0) {
+ error_report("failed to get ioreq server info: error %d handle=%p",
+ errno, xen_xc);
+ return rc;
+ }
+
+ DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
+ DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
+ DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
+
+ state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
+ PROT_READ|PROT_WRITE,
+ 1, &ioreq_pfn, NULL);
+ if (state->shared_page == NULL) {
+ error_report("map shared IO page returned error %d handle=%p",
+ errno, xen_xc);
+ return -1;
+ }
+
+ state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
+ PROT_READ|PROT_WRITE,
+ 1, &bufioreq_pfn, NULL);
+ if (state->buffered_io_page == NULL) {
+ error_report("map buffered IO page returned error %d", errno);
+ return -1;
+ }
+
+ state->bufioreq_remote_port = bufioreq_evtchn;
+
+ return 0;
+}
+
+void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
+{
+ int i, rc;
+ xen_pfn_t ioreq_pfn;
XenIOState *state;
state = g_malloc0(sizeof (XenIOState));
@@ -1273,28 +1314,10 @@ void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
state->wakeup.notify = xen_wakeup_notifier;
qemu_register_wakeup_notifier(&state->wakeup);
- rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
- &ioreq_pfn, &bufioreq_pfn,
- &bufioreq_evtchn);
- if (rc < 0) {
- error_report("failed to get ioreq server info: error %d handle=%p",
- errno, xen_xc);
+ rc = xen_map_ioreq_server(state);
+ if (rc < 0)
goto err;
- }
-
- DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
- DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
- DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
-
- state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
- PROT_READ|PROT_WRITE,
- 1, &ioreq_pfn, NULL);
- if (state->shared_page == NULL) {
- error_report("map shared IO page returned error %d handle=%p",
- errno, xen_xc);
- goto err;
- }
-
+
rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
if (!rc) {
DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
@@ -1312,14 +1335,6 @@ void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
goto err;
}
- state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
- PROT_READ|PROT_WRITE,
- 1, &bufioreq_pfn, NULL);
- if (state->buffered_io_page == NULL) {
- error_report("map buffered IO page returned error %d", errno);
- goto err;
- }
-
/* Note: cpus is empty at this point in init */
state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
@@ -1344,7 +1359,7 @@ void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
}
rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
- bufioreq_evtchn);
+ state->bufioreq_remote_port);
if (rc == -1) {
error_report("buffered evtchn bind error %d", errno);
goto err;
--
2.11.0
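A note on the new bufioreq_remote_port field above: once the mapping code
moves into xen_map_ioreq_server(), the buffered-io event channel port it
obtains has to outlive that function so that xen_hvm_init() can still bind
to it later, i.e. (elided sketch of the ordering the refactor preserves):

    rc = xen_map_ioreq_server(state);  /* stashes state->bufioreq_remote_port */
    ...
    rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
                                    state->bufioreq_remote_port);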
[-- Attachment #3: 0002-use-new-interface.patch --]
[-- Type: application/octet-stream, Size: 5641 bytes --]
From 0afb65e74d4ba5b1fae47cd7fef65a6bdc859b57 Mon Sep 17 00:00:00 2001
From: Paul Durrant <paul.durrant@citrix.com>
Date: Thu, 10 Aug 2017 11:38:01 +0100
Subject: [PATCH 2/2] use new interface
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
configure | 1 +
hw/i386/xen/xen-hvm.c | 62 +++++++++++++++++++++++++++++++++++----------
include/hw/xen/xen_common.h | 16 ++++++++++++
3 files changed, 66 insertions(+), 13 deletions(-)
diff --git a/configure b/configure
index dd73cce62f..55b67f9845 100755
--- a/configure
+++ b/configure
@@ -2114,6 +2114,7 @@ int main(void) {
xfmem = xenforeignmemory_open(0, 0);
xenforeignmemory_map2(xfmem, 0, 0, 0, 0, 0, 0, 0);
+ xenforeignmemory_map_resource(xfmem, 0, 0, 0, 0, 0, NULL, 0, 0);
return 0;
}
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 59e3122daf..0a8d1f6574 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -1235,13 +1235,37 @@ static void xen_wakeup_notifier(Notifier *notifier, void *data)
static int xen_map_ioreq_server(XenIOState *state)
{
+ void *addr = NULL;
+ xenforeignmemory_resource_handle *fres;
xen_pfn_t ioreq_pfn;
xen_pfn_t bufioreq_pfn;
evtchn_port_t bufioreq_evtchn;
int rc;
+
+ /*
+ * Attempt to map using the resource API and fall back to normal
+ * foreign mapping if this is not supported.
+ */
+ fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
+ XENMEM_resource_ioreq_server,
+ state->ioservid, 0, 2,
+ &addr,
+ PROT_READ|PROT_WRITE, 0);
+ if (fres != NULL) {
+ state->buffered_io_page = addr;
+ state->shared_page = addr + TARGET_PAGE_SIZE;
+ } else {
+ error_report("failed to map ioreq server resources: error %d handle=%p",
+ errno, xen_xc);
+ if (errno != EOPNOTSUPP)
+ return -1;
+ }
rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
- &ioreq_pfn, &bufioreq_pfn,
+ (state->shared_page == NULL) ?
+ &ioreq_pfn : NULL,
+ (state->buffered_io_page == NULL) ?
+ &bufioreq_pfn : NULL,
&bufioreq_evtchn);
if (rc < 0) {
error_report("failed to get ioreq server info: error %d handle=%p",
@@ -1249,24 +1273,36 @@ static int xen_map_ioreq_server(XenIOState *state)
return rc;
}
- DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
- DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
+ if (state->shared_page == NULL)
+ DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
+
+ if (state->buffered_io_page == NULL)
+ DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
+
DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
- state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
- PROT_READ|PROT_WRITE,
- 1, &ioreq_pfn, NULL);
if (state->shared_page == NULL) {
- error_report("map shared IO page returned error %d handle=%p",
- errno, xen_xc);
- return -1;
+ state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
+ PROT_READ|PROT_WRITE,
+ 1, &ioreq_pfn, NULL);
+ if (state->shared_page == NULL) {
+ error_report("map shared IO page returned error %d handle=%p",
+ errno, xen_xc);
+ }
}
- state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
- PROT_READ|PROT_WRITE,
- 1, &bufioreq_pfn, NULL);
if (state->buffered_io_page == NULL) {
- error_report("map buffered IO page returned error %d", errno);
+ state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
+ PROT_READ|PROT_WRITE,
+ 1, &bufioreq_pfn,
+ NULL);
+ if (state->buffered_io_page == NULL) {
+ error_report("map buffered IO page returned error %d", errno);
+ return -1;
+ }
+ }
+
+ if (state->shared_page == NULL || state->buffered_io_page == NULL) {
return -1;
}
diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index 86c7f26106..6c59ab56e9 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -91,6 +91,22 @@ static inline void *xenforeignmemory_map2(xenforeignmemory_handle *h,
return xenforeignmemory_map(h, dom, prot, pages, arr, err);
}
+typedef void xenforeignmemory_resource_handle;
+
+static inline xenforeignmemory_resource_handle *xenforeignmemory_map_resource(
+    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
+    unsigned int id, unsigned long frame, unsigned long nr_frames,
+    void **paddr, int prot, int flags)
+{
+    errno = EOPNOTSUPP;
+    return NULL; /* pointer return type: NULL, not -1, signals failure */
+}
+
+static inline void xenforeignmemory_unmap_resource(
+    xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres)
+{
+}
+
#endif
#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 40900
--
2.11.0
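The xen_common.h compat stub above is what makes the run-time fallback in
this patch work when QEMU is built against a libxenforeignmemory that lacks
the resource API: the stub fails with EOPNOTSUPP, and xen_map_ioreq_server()
drops back to mapping individual gfns. The pattern, simplified from the
patch:

    void *addr = NULL;
    xenforeignmemory_resource_handle *fres;

    fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
                                         XENMEM_resource_ioreq_server,
                                         state->ioservid, 0 /* frame */,
                                         2 /* nr_frames */, &addr,
                                         PROT_READ | PROT_WRITE, 0);
    if (fres == NULL && errno != EOPNOTSUPP) {
        return -1;                    /* genuine mapping failure */
    }
    /* fres == NULL && errno == EOPNOTSUPP: fall back to per-gfn
     * xenforeignmemory_map() using the pfns reported by
     * xen_get_ioreq_server_info(). */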
Thread overview: 56+ messages
2017-12-06 7:50 [RFC Patch v4 0/8] Extend resources to support more vcpus in single VM Chao Gao
2017-12-06 7:50 ` [RFC Patch v4 1/8] ioreq: remove most 'buf' parameter from static functions Chao Gao
2017-12-06 14:44 ` Paul Durrant
2017-12-06 8:37 ` Chao Gao
2017-12-06 7:50 ` [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages Chao Gao
2017-12-06 15:04 ` Paul Durrant
2017-12-06 9:02 ` Chao Gao
2017-12-06 16:10 ` Paul Durrant
2017-12-07 8:41 ` Paul Durrant
2017-12-07 6:56 ` Chao Gao
2017-12-08 11:06 ` Paul Durrant [this message]
2017-12-12 1:03 ` Chao Gao
2017-12-12 9:07 ` Paul Durrant
2017-12-12 23:39 ` Chao Gao
2017-12-13 10:49 ` Paul Durrant
2017-12-13 17:50 ` Paul Durrant
2017-12-14 14:50 ` Paul Durrant
2017-12-15 0:35 ` Chao Gao
2017-12-15 9:40 ` Paul Durrant
2018-04-18 8:19 ` Jan Beulich
2017-12-06 7:50 ` [RFC Patch v4 3/8] xl/acpi: unify the computation of lapic_id Chao Gao
2018-02-22 18:05 ` Wei Liu
2017-12-06 7:50 ` [RFC Patch v4 4/8] hvmloader: boot cpu through broadcast Chao Gao
2018-02-22 18:44 ` Wei Liu
2018-02-23 8:41 ` Jan Beulich
2018-02-23 16:42 ` Roger Pau Monné
2018-02-24 5:49 ` Chao Gao
2018-02-26 8:28 ` Jan Beulich
2018-02-26 12:33 ` Chao Gao
2018-02-26 14:19 ` Roger Pau Monné
2018-04-18 8:38 ` Jan Beulich
2018-04-18 11:20 ` Chao Gao
2018-04-18 11:50 ` Jan Beulich
2017-12-06 7:50 ` [RFC Patch v4 5/8] Tool/ACPI: DSDT extension to support more vcpus Chao Gao
2017-12-06 7:50 ` [RFC Patch v4 6/8] hvmload: Add x2apic entry support in the MADT and SRAT build Chao Gao
2018-04-18 8:48 ` Jan Beulich
2017-12-06 7:50 ` [RFC Patch v4 7/8] x86/hvm: bump the number of pages of shadow memory Chao Gao
2018-02-27 14:17 ` George Dunlap
2018-04-18 8:53 ` Jan Beulich
2018-04-18 11:39 ` Chao Gao
2018-04-18 11:50 ` Andrew Cooper
2018-04-18 11:59 ` Jan Beulich
2017-12-06 7:50 ` [RFC Patch v4 8/8] x86/hvm: bump the maximum number of vcpus to 512 Chao Gao
2018-02-22 18:46 ` Wei Liu
2018-02-23 8:50 ` Jan Beulich
2018-02-23 17:18 ` Wei Liu
2018-02-23 18:11 ` Roger Pau Monné
2018-02-24 6:26 ` Chao Gao
2018-02-26 8:26 ` Jan Beulich
2018-02-26 13:11 ` Chao Gao
2018-02-26 16:10 ` Jan Beulich
2018-03-01 5:21 ` Chao Gao
2018-03-01 7:17 ` Juergen Gross
2018-03-01 7:37 ` Jan Beulich
2018-03-01 7:11 ` Chao Gao
2018-02-27 14:59 ` George Dunlap