From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
sstabellini@kernel.org, wei.liu2@citrix.com,
George.Dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
ian.jackson@eu.citrix.com, tim@xen.org, jbeulich@suse.com
Subject: [PATCH 2/2] xen/x86: add a way to obtain the needed number of memory map entries
Date: Mon, 5 Dec 2016 17:34:09 +0100 [thread overview]
Message-ID: <20161205163409.16714-3-jgross@suse.com> (raw)
In-Reply-To: <20161205163409.16714-1-jgross@suse.com>
Today there is no way for a domain to obtain the number of entries of
the machine memory map returned by the XENMEM_machine_memory_map
hypercall. Modify the interface to return just the needed number of map
entries in case the buffer is specified as NULL.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
xen/arch/x86/mm.c | 38 +++++++++++++++++++++++---------------
xen/include/public/memory.h | 2 ++
2 files changed, 25 insertions(+), 15 deletions(-)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f8e679d..d384022 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4736,15 +4736,18 @@ static int _handle_iomem_range(unsigned long s, unsigned long e,
XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_param;
XEN_GUEST_HANDLE(e820entry_t) buffer;
- if ( ctxt->n + 1 >= ctxt->map.nr_entries )
- return -E2BIG;
- ent.addr = (uint64_t)ctxt->s << PAGE_SHIFT;
- ent.size = (uint64_t)(s - ctxt->s) << PAGE_SHIFT;
- ent.type = E820_RESERVED;
- buffer_param = guest_handle_cast(ctxt->map.buffer, e820entry_t);
- buffer = guest_handle_from_param(buffer_param, e820entry_t);
- if ( __copy_to_guest_offset(buffer, ctxt->n, &ent, 1) )
- return -EFAULT;
+ if ( !guest_handle_is_null(ctxt->map.buffer) )
+ {
+ if ( ctxt->n + 1 >= ctxt->map.nr_entries )
+ return -E2BIG;
+ ent.addr = (uint64_t)ctxt->s << PAGE_SHIFT;
+ ent.size = (uint64_t)(s - ctxt->s) << PAGE_SHIFT;
+ ent.type = E820_RESERVED;
+ buffer_param = guest_handle_cast(ctxt->map.buffer, e820entry_t);
+ buffer = guest_handle_from_param(buffer_param, e820entry_t);
+ if ( __copy_to_guest_offset(buffer, ctxt->n, &ent, 1) )
+ return -EFAULT;
+ }
ctxt->n++;
}
ctxt->s = e + 1;
@@ -4978,6 +4981,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
XEN_GUEST_HANDLE(e820entry_t) buffer;
XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_param;
unsigned int i;
+ bool store;
rc = xsm_machine_memory_map(XSM_PRIV);
if ( rc )
@@ -4986,9 +4990,10 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
if ( copy_from_guest(&ctxt.map, arg, 1) )
return -EFAULT;
+ store = !guest_handle_is_null(ctxt.map.buffer);
buffer_param = guest_handle_cast(ctxt.map.buffer, e820entry_t);
buffer = guest_handle_from_param(buffer_param, e820entry_t);
- if ( !guest_handle_okay(buffer, ctxt.map.nr_entries) )
+ if ( store && !guest_handle_okay(buffer, ctxt.map.nr_entries) )
return -EFAULT;
for ( i = 0, ctxt.n = 0, ctxt.s = 0; i < e820.nr_map; ++i, ++ctxt.n )
@@ -5005,13 +5010,16 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
if ( rc )
break;
}
- if ( ctxt.map.nr_entries <= ctxt.n + 1 )
+ if ( store )
{
- rc = -E2BIG;
- break;
+ if ( ctxt.map.nr_entries <= ctxt.n + 1 )
+ {
+ rc = -E2BIG;
+ break;
+ }
+ if ( __copy_to_guest_offset(buffer, ctxt.n, e820.map + i, 1) )
+ return -EFAULT;
}
- if ( __copy_to_guest_offset(buffer, ctxt.n, e820.map + i, 1) )
- return -EFAULT;
ctxt.s = PFN_UP(e820.map[i].addr + e820.map[i].size);
}
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 20df769..2a61e11 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -341,6 +341,8 @@ DEFINE_XEN_GUEST_HANDLE(xen_memory_map_t);
 * XENMEM_memory_map.
 * In case of a buffer not capable to hold all entries of the physical
 * memory map -E2BIG is returned and the buffer is filled completely.
+ * Specifying buffer as NULL will return the number of entries required
+ * to store the complete memory map.
 * arg == addr of xen_memory_map_t.
 */
#define XENMEM_machine_memory_map 10
--
2.10.2
Thread overview: 13+ messages
2016-12-05 16:34 [PATCH 0/2] xen: modify dom0 interface for obtaining memory map Juergen Gross
2016-12-05 16:34 ` [PATCH 1/2] xen/x86: return partial memory map in case of not enough space Juergen Gross
2016-12-05 17:17 ` Jan Beulich
[not found] ` <5845AF3E020000780012541F@suse.com>
2016-12-06 7:43 ` Juergen Gross
2016-12-06 8:15 ` Jan Beulich
[not found] ` <584681CA020000780012577A@suse.com>
2016-12-06 8:33 ` Juergen Gross
2016-12-06 8:51 ` Jan Beulich
[not found] ` <58468A1202000078001257BE@suse.com>
2016-12-06 9:44 ` Juergen Gross
2016-12-06 9:51 ` Jan Beulich
2016-12-05 16:34 ` Juergen Gross [this message]
2016-12-05 16:39 ` [PATCH 0/2] xen: modify dom0 interface for obtaining memory map Andrew Cooper
2016-12-05 16:43 ` Juergen Gross
2016-12-05 17:06 ` Andrew Cooper