* [Qemu-devel] [PATCH v2] util/mmap-alloc: fix hugetlb support on ppc64
From: Michael S. Tsirkin @ 2015-12-02 20:04 UTC
To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Greg Kurz
Since commit 8561c9244ddf1122d "exec: allocate PROT_NONE pages on top of
RAM", it is no longer possible to back guest RAM with hugepages on ppc64
hosts:
mmap(NULL, 285212672, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x3fff57000000
mmap(0x3fff57000000, 268435456, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED, 19, 0) = -1 EBUSY (Device or resource busy)
This is because on ppc64, Linux fixes a page size for a virtual address
at mmap time, so we can't switch a range of memory from anonymous
small pages to hugetlbs with MAP_FIXED.
See commit d0f13e3c20b6fb73ccb467bdca97fa7cf5a574cd
("[POWERPC] Introduce address space "slices"") in Linux
history for the details.
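In stand-alone form the failing sequence is roughly the following (a
hypothetical reproducer, not part of this patch; fd is assumed to be open
on a hugetlbfs file and size a multiple of the huge page size):
    #include <stdio.h>
    #include <sys/mman.h>
    static int remap_hugetlb(int fd, size_t size)
    {
        /* Reserve the range with small anonymous pages; on ppc64 this fixes
         * the page size of the whole slice covering the range. */
        void *guard = mmap(NULL, size, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (guard == MAP_FAILED) {
            return -1;
        }
        /* The slice already carries the small page size, so mapping hugetlbfs
         * pages into it with MAP_FIXED fails with EBUSY on ppc64. */
        if (mmap(guard, size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_FIXED, fd, 0) == MAP_FAILED) {
            perror("mmap MAP_FIXED");
            return -1;
        }
        return 0;
    }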
Detect this and create the PROT_NONE mapping using the same fd.
Naturally, this makes the guard page bigger with hugetlbfs.
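Condensed to a sketch with made-up names (the real change, including the
alignment handling, is in util/mmap-alloc.c below), the idea is:
    #include <errno.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/vfs.h>
    #define HUGETLBFS_MAGIC 0x958458f6
    static size_t fd_pagesize(int fd)
    {
        struct statfs fs;
        int ret;
        if (fd != -1) {
            do {
                ret = fstatfs(fd, &fs);
            } while (ret != 0 && errno == EINTR);
            if (ret == 0 && fs.f_type == HUGETLBFS_MAGIC) {
                return fs.f_bsize;      /* huge page size of the mount */
            }
        }
        return getpagesize();           /* anonymous memory or a regular file */
    }
    static void *reserve_and_map(int fd, size_t size)
    {
        size_t pagesize = fd_pagesize(fd);
        /* Create the PROT_NONE reservation from the fd itself whenever its
         * page size differs from the system one, so the slice gets the huge
         * page size; MAP_NORESERVE avoids committing backing store for it. */
        int anonfd = (fd == -1 || pagesize == (size_t)getpagesize()) ? -1 : fd;
        int flags  = (anonfd == -1) ? MAP_ANONYMOUS : MAP_NORESERVE;
        void *ptr  = mmap(NULL, size + pagesize, PROT_NONE,
                          flags | MAP_PRIVATE, anonfd, 0);
        if (ptr == MAP_FAILED) {
            return NULL;
        }
        /* Remap the usable part in place; the trailing page stays PROT_NONE
         * and acts as the guard area. */
        return mmap(ptr, size, PROT_READ | PROT_WRITE,
                    MAP_FIXED | (fd == -1 ? MAP_ANONYMOUS : 0) | MAP_PRIVATE,
                    fd, 0);
    }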
Based on patch by Greg Kurz.
Cc: Rik van Riel <riel@redhat.com>
CC: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
Since v1:
typo fixes.
include/qemu/mmap-alloc.h | 2 ++
util/mmap-alloc.c | 39 +++++++++++++++++++++++++++++++++++++++
util/oslib-posix.c | 24 +-----------------------
3 files changed, 42 insertions(+), 23 deletions(-)
diff --git a/include/qemu/mmap-alloc.h b/include/qemu/mmap-alloc.h
index 56388e6..0899b2f 100644
--- a/include/qemu/mmap-alloc.h
+++ b/include/qemu/mmap-alloc.h
@@ -3,6 +3,8 @@
#include "qemu-common.h"
+size_t qemu_fd_getpagesize(int fd);
+
void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared);
void qemu_ram_munmap(void *ptr, size_t size);
diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
index c37acbe..54793a5 100644
--- a/util/mmap-alloc.c
+++ b/util/mmap-alloc.c
@@ -14,6 +14,32 @@
#include <sys/mman.h>
#include <assert.h>
+#define HUGETLBFS_MAGIC 0x958458f6
+
+#ifdef CONFIG_LINUX
+#include <sys/vfs.h>
+#endif
+
+size_t qemu_fd_getpagesize(int fd)
+{
+#ifdef CONFIG_LINUX
+ struct statfs fs;
+ int ret;
+
+ if (fd != -1) {
+ do {
+ ret = fstatfs(fd, &fs);
+ } while (ret != 0 && errno == EINTR);
+
+ if (ret == 0 && fs.f_type == HUGETLBFS_MAGIC) {
+ return fs.f_bsize;
+ }
+ }
+#endif
+
+ return getpagesize();
+}
+
void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared)
{
/*
@@ -21,7 +47,20 @@ void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared)
* space, even if size is already aligned.
*/
size_t total = size + align;
+#if defined(__powerpc64__) && defined(__linux__)
+ /* On ppc64 mappings in the same segment (aka slice) must share the same
+ * page size. Since we will be re-allocating part of this segment
+ * from the supplied fd, we should make sure to use the same page size,
+ * unless we are using the system page size, in which case anonymous memory
+ * is OK. Use align as a hint for the page size.
+ * In this case, set MAP_NORESERVE to avoid allocating backing store memory.
+ */
+ int anonfd = fd == -1 || qemu_fd_getpagesize(fd) == getpagesize() ? -1 : fd;
+ int flags = anonfd == -1 ? MAP_ANONYMOUS : MAP_NORESERVE;
+ void *ptr = mmap(0, total, PROT_NONE, flags | MAP_PRIVATE, anonfd, 0);
+#else
void *ptr = mmap(0, total, PROT_NONE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+#endif
size_t offset = QEMU_ALIGN_UP((uintptr_t)ptr, align) - (uintptr_t)ptr;
void *ptr1;
diff --git a/util/oslib-posix.c b/util/oslib-posix.c
index 914cef5..d25f671 100644
--- a/util/oslib-posix.c
+++ b/util/oslib-posix.c
@@ -46,7 +46,6 @@ extern int daemon(int, int);
#else
# define QEMU_VMALLOC_ALIGN getpagesize()
#endif
-#define HUGETLBFS_MAGIC 0x958458f6
#include <termios.h>
#include <unistd.h>
@@ -65,7 +64,6 @@ extern int daemon(int, int);
#ifdef CONFIG_LINUX
#include <sys/syscall.h>
-#include <sys/vfs.h>
#endif
#ifdef __FreeBSD__
@@ -340,26 +338,6 @@ static void sigbus_handler(int signal)
siglongjmp(sigjump, 1);
}
-static size_t fd_getpagesize(int fd)
-{
-#ifdef CONFIG_LINUX
- struct statfs fs;
- int ret;
-
- if (fd != -1) {
- do {
- ret = fstatfs(fd, &fs);
- } while (ret != 0 && errno == EINTR);
-
- if (ret == 0 && fs.f_type == HUGETLBFS_MAGIC) {
- return fs.f_bsize;
- }
- }
-#endif
-
- return getpagesize();
-}
-
void os_mem_prealloc(int fd, char *area, size_t memory)
{
int ret;
@@ -387,7 +365,7 @@ void os_mem_prealloc(int fd, char *area, size_t memory)
exit(1);
} else {
int i;
- size_t hpagesize = fd_getpagesize(fd);
+ size_t hpagesize = qemu_fd_getpagesize(fd);
size_t numpages = DIV_ROUND_UP(memory, hpagesize);
/* MAP_POPULATE silently ignores failures */
--
MST
* Re: [Qemu-devel] [PATCH v2] util/mmap-alloc: fix hugetlb support on ppc64
From: Greg Kurz @ 2015-12-02 20:26 UTC
To: Michael S. Tsirkin; +Cc: Kevin Wolf, Paolo Bonzini, qemu-devel
On Wed, 2 Dec 2015 22:04:53 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:
> Since commit 8561c9244ddf1122d "exec: allocate PROT_NONE pages on top of
> RAM", it is no longer possible to back guest RAM with hugepages on ppc64
> hosts:
>
> mmap(NULL, 285212672, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x3fff57000000
> mmap(0x3fff57000000, 268435456, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_FIXED, 19, 0) = -1 EBUSY (Device or resource busy)
>
> This is because on ppc64, Linux fixes a page size for a virtual address
> at mmap time, so we can't switch a range of memory from anonymous
> small pages to hugetlbs with MAP_FIXED.
>
> See commit d0f13e3c20b6fb73ccb467bdca97fa7cf5a574cd
> ("[POWERPC] Introduce address space "slices"") in Linux
> history for the details.
>
> Detect this and create the PROT_NONE mapping using the same fd.
>
> Naturally, this makes the guard page bigger with hugetlbfs.
>
> Based on patch by Greg Kurz.
>
> Cc: Rik van Riel <riel@redhat.com>
> CC: Greg Kurz <gkurz@linux.vnet.ibm.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>
It works!
Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Tested-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
> Since v1:
> typo fixes.
>
> include/qemu/mmap-alloc.h | 2 ++
> util/mmap-alloc.c | 39 +++++++++++++++++++++++++++++++++++++++
> util/oslib-posix.c | 24 +-----------------------
> 3 files changed, 42 insertions(+), 23 deletions(-)
>
> diff --git a/include/qemu/mmap-alloc.h b/include/qemu/mmap-alloc.h
> index 56388e6..0899b2f 100644
> --- a/include/qemu/mmap-alloc.h
> +++ b/include/qemu/mmap-alloc.h
> @@ -3,6 +3,8 @@
>
> #include "qemu-common.h"
>
> +size_t qemu_fd_getpagesize(int fd);
> +
> void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared);
>
> void qemu_ram_munmap(void *ptr, size_t size);
> diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
> index c37acbe..54793a5 100644
> --- a/util/mmap-alloc.c
> +++ b/util/mmap-alloc.c
> @@ -14,6 +14,32 @@
> #include <sys/mman.h>
> #include <assert.h>
>
> +#define HUGETLBFS_MAGIC 0x958458f6
> +
> +#ifdef CONFIG_LINUX
> +#include <sys/vfs.h>
> +#endif
> +
> +size_t qemu_fd_getpagesize(int fd)
> +{
> +#ifdef CONFIG_LINUX
> + struct statfs fs;
> + int ret;
> +
> + if (fd != -1) {
> + do {
> + ret = fstatfs(fd, &fs);
> + } while (ret != 0 && errno == EINTR);
> +
> + if (ret == 0 && fs.f_type == HUGETLBFS_MAGIC) {
> + return fs.f_bsize;
> + }
> + }
> +#endif
> +
> + return getpagesize();
> +}
> +
> void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared)
> {
> /*
> @@ -21,7 +47,20 @@ void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared)
> * space, even if size is already aligned.
> */
> size_t total = size + align;
> +#if defined(__powerpc64__) && defined(__linux__)
> + /* On ppc64 mappings in the same segment (aka slice) must share the same
> + * page size. Since we will be re-allocating part of this segment
> + * from the supplied fd, we should make sure to use the same page size,
> + * unless we are using the system page size, in which case anonymous memory
> + * is OK. Use align as a hint for the page size.
> + * In this case, set MAP_NORESERVE to avoid allocating backing store memory.
> + */
> + int anonfd = fd == -1 || qemu_fd_getpagesize(fd) == getpagesize() ? -1 : fd;
> + int flags = anonfd == -1 ? MAP_ANONYMOUS : MAP_NORESERVE;
> + void *ptr = mmap(0, total, PROT_NONE, flags | MAP_PRIVATE, anonfd, 0);
> +#else
> void *ptr = mmap(0, total, PROT_NONE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> +#endif
> size_t offset = QEMU_ALIGN_UP((uintptr_t)ptr, align) - (uintptr_t)ptr;
> void *ptr1;
>
> diff --git a/util/oslib-posix.c b/util/oslib-posix.c
> index 914cef5..d25f671 100644
> --- a/util/oslib-posix.c
> +++ b/util/oslib-posix.c
> @@ -46,7 +46,6 @@ extern int daemon(int, int);
> #else
> # define QEMU_VMALLOC_ALIGN getpagesize()
> #endif
> -#define HUGETLBFS_MAGIC 0x958458f6
>
> #include <termios.h>
> #include <unistd.h>
> @@ -65,7 +64,6 @@ extern int daemon(int, int);
>
> #ifdef CONFIG_LINUX
> #include <sys/syscall.h>
> -#include <sys/vfs.h>
> #endif
>
> #ifdef __FreeBSD__
> @@ -340,26 +338,6 @@ static void sigbus_handler(int signal)
> siglongjmp(sigjump, 1);
> }
>
> -static size_t fd_getpagesize(int fd)
> -{
> -#ifdef CONFIG_LINUX
> - struct statfs fs;
> - int ret;
> -
> - if (fd != -1) {
> - do {
> - ret = fstatfs(fd, &fs);
> - } while (ret != 0 && errno == EINTR);
> -
> - if (ret == 0 && fs.f_type == HUGETLBFS_MAGIC) {
> - return fs.f_bsize;
> - }
> - }
> -#endif
> -
> - return getpagesize();
> -}
> -
> void os_mem_prealloc(int fd, char *area, size_t memory)
> {
> int ret;
> @@ -387,7 +365,7 @@ void os_mem_prealloc(int fd, char *area, size_t memory)
> exit(1);
> } else {
> int i;
> - size_t hpagesize = fd_getpagesize(fd);
> + size_t hpagesize = qemu_fd_getpagesize(fd);
> size_t numpages = DIV_ROUND_UP(memory, hpagesize);
>
> /* MAP_POPULATE silently ignores failures */
* Re: [Qemu-devel] [PATCH v2] util/mmap-alloc: fix hugetlb support on ppc64
From: Rik van Riel @ 2015-12-02 20:37 UTC
To: Michael S. Tsirkin, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Greg Kurz
On 12/02/2015 03:04 PM, Michael S. Tsirkin wrote:
> Since commit 8561c9244ddf1122d "exec: allocate PROT_NONE pages on top of
> RAM", it is no longer possible to back guest RAM with hugepages on ppc64
> hosts:
>
> mmap(NULL, 285212672, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x3fff57000000
> mmap(0x3fff57000000, 268435456, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_FIXED, 19, 0) = -1 EBUSY (Device or resource busy)
>
> This is because on ppc64, Linux fixes a page size for a virtual address
> at mmap time, so we can't switch a range of memory from anonymous
> small pages to hugetlbs with MAP_FIXED.
>
> See commit d0f13e3c20b6fb73ccb467bdca97fa7cf5a574cd
> ("[POWERPC] Introduce address space "slices"") in Linux
> history for the details.
>
> Detect this and create the PROT_NONE mapping using the same fd.
>
> Naturally, this makes the guard page bigger with hugetlbfs.
>
> Based on patch by Greg Kurz.
>
> Cc: Rik van Riel <riel@redhat.com>
> CC: Greg Kurz <gkurz@linux.vnet.ibm.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
--
All rights reversed