From: Greg Kurz <gkurz@linux.vnet.ibm.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] mmap-alloc: use same backend for all mappings
Date: Mon, 30 Nov 2015 14:46:31 +0100
Message-ID: <20151130144631.4736280b@bahia.local>
In-Reply-To: <20151130150353-mutt-send-email-mst@redhat.com>
On Mon, 30 Nov 2015 15:06:33 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Mon, Nov 30, 2015 at 11:51:57AM +0100, Greg Kurz wrote:
> > Since commit 8561c9244ddf1122d "exec: allocate PROT_NONE pages on top of RAM",
> > it is no longer possible to back guest RAM with hugepages on ppc64 hosts:
> >
> > mmap(NULL, 285212672, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x3fff57000000
> > mmap(0x3fff57000000, 268435456, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 19, 0) = -1 EBUSY (Device or resource busy)
> >
> > This is due to a limitation on ppc64 that requires MAP_FIXED mappings to have
> > the same page size as other mappings already present in the same "slice" of
> > virtual address space (Cc'ing Ben for details).
>
> I'd like some details please.
> What do you mean when you say "same page size" and "slice"?
>
On ppc64, the address space is divided into 256MB segments, and all pages
within a segment must have the same size. This is a hw limitation IIUC. I
don't know if it can be fixed and I'll let Ben comment on it.
Hugepage support is implemented using an abstraction of segments called
"slices". Here's a quote from the related commit changelog in the kernel
tree:
commit d0f13e3c20b6fb73ccb467bdca97fa7cf5a574cd
Author: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Date: Tue May 8 16:27:27 2007 +1000
[POWERPC] Introduce address space "slices"
...
The main issues are:
- To maintain/keep track of the page size per "segment" (as we can
only have one page size per segment on powerpc, which are 256MB
divisions of the address space).
- To make sure special mappings stay within their allotted
"segments" (including MAP_FIXED crap)
- To make sure everybody else doesn't mmap/brk/grow_stack into a
"segment" that is used for a special mapping
...
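FWIW, the failure can be reproduced outside QEMU with a small test program
that mimics the two mmap() calls above. This is just an illustrative sketch,
not something taken from QEMU: the hugetlbfs path is only an example mount
point and the sizes are arbitrary.

/* sketch: reproduce the ppc64 EBUSY outside QEMU */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define RAM_SIZE (256UL * 1024 * 1024)           /* guest RAM, 16M hugepages */
#define RESERVE  (RAM_SIZE + 16UL * 1024 * 1024) /* extra room for alignment */

int main(void)
{
    /* example hugetlbfs-backed file, adjust the path to your setup */
    int fd = open("/dev/hugepages/test", O_RDWR | O_CREAT, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ftruncate(fd, RAM_SIZE);

    /* 1st mmap: anonymous PROT_NONE reservation -> lands in a 64k page slice */
    void *reserve = mmap(NULL, RESERVE, PROT_NONE,
                         MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    if (reserve == MAP_FAILED) {
        perror("mmap (reserve)");
        return 1;
    }

    /* 2nd mmap: MAP_FIXED hugepage mapping into the same slice -> EBUSY */
    void *ram = mmap(reserve, RAM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_FIXED, fd, 0);
    if (ram == MAP_FAILED) {
        perror("mmap (MAP_FIXED)");  /* "Device or resource busy" on ppc64 */
        return 1;
    }
    return 0;
}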
> > This is exactly what happens
> > when calling mmap() above: the first one uses the native host page size (64k)
> > and the second one uses the huge page size (16M).
> >
> > To be sure we always have the same page size, let's use the same backend for
> > both calls to mmap(): this is enough to fix the ppc64 issue.
> >
> > This has no effect on RAM-based mappings.
> >
> > Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
> > ---
> >
> > This is a bug fix for 2.5
> >
> > util/mmap-alloc.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
> > index c37acbe58ede..0ff221dd94f4 100644
> > --- a/util/mmap-alloc.c
> > +++ b/util/mmap-alloc.c
> > @@ -21,7 +21,8 @@ void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared)
> >       * space, even if size is already aligned.
> >       */
> >      size_t total = size + align;
> > -    void *ptr = mmap(0, total, PROT_NONE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > +    void *ptr = mmap(0, total, PROT_NONE,
> > +                     (fd == -1 ? MAP_ANONYMOUS : 0) | MAP_PRIVATE, fd, 0);
> >      size_t offset = QEMU_ALIGN_UP((uintptr_t)ptr, align) - (uintptr_t)ptr;
> >      void *ptr1;
> >
>
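Just to make the effect of the patch explicit: with the fd also used for the
PROT_NONE reservation, the reservation in the sketch earlier in this mail
would become something like the following (again only an illustration, not
the actual QEMU code):

/* PROT_NONE reservation backed by the hugetlbfs fd: the kernel picks a
 * 16M page slice up front, so the later MAP_FIXED mapping of the same fd
 * into that slice no longer fails with EBUSY. */
void *reserve = mmap(NULL, RESERVE, PROT_NONE,
                     (fd == -1 ? MAP_ANONYMOUS : 0) | MAP_PRIVATE, fd, 0);

For an anonymous RAM block, fd is -1 and the expression degrades to the
current MAP_ANONYMOUS | MAP_PRIVATE reservation, which is why the change has
no effect on regular RAM.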