From: Alon Levy <alevy@redhat.com>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dave Airlie <airlied@redhat.com>,
qemu-devel@nongnu.org, spice-devel@freedesktop.org
Subject: Re: [Qemu-devel] [Spice-devel] viewing continuous guest virtual memory as continuous in qemu
Date: Tue, 11 Oct 2011 14:21:17 +0200
Message-ID: <20111011121529.GA1049@bow.tlv.redhat.com>
In-Reply-To: <4E94284C.2000005@redhat.com>
On Tue, Oct 11, 2011 at 01:28:12PM +0200, Gerd Hoffmann wrote:
> Hi,
>
> >AFAIU this works only when the guest allocates a contiguous range of
> >physical pages. That is a heavy requirement to place on the guest,
> >and one I'd like to drop.
>
> Is it? The world is moving to huge pages, with all the stuff needed
> for it like moving around userspace pages to compact memory and make
> huge page allocation easier. I think these days it is a lot easier
> to allocate 2M of contiguous physical memory than it used to be a
> few years ago. At least on Linux, dunno about Windows.
>
> When allocating stuff at boot time (say, in the qxl kms driver),
> allocating even larger chunks shouldn't be a big issue. And
> allocating a single big guest memory chunk, then registering it as a
> qxl memory slot, is what works best with the existing interfaces I
> guess.
Right, this would work. I was trying to avoid claiming a large chunk up
front. I was also trying to avoid having our own allocator, although I
think that's not really a problem (it could probably be replaced with an
in-kernel allocator).
>
> Another option we can think about is a 64-bit PCI BAR for the
> surfaces, which could then be moved out of the low 4G.
>
I heard this suggested by Avi as well. So this would allow us to
allocate a large chunk without requiring any memory hole below 4G?
> >So I would like to have the guest use a regular
> >allocator, producing for instance two consecutive pages in virtual
> >memory that are scattered in physical memory. Those two guest
> >physical page addresses (gp1 and gp2) correspond to two host virtual
> >memory addresses (hv1, hv2). I would now like to provide
> >spice-server with a single virtual address p that maps to those two
> >pages in sequence.
>
> Playing mapping tricks like this doesn't come for free. When doing
> it this way we probably still want to register a big chunk of memory
> as a qxl memory slot, so we pay the mapping cost only once, not for
> each and every surface we create and destroy.
>
> cheers,
> Gerd
>
> _______________________________________________
> Spice-devel mailing list
> Spice-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/spice-devel
Thread overview: 11+ messages
2011-10-02 13:24 [Qemu-devel] viewing continuous guest virtual memory as continuous in qemu Alon Levy
2011-10-02 14:31 ` [Qemu-devel] [Spice-devel] " Alon Levy
2011-10-02 17:12 ` Avi Kivity
2011-10-03 7:49 ` Alon Levy
2011-10-03 8:17 ` Yonit Halperin
2011-10-03 8:37 ` Alon Levy
2011-10-03 8:49 ` Alon Levy
2011-10-03 15:10 ` Avi Kivity
2011-10-11 11:28 ` Gerd Hoffmann
2011-10-11 12:21 ` Alon Levy [this message]
2011-10-11 13:20 ` Gerd Hoffmann