From: Alexander Graf <agraf@suse.de>
To: Ian Molton <ian.molton@collabora.co.uk>
Cc: cam@cs.ualberta.ca, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] Address translation - virt->phys->ram
Date: Mon, 22 Feb 2010 19:56:59 +0100
Message-ID: <4B82D37B.2040906@suse.de>
In-Reply-To: <4B82C325.5020300@collabora.co.uk>
Ian Molton wrote:
> Anthony Liguori wrote:
>
>> On 02/22/2010 10:46 AM, Ian Molton wrote:
>>
>>> Anthony Liguori wrote:
>>>
>>>
>>>
>>>> cpu_physical_memory_map().
>>>>
>>>> But this function has some subtle characteristics. It may return a
>>>> bounce buffer if you attempt to map MMIO memory. There is a limited
>>>> pool of bounce buffers available so it may return NULL in the event that
>>>> it cannot allocate a bounce buffer.
>>>>
>>>> It may also return a partial result if you're attempting to map a region
>>>> that straddles multiple memory slots.
>>>>
>>>>
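(For reference, callers typically loop until the whole range is covered --
a sketch only, the exact types vary between qemu versions and
handle_chunk() is just a placeholder:)

  target_phys_addr_t addr = start;
  target_phys_addr_t remaining = total_len;

  while (remaining) {
      target_phys_addr_t chunk = remaining;
      void *ptr = cpu_physical_memory_map(addr, &chunk, is_write);

      if (!ptr) {
          /* no bounce buffer available right now - retry later or give up */
          break;
      }
      /* chunk may have been reduced: only [addr, addr + chunk) is mapped */
      handle_chunk(ptr, chunk);
      cpu_physical_memory_unmap(ptr, chunk, is_write, chunk);

      addr += chunk;
      remaining -= chunk;
  }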
>>> Thanks. I had found this, but was unsure whether it was quite what I
>>> wanted. (Also, is it possible to tell when it has, e.g., allocated a
>>> bounce buffer?)
>>>
>>> Basically, I need to get buffer(s) from guest userspace into the host's
>>> address space. The buffers are virtually contiguous but likely
>>> physically discontiguous. They are allocated with malloc() and there's
>>> nothing I can do about that.
>>>
>>> The obvious but slow solution would be to copy all the buffers into nice
>>> virtio-based scatter/gather buffers and feed them to the host that way;
>>> however, it's not fast enough.
>>>
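(Just to illustrate why the virt->phys step hurts: a virtually contiguous
malloc()ed buffer has to be resolved one guest page at a time, roughly as
below. cpu_get_phys_page_debug() does a software page-table walk here;
add_page() is a made-up placeholder for whatever collects the
scatter/gather list:)

  target_ulong va = start & TARGET_PAGE_MASK;

  while (va < start + len) {
      target_phys_addr_t pa = cpu_get_phys_page_debug(env, va);

      if (pa == (target_phys_addr_t)-1) {
          break; /* page not currently mapped by the guest */
      }
      /* each page can land anywhere in guest RAM, and the result is only
       * a snapshot - the guest may remap or swap these pages later on */
      add_page(pa, TARGET_PAGE_SIZE);
      va += TARGET_PAGE_SIZE;
  }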
>>>
>> Why is this slow?
>>
>
> Because the buffers will all have to be copied. So far, switching from
> abusing an instruction to interrupt qemu to using virtio has incurred
> roughly a 5x slowdown. I'd guess much of this is down to the fact that
> we have to switch to kernel mode on the guest and back again for every
> single GL call...
>
> If I can establish some kind of stable guest_virt->phys->host_virt
> mapping, many of the problems will just 'go away'. A way to interrupt
> qemu from user mode on the guest without involving the guest kernel
> would also be quite awesome (there's really nothing we want the kernel
> to actually /do/ here; it just adds overhead).
>
I guess what you really want is some shm region between host and guest
that you can use as a ring buffer. Then you could run a timer on the
host side to flush it, or have some sort of callback for when you
urgently need to flush it manually.
The benefit here is that you can actually make use of multiple threads.
There's no need to intercept the guest at all just because it wants to
issue some GL operations.
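
Roughly something like this at the start of the shared region -- just a
sketch, all the names are made up:

  #include <stdint.h>

  /* lives at offset 0 of the shared memory region, visible to both sides */
  struct gl_ring {
      uint32_t head;   /* write index, bumped by the guest (producer) */
      uint32_t tail;   /* read index, bumped by the host (consumer)   */
      uint32_t size;   /* size of data[] in bytes                     */
      uint8_t  data[]; /* serialized GL commands                      */
  };

The guest appends commands and bumps head; a host-side thread (or the
timer) drains everything between tail and head without the guest ever
trapping. Only when the ring is full, or a call actually needs a reply
(glGetError, glFinish, ...), does the guest have to kick the host
explicitly.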
Alex