From: geoff@hostfission.com
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: RFC: New device for zero-copy VM memory access
Date: Thu, 31 Oct 2019 22:52:53 +1100
Message-ID: <6b6f4e3222fda0945f68e02ae3560ca5@hostfission.com>
In-Reply-To: <88f1c3701740665b0ebe2f24c8ce7ade@hostfission.com>

Another update to this that adds support for unmap notification. The
device has also been renamed to `porthole` and now resides here:

https://github.com/gnif/qemu/blob/master/hw/misc/porthole.c

And here is the updated Linux test client:

https://gist.github.com/gnif/77e7fb54604b42a1a98ecb8bf3d2cf46
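
For anyone who wants a feel for it before reading the test client, the
rough shape of a consumer is below. This is only a sketch: the
`ph_map_msg` layout, the `/tmp/porthole.sock` path, and the
single-segment handshake are illustrative stand-ins, not the actual
wire protocol (the gist above has the real thing).

  #include <stdint.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/mman.h>
  #include <sys/socket.h>
  #include <sys/uio.h>
  #include <sys/un.h>

  /* hypothetical message: one shared segment in the VM's RAM fd */
  struct ph_map_msg {
      uint64_t offset; /* offset into the RAM fd */
      uint64_t size;   /* length of the segment  */
  };

  int main(void)
  {
      /* connect to the unix socket backing the device's chardev */
      int sock = socket(AF_UNIX, SOCK_STREAM, 0);
      struct sockaddr_un sa = { .sun_family = AF_UNIX };
      strncpy(sa.sun_path, "/tmp/porthole.sock", sizeof(sa.sun_path) - 1);
      if (sock < 0 || connect(sock, (struct sockaddr *)&sa, sizeof(sa)) < 0)
          return 1;

      /* the VM system RAM fd arrives as ancillary data (SCM_RIGHTS) */
      struct ph_map_msg msg;
      char ctrl[CMSG_SPACE(sizeof(int))];
      struct iovec iov = { .iov_base = &msg, .iov_len = sizeof(msg) };
      struct msghdr mh = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = ctrl, .msg_controllen = sizeof(ctrl)
      };
      if (recvmsg(sock, &mh, 0) <= 0)
          return 1;

      int ram_fd = -1;
      struct cmsghdr *c = CMSG_FIRSTHDR(&mh);
      if (c && c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_RIGHTS)
          memcpy(&ram_fd, CMSG_DATA(c), sizeof(ram_fd));
      if (ram_fd < 0)
          return 1;

      /* map the guest's buffer straight out of the VM's RAM */
      void *p = mmap(NULL, msg.size, PROT_READ, MAP_SHARED,
                     ram_fd, msg.offset);
      if (p == MAP_FAILED)
          return 1;

      /* ... consume the buffer, e.g. read captured frames ... */

      munmap(p, msg.size);
      close(ram_fd);
      close(sock);
      return 0;
  }

The unmap notification added in this update would, in the same spirit,
be just another message on this socket telling the client to drop its
mapping before the guest releases the pages.
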
-Geoff
On 2019-10-31 13:55, geoff@hostfission.com wrote:
> Hi Dave,
>
> On 2019-10-31 05:52, Dr. David Alan Gilbert wrote:
>> * geoff@hostfission.com (geoff@hostfission.com) wrote:
>>> Hi All,
>>>
>>> Over the past week, I have been working to come up with a solution
>>> to the memory transfer performance issues that hinder the Looking
>>> Glass Project.
>>>
>>> Currently, Looking Glass works by using the IVSHMEM shared memory
>>> device, which is fed by an application that captures the guest's
>>> video output. While this works, it is sub-optimal because we first
>>> have to perform a CPU copy of the captured frame into shared RAM,
>>> and then back out again for display. Because the destination
>>> buffers are allocated by closed proprietary code (DirectX or
>>> NVidia NvFBC), there is no way to have the frame placed directly
>>> into the IVSHMEM shared RAM.
>>>
>>> This new device, currently named `introspection` (which needs a
>>> more suitable name, porthole perhaps?), provides a means of
>>> translating guest physical addresses to host virtual addresses,
>>> and finally to the host offsets in RAM for file-backed memory
>>> guests. It does this by means of a simple protocol over a unix
>>> socket (chardev) which is supplied the appropriate fd for the
>>> VM's system RAM. The guest (in this case, Windows), when presented
>>> with the address of a userspace buffer and size, will mlock the
>>> appropriate pages into RAM and pass guest physical addresses to
>>> the virtual device.
>>
>> Hi Geoffrey,
>> I wonder if the same thing can be done by using the existing
>> vhost-user mechanism.
>>
>> vhost-user is intended for implementing a virtio device outside of
>> the qemu process; so it has a character device that qemu passes
>> commands down to the other process, where qemu mostly passes
>> commands via the virtio queues. To be able to read the virtio
>> queues, the external process mmap's the same memory as the guest -
>> it gets passed a 'set mem table' command by qemu that includes fd's
>> for the RAM, and includes base/offset pairs saying that a
>> particular chunk of RAM is mapped at a particular guest physical
>> address.
>>
>> Whether or not you make use of virtio queues, I think the mechanism
>> for the device to tell the external process the mappings might be
>> what you're after.
>>
>> Dave
>>
>
> While normally I would be all for re-using such code, vhost-user,
> while very feature-complete from what I understand, is overkill for
> our requirements. It would still allocate a communication ring and
> an event system that we will not be using. The goal of this device
> is to provide a dumb and simple method of sharing system RAM, both
> for this project and for others that work on a simple polling
> mechanism; it is not intended to be an end-to-end solution like
> vhost-user is.
>
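> For reference, the piece of vhost-user that covers this use case is
> just the 'set mem table' payload; from qemu's vhost-user code it is
> roughly the following (a sketch for illustration, the spec is
> authoritative):
>
>   typedef struct VhostUserMemoryRegion {
>       uint64_t guest_phys_addr; /* where the chunk sits in the guest */
>       uint64_t memory_size;     /* length of the chunk               */
>       uint64_t userspace_addr;  /* qemu's own mapping of it          */
>       uint64_t mmap_offset;     /* offset into the fd that is passed */
>   } VhostUserMemoryRegion;
>
>   typedef struct VhostUserMemory {
>       uint32_t nregions;                /* regions that follow       */
>       uint32_t padding;
>       VhostUserMemoryRegion regions[8]; /* VHOST_MEMORY_MAX_NREGIONS;
>                                            fds arrive via SCM_RIGHTS */
>   } VhostUserMemory;
>
> Everything beyond this - the rings, the event fds, the feature
> negotiation - is the part we would not use.
>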
> If you still believe that vhost-user should be used, I will do what
> I can to implement it, but for such a simple device I honestly
> believe it is overkill.
>
> -Geoff
>
>>> This device and the Windows driver have been designed in such a
>>> way that it's a utility device for any project and/or application
>>> that could make use of it. The PCI subsystem vendor and device IDs
>>> are used to provide a means of device identification in cases
>>> where multiple devices may be in use for differing applications.
>>> This also allows one common driver to be used for any other
>>> projects wishing to build on this device.
>>>
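>>> For example, on Linux an application could pick out its own
>>> instance of the device by checking the subsystem IDs via sysfs (a
>>> sketch; the IDs below are placeholders, not the real ones):
>>>
>>>   #include <stdio.h>
>>>
>>>   /* read a hex attribute such as "0x1af4" from sysfs */
>>>   static unsigned read_id(const char *dev, const char *attr)
>>>   {
>>>       char path[256];
>>>       unsigned val = 0;
>>>       FILE *f;
>>>
>>>       snprintf(path, sizeof(path),
>>>                "/sys/bus/pci/devices/%s/%s", dev, attr);
>>>       f = fopen(path, "r");
>>>       if (f) {
>>>           if (fscanf(f, "%x", &val) != 1)
>>>               val = 0;
>>>           fclose(f);
>>>       }
>>>       return val;
>>>   }
>>>
>>>   /* a device is ours if both subsystem IDs match our application */
>>>   static int is_ours(const char *dev)
>>>   {
>>>       return read_id(dev, "subsystem_vendor") == 0x1234 && /* placeholder */
>>>              read_id(dev, "subsystem_device") == 0x0001;   /* placeholder */
>>>   }
>>>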
>>> My ultimate goal is to get this to a state where it could be
>>> accepted upstream into Qemu, at which point Looking Glass would be
>>> modified to use it instead of the IVSHMEM device.
>>>
>>> My git repository with the new device can be found at:
>>> https://github.com/gnif/qemu
>>>
>>> The new device is:
>>> https://github.com/gnif/qemu/blob/master/hw/misc/introspection.c
>>>
>>> Looking Glass:
>>> https://looking-glass.hostfission.com/
>>>
>>> The Windows driver, while working, needs some cleanup before the
>>> source is published. I intend to maintain both this device and the
>>> Windows driver, including producing a signed Windows 10 driver if
>>> Red Hat are unwilling or unable.
>>>
>>> Kind Regards,
>>> Geoffrey McRae
>>>
>>> HostFission
>>> https://hostfission.com
>>>
>> --
>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK