From: David Hildenbrand <david@redhat.com>
To: Albert Esteve <aesteve@redhat.com>
Cc: qemu-devel@nongnu.org, "Alex Bennée" <alex.bennee@linaro.org>,
slp@redhat.com, hi@alyssa.is, mst@redhat.com,
jasowang@redhat.com, stefanha@redhat.com,
"Stefano Garzarella" <sgarzare@redhat.com>,
stevensd@chromium.org
Subject: Re: [PATCH v3 1/5] vhost-user: Add VIRTIO Shared Memory map request
Date: Wed, 27 Nov 2024 13:17:56 +0100 [thread overview]
Message-ID: <6714e754-8d2d-4751-8b3b-dd7ce469a469@redhat.com> (raw)
In-Reply-To: <44de54b5-8336-4d84-96f0-36f809f48eff@redhat.com>
On 27.11.24 13:10, David Hildenbrand wrote:
> On 27.11.24 11:50, David Hildenbrand wrote:
>>
>>>> RAM memory region/ RAMBlock that has properly set flags/fd/whatssoever
>>>> and map whatever you want in there.
>>>>
>>>> Likely you would need a distinct RAMBlock/RAM memory region per mmap(),
>>>> and would end up mmaping implicitly via qemu_ram_mmap().
>>>>
>>>> Then, your shared region would simply be an empty container into which
>>>> you map these RAM memory regions.
>>>
>>
>> Hi,
>>
>>> Hi, sorry it took me so long to get back to this. Lately I have been
>>> testing the patch and fixing bugs, and I was going to add some
>>> tests to be able to verify the patch without having to use a backend
>>> (which is what I am doing right now).
>>>
>>> But I wanted to address/discuss this comment. I am not sure what the
>>> actual problem with the current approach is (I do not completely
>>> understand the concern in your first paragraph), but I see other
>>> instances where qemu mmaps stuff into a MemoryRegion.
>>
>> I suggest you take a look at the three relevant MAP_FIXED users outside
>> of user emulation code.
>>
>> (1) hw/vfio/helpers.c: We create a custom memory region + RAMBlock with
>> memory_region_init_ram_device_ptr(). This doesn't mmap(MAP_FIXED)
>> into any existing RAMBlock.
>>
>> (2) system/physmem.c: I suggest you take a close look at
>> qemu_ram_remap() and how it is one example of how RAMBlock
>> properties describe exactly what is mmaped.
>>
>> (3) util/mmap-alloc.c: Well, this is the code that performs the mmap(),
>> to bring a RAMBlock to life. See qemu_ram_mmap().
>>
>> There is some oddity in hw/xen/xen-mapcache.c; the XEN mapcache seems
>> to manage guest RAM without RAMBlocks.
>>
>>> Take into account that the
>>> implementation follows the definition of shared memory region here:
>>> https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/virtio-v1.3-csd01.html#x1-10200010
>>> Which hints at a memory region per ID, not one per required mapping.
>>> So the current strategy seems to fit it better.
>>
>> I'm confused, we are talking about an implementation detail here? How is
>> that related to the spec?
>>
>>>
>>> Also, I was aware that I was not the first one attempting this, so I
>>> based this code in previous attempts (maybe I should give credit in
>>> the commit now that I think of):
>>> https://gitlab.com/virtio-fs/qemu/-/blob/qemu5.0-virtiofs-dax/hw/virtio/vhost-user-fs.c?ref_type=heads#L75
>>> As you can see, it pretty much follows the same strategy.
>>
>> So, people did some hacky things in a QEMU fork 6 years ago ... :) That
>> cannot possibly be a good argument why we should have it like that in QEMU.
>>
>>> And in my
>>> examples I have been able to use this to video stream with multiple
>>> queues mapped into the shared memory (used to capture video frames),
>>> using the backend I mentioned above for testing. So the concept works.
>>> I may be wrong about this, but from what I understood looking at the
>>> code, crosvm uses a similar strategy: reserve a memory block, use it
>>> for all your mappings, and use an allocator to find a free slot.
>>>
>>
>> Again, I suggest you take a look at what a RAMBlock is, and how its
>> properties describe the mmap() that backs it.
>>
>>> And if I were to do what you say, those distinct RAMBlocks should be
>>> created when the device starts? What would be their size? Should I
>>> create them when qemu receives a request to mmap? How would the driver
>>> find the RAMBlock?
>>
>> You'd have an empty memory region container into which you will map
>> memory regions that describe the memory you want to share.
>>
>> mr = g_new0(MemoryRegion, 1);
>> memory_region_init(mr, OBJECT(TODO), "vhost-user-shm", region_size);
>>
>>
>> Assuming you are requested to mmap an fd, you'd create a new
>> MemoryRegion+RAMBlock that describes the memory and performs the mmap()
>> for you:
>>
>> map_mr = g_new0(MemoryRegion, 1);
>> memory_region_init_ram_from_fd(map_mr, OBJECT(TODO), "TODO", map_size,
>> RAM_SHARED, map_fd, map_offs, errp);
>>
>> To then map it into your container:
>>
>> memory_region_add_subregion(mr, offset_within_container, map_mr);
>>
>>
>> To unmap, you'd first remove the subregion, to then unref the map_mr.
>>
>>
>>
>> The only alternative would be to do it like (1) above: you perform all
>> of the mmap() yourself, and create it using
>> memory_region_init_ram_device_ptr(). This will set RAM_PREALLOC on the
>> RAMBlock and tell QEMU "this is special, just disregard it". The
>> downside of RAM_PREALLOC is that it will not be compatible with vfio,
>> will not be communicated to other vhost-user devices, etc., whereas
>> what I describe above would just work with them.
>>
>
> FWIW, I took another look at this patch and I cannot really make sense
> of what you are doing.
>
> In virtio_new_shmem_region(), you allocate a region, but I don't see how
> it would ever get initialized?
>
> In vhost_user_backend_handle_shmem_map(), you then assume that it is
> suddenly a RAM memory region? For example, that
> memory_region_get_ram_ptr() returns something reasonable.
>
> Likely, you really just want to initialize that MR using
> memory_region_init_ram_from_fd(), and have that just perform the proper
> mmap() for you and setup the RAMBlock?
In patch #4 I find: memory_region_init_ram_ptr(vdev->shmem_list[i].mr ...
Which does what I describe as the alternative. That makes it clearer
that you are not operating on arbitrary RAMBlocks. So that should indeed
work.
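[Editor's note for archive readers: the mechanics under discussion, a reserved address-space region into which file-backed memory is mapped with MAP_FIXED and later unmapped, as qemu_ram_mmap() and qemu_ram_remap() do internally, can be sketched in plain POSIX. This is an illustrative sketch only, not QEMU code; the function names reserve_region, map_fd_into and unmap_slot are invented for this example.]

```c
/* Minimal POSIX sketch of the reserve/map/unmap lifecycle discussed
 * above. Names are illustrative, not QEMU API. */
#define _GNU_SOURCE
#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Reserve address space without backing it, analogous to the
 * reservation a RAMBlock's mmap occupies. */
static void *reserve_region(size_t size)
{
    return mmap(NULL, size, PROT_NONE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
}

/* Map fd contents over a slot inside the reservation. MAP_FIXED
 * atomically replaces whatever was mapped there before. */
static void *map_fd_into(void *base, size_t slot_offs, int fd,
                         off_t fd_offs, size_t size)
{
    return mmap((char *)base + slot_offs, size, PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_FIXED, fd, fd_offs);
}

/* "Unmap" by restoring an inaccessible anonymous mapping over the
 * slot, keeping the surrounding reservation intact. */
static int unmap_slot(void *base, size_t slot_offs, size_t size)
{
    void *p = mmap((char *)base + slot_offs, size, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE |
                   MAP_FIXED, -1, 0);
    return p == MAP_FAILED ? -1 : 0;
}
```

The point of the RAMBlock discussion is that consumers of the memory map (vfio, other vhost-user devices) need RAMBlock properties that accurately describe what backs each range; a bare MAP_FIXED into an existing RAMBlock hides that information.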
The way you structured these patches really is suboptimal for review.
--
Cheers,
David / dhildenb