From: Alexander Gordeev <alexander.gordeev@opensynergy.com>
To: Alexandre Courbot <acourbot@chromium.org>
Cc: "Cornelia Huck" <cohuck@redhat.com>,
virtio-dev@lists.oasis-open.org,
"Keiichi Watanabe" <keiichiw@chromium.org>,
"Alex Bennée" <alex.bennee@linaro.org>,
"Marcin Wojtas" <mwojtas@google.com>,
"Matti Möll" <Matti.Moell@opensynergy.com>,
"Andrew Gazizov" <andrew.gazizov@opensynergy.com>,
"Enrico Granata" <egranata@google.com>,
"Gustavo Padovan" <gustavo.padovan@collabora.com>,
"Peter Griffin" <peter.griffin@linaro.org>,
"Bartłomiej Grzesik" <bag@semihalf.com>,
"Tomasz Figa" <tfiga@chromium.org>,
"Daniel Almeida" <daniel.almeida@collabora.com>,
"Enric Balletbo i Serra" <eballetb@redhat.com>,
"Albert Esteve" <aesteve@redhat.com>
Subject: Re: [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification
Date: Fri, 28 Apr 2023 09:47:46 +0200 [thread overview]
Message-ID: <cbed366a-7f12-8642-18ef-350981279241@opensynergy.com> (raw)
In-Reply-To: <CAPBb6MVRPn6dM8ap9uDvX_ZHdHGvooq+LB7UGDaWBEuUtK3x=A@mail.gmail.com>
On 27.04.23 15:16, Alexandre Courbot wrote:
> On Thu, Apr 27, 2023 at 12:11 AM Alexander Gordeev
> <alexander.gordeev@opensynergy.com> wrote:
>>
>> On 21.04.23 06:02, Alexandre Courbot wrote:
>>> Hi Alexander,
>>>
>>> On Mon, Apr 17, 2023 at 9:52 PM Alexander Gordeev
>>> <alexander.gordeev@opensynergy.com> wrote:
>>>>
>>>> Hi Alexandre,
>>>>
>>>> Thanks for your letter! Sorry, it took me some time to write an answer.
>>>>
>>>> First of all I'd like to describe my perspective a little bit because it
>>>> seems, that in many cases we (and other people writing their feedbacks)
>>>> simply have very different priorities and background.
>>>>
>>>> OpenSynergy, the company that I work for, develops a proprietary
>>>> hypervisor called COQOS mainly for automotive and aerospace domains. We
>>>> have our proprietary device implementations, but overall our goal is to
>>>> bring open standards into these quite closed domains and we're betting
>>>> big on virtio. The idea is to run safety-critical functions like cockpit
>>>> controller alongside with multimedia stuff in different VMs on the same
>>>> physical board. Right now they have it on separate physical devices. So
>>>> they already have maximum isolation. And we're trying to make this
>>>> equally safe on a single board. The benefit is the reduced costs and
>>>> some additional features. Of course, we also need features here, but at
>>>> the same time security and ease of certification are among the top of
>>>> our priorities. Nobody wants cars or planes to have security problems,
>>>> right? Also nobody really needs DVB and even more exotic devices in cars
>>>> and planes AFAIK.
>>>>
>>>> For the above mentioned reasons our COQOS hypervisor is running on bare
>>>> metal. Also memory management for the guests is mostly static. It is
>>>> possible to make a shared memory region between a device and a driver
>>>> managed by device in advance. But definitely no mapping of random host
>>>> pages on the fly is supported.
>>>>
>>>> AFAIU crosvm is about making Chrome OS more secure by putting every app
>>>> in its own virtualized environment, right?
>>>
>>> Not really, but for the discussion here you can assume that it is a
>>> VMM similar to QEmu with KVM enabled.
>>
>> Thanks for the clarification. If my idea about your use-case is not
>> totally correct, then it would be very helpful if you can provide more
>> details about it.
>
> It's nothing fancy; Linux host, Linux (e.g. Android) guests.
Thanks for the explanation.
> But
> virtio being a standard, we should focus on making something that is
> usable by everyone instead of individual use-cases.
In an ideal world this would be possible. But in practice we have
various constraints and priorities. So sometimes it makes sense to
achieve more by digging in different directions instead of arguing
endlessly.
>>>> Also this mode of
>>>> operation is not supported in our hypervisor for reasons mentioned
>>>> above. So in our case this PoC doesn't yet prove anything unfortunately.
>>>
>>> I did not have your use-case in mind while writing the PoC, its
>>> purpose was to demonstrate the suitability of V4L2 as a protocol for
>>> virtualizing video.
>>>
>>> Now if your hypervisor does static memory management and pre-allocates
>>> memory for guest buffers, then the V4L2 MMAP memory type actually
>>> looks like the best fit for the job. There are no tokens like virtio
>>> objects UUID to manage, and the MMAP request can be as simple as
>>> returning the pre-mapped address of the buffer in the guest PAS.
>>>
>>> If instead it carves some predefined amount of memory out for the
>>> whole guest and expects it to allocate buffer memory from there, then
>>> the USERPTR memory type (which works like the guest pages of
>>> virtio-video) is what you want to use.
>>
>> It doesn't look like a good idea to us. This means preconfiguring memory
>> regions in the hypervisor config. It is hard to predict the amount of
>> memory that is necessary. If we allocate too much, this is a waste of
>> memory. If we allocate too little, it won't be enough. Then we don't
>> know yet how to make V4L2 allocate from that memory. Then this memory
>> has to be managed on the host side. And memory management is exactly the
>> thing that causes most security issues, right? So overall this is very
>> tedious, potentially wasteful and not flexible.
>
> My last paragraph mentions that you can also let the guest manage the
> buffer memory from its own RAM. Or maybe I am missing how memory is
> managed on your hypervisor, but if that's the case elaborate on where
> you want the buffer memory to come from.
Hmm, I think what you describe is indeed no different from using guest
memory as it is done in virtio-video; it maps to USERPTR.
There is nothing special about it: the hypervisor allocates memory for
the guest, and the guest is then free to manage it. It can also give
this memory to the host.
It is possible to statically create another memory region that is
managed by the device side and is accessible to the guest. But as I
wrote, this is not what we want, for various reasons.
>>>> Static allocations of shared memory regions are possible.
>>>> But then we have to tell V4L2 to allocate buffers there. Then we'll need
>>>> a region per virtual device. This is just very tedious and inflexible.
>>>> That's why we're mainly interested in having the guest pages sharing in
>>>> the virtio video spec.
>>>
>>> I'll be happy to update the PoC and make it able to use guest pages as
>>> buffer backing memory. It just wasn't the priority to demonstrate the
>>> global approach.
>>
>> Great, thank you. If you have a concrete plan already, I think it could
>> be beneficial to discuss it now. Otherwise I'd prefer to keep working on
>> the current approach until I see something concrete.
>
> Just give me a couple more weeks and I think I can produce the code.
> But I'm afraid you have already made up your mind anyway.
I had indeed, so IMO it is pointless right now. I simply wanted to have
a detailed plan just in case and to make sure that we both understand it
in the same way. If in the end we're forced to compromise, then this has
to be implemented (or reimplemented?).
>>>>>> b. For modern cameras the V4L2 interface is not enough anyway. This
>>>>>> was already discussed AFAIR. There is a separate virtio-camera
>>>>>> specification, that indeed is based on V4L2 UAPI as you said. But
>>>>>> combining these two specs is certainly not future proof, right? So I
>>>>>> think it is best to let the virtio-camera spec to be developed
>>>>>> independently.
>>>>> I don't know if virtio-camera has made progress that they have not
>>>>> published yet, but from what I have seen virtio-v4l2 can cover
>>>>> everything that the currently published driver does (I could not find
>>>>> a specification, but please point me to it if it exists), so there
>>>>> would be no conflict to resolve.
>>>>>
>>>>> V4L2 with requests support should be capable of handling complex
>>>>> camera configurations, but the effort indeed seems to have switched to
>>>>> KCAM when it comes to supporting complex native cameras natively. That
>>>>> being said:
>>>>>
>>>>> * KCAM is not merged yet, is probably not going to be for some time
>>>>> (https://lwn.net/Articles/904776/), and we don't know how we can
>>>>> handle virtualization with it,
>>>>> * The fact that the camera is complex on the host does not mean that
>>>>> all that complexity needs to be exposed to the guest. I don't know how
>>>>> the camera folks want to manage this, but one can imagine that the
>>>>> host could expose a simpler model for the virtual camera, with only
>>>>> the required knobs, while the host takes care of doing all the complex
>>>>> configuration.
>>>>> * The counter argument can be made that simple camera devices do not
>>>>> need a complex virtualization solution, so one can also invoke
>>>>> simplicity here to advocate for virtio-v4l2.
>>>>>
>>>>> My point is not to say that all other camera virtualization efforts
>>>>> should be abandoned - if indeed there is a need for something more
>>>>> specific, then nothing prevents us from having a virtio-camera
>>>>> specification added. However, we are nowhere close to this at the
>>>>> moment, and right now there is no official solution for camera
>>>>> virtualization, so I see no reason to deny the opportunity to support
>>>>> simple camera devices since its cost would just be to add "and cameras
>>>>> device" in the paragraph of the spec that explains what devices are
>>>>> supported.
>>>>
>>>> Well, for reasons described above it still seems perfectly fine to me to
>>>> have separate devices. Ok, the argument, that this approach also seems
>>>> more future-proof, is not a strong one.
>>>
>>> Please elaborate on its weaknesses then.
>>
>> Well, as you said basically. The weakness of the argument is that the
>> virtio-camera is not yet published, the KCAM is not merged yet, so yeah,
>> the future is not clear actually.
>>
>> BTW I just thought about one more case, that is already real: sharing
>> camera streams with pipewire. I think pipewire doesn't provide a V4L2
>> UAPI interface, right?
>
> I believe it does: https://archlinux.org/packages/extra/x86_64/pipewire-v4l2/
Oh, indeed. Thanks for the link!
> This package contains an LD_PRELOAD library that redirects v4l2
> applications to PipeWire.
Well, it is cool that they did this for compatibility. But it doesn't
look clean and straightforward enough to base new development on.
> But in any case, that's irrelevant to the guest-host interface, and I
> think a big part of the disagreement stems from the misconception that
> V4L2 absolutely needs to be used on either the guest or the host,
> which is absolutely not the case.
I understand this, of course. I'm arguing that it is harder to
implement, get right, and then maintain over the years. It also brings
limitations that can sometimes be worked around in the virtio spec, but
always at the cost of decreased readability and increased complexity.
Overall it looks like a clear downgrade compared to virtio-video for our
use-case. And I believe it would be the same for every developer that
has to actually implement the spec, not just do pass-through. So if we
think of V4L2 UAPI pass-through as a compatibility device (which I
believe it is), then it is fine to have both and keep improving
virtio-video, including taking the best ideas from V4L2 and using it as
a reference overall to make writing the driver simpler.
>>>> 2. There is no flexibility to choose whatever way of memory management
>>>> host and guest would like to use. Now the guest user-space application
>>>> selects this.
>>>
>>> Errr no. The guest user-space chooses a type of memory from what the
>>> guest kernel exposes, which depends on what the host itself decides to
>>> expose.
>>
>> I don't agree. If an already written user-space app supports only MMAP,
>> then there is no way to force it to use USERPTR, right? Please correct
>> me if I'm wrong.
>
> The memory types exposed by the guest kernel do not need to match
> those exposed by the hypervisor or that the guest kernel chooses to
> use.
>
> For instance, imagine that the hypervisor does not support allocating
> buffer memory - i.e. it does not support the MMAP memory type. The
> guest will then have to use its own memory for buffer allocation, and
> send them to the host with the USERPTR memory type.
>
> Now if a guest user-space application only supports MMAP, that's not a
> problem at all. Most V4L2 drivers allocate MMAP buffers from regular
> memory. So when the application requests MMAP buffers, the guest
> kernel can honor this request by allocating some memory itself, and
> changes it to USERPTR when passing the request to the hypervisor so it
> knows that guest memory is in use.
>
> I am responsible for this misconception since I insisted on using the
> same memory types (MMAP, USERPTR, DMABUF) as V4L2 for guest/host
> communication, which is misleading. It would probably be less
> confusing to define new types (HOST, GUEST and VIRTIO_OBJ) just for
> virtio-v4l2 and forbid the use of the kernel/user space memory types.
>
> With these new names, I think it is clear that we have the exact
> feature set of virtio-video (guest memory and virtio objects) covered,
> plus another one where we allow the host to perform the allocation
> itself (which may be useful if the video device has its own dedicated
> memory). Again, this is only for host/guest communication. Guest
> kernel/userspace is a different thing and can be implemented in
> different ways depending on what the host supports.
Thank you very much! This is basically the same thing that I called
"rewriting the flow of V4L2 UAPI calls on the fly" earlier; you can find
the quote below in this email. As I wrote there, this already seems
hackish to me.
Ensuring consistency means always remembering in which context a
particular VIDIOC_SOMETHING command is used: is it host-to-guest
interaction or kernel-to-user-space interaction? You're going to have
problems grepping the code. Maybe it can be clearly separated from the
pass-through case, but I'm sure it will still create a lot of confusion.
You went further and suggested redefining the memory types for
host-guest interactions. I agree completely! This is absolutely the
right thing to do; mapping the original V4L2 memory types would be
awkward. At the same time, you now have to remember that there are two
definitions of enum v4l2_memory: one in the spec and one in buffer.html.
And you have to remember that the one from the spec has priority for
host-guest interactions, while the one from buffer.html has priority for
kernel-to-user-space interactions. That's why I call it a workaround.
This reminds me of another benchmark for code quality: the amount of
WTFs per minute. :)
>>>> The latter makes the solution much less flexible IMO. For example, this
>>>> won't work well with our hypervisor. There might be other special needs
>>>> in other use-cases, like sharing these object UUIDs. Probably this can
>>>> be handled by mapping, for example, V4L2_MEMORY_USERPTR to guest-pages
>>>> sharing, V4L2_MEMORY_DMABUF to the UUIDs (which is not quite correct
>>>> IMHO).
>>>
>>> Please elaborate on why this is not correct.
>>
>> Because IMHO UUIDs pointing to memory allocated by virtio-gpu are quite
>> different from dmabufs created in the guest with udmabuf, for example.
>> This can be confusing.
>
> True, and that's another reason to define our own memory types to
> remove that confusion.
Yes, and I'm very happy we agree on this.
>>>> So this already means querying the device for supported sharing
>>>> methods, rewriting the flow of V4L2 UAPI calls on the fly, ensuring
>>>> consistency, etc. This already looks hackish to me. Do you have a better
>>>> plan?
>>>
>>> How do you support different kinds of memory without querying? Or do
>>> you suggest we stick to a single one?
>>>
>>> I am also not quite sure what you mean by "rewriting the flow of V4L2
>>> UAPI calls on the fly". There is no "rewriting" - V4L2 structures are
>>> just used to communicate with the host instead of virtio-video
>>> structures.
>>
>> I'd like to know your ideas or better a concrete plan for enabling
>> user-space apps, that only support MMAP, to work on top of a device,
>> that supports only guest pages sharing.
>
> Hopefully my explanation above clears that.
Yes, thank you.
--
Alexander Gordeev
Senior Software Engineer
OpenSynergy GmbH
Rotherstr. 20, 10245 Berlin
Phone: +49 30 60 98 54 0 - 88
Fax: +49 (30) 60 98 54 0 - 99
EMail: alexander.gordeev@opensynergy.com
www.opensynergy.com
Handelsregister/Commercial Registry: Amtsgericht Charlottenburg, HRB 108616B
Geschäftsführer/Managing Director: Régis Adjamah
Please mind our privacy notice<https://www.opensynergy.com/datenschutzerklaerung/privacy-notice-for-business-partners-pursuant-to-article-13-of-the-general-data-protection-regulation-gdpr/> pursuant to Art. 13 GDPR. // Unsere Hinweise zum Datenschutz gem. Art. 13 DSGVO finden Sie hier.<https://www.opensynergy.com/de/datenschutzerklaerung/datenschutzhinweise-fuer-geschaeftspartner-gem-art-13-dsgvo/>