* [Qemu-devel] gpu and console chicken and egg
@ 2013-12-04  7:02 Dave Airlie
  2013-12-04  8:23 ` Gerd Hoffmann
From: Dave Airlie @ 2013-12-04  7:02 UTC (permalink / raw)
  To: qemu-devel@nongnu.org

So I've hit a bit of an init ordering issue that I'm not sure how best to solve.

Just some background:
In order for the virt GPU and the UI layer (SDL or GTK etc) to
interact properly over OpenGL use, I have created an OpenGL provider
in the console, and the UI layer can register callbacks for a single
GL provider (only one makes sense really) when it starts up. This is
mainly to be used for context management and swap buffers management.

Now in the virtio GPU I was going to use a virtio feature to say
whether the qemu hw can support the 3D renderer, dependent on whether
it was linked with the virgl renderer and whether the current UI was
GL capable.

I also have the virtio gpu code checking in its update_display
callback whether the first console has acquired a GL backend.

Now the problem:
The virtio hw is initialised before the console layers. So the feature
bits are all set at that point, before the UI ever registers a GL
interface layer. So is there a method to modify the advertised feature
bits later in the setup sequence before the guest is started? Can I
call something from the update display callback?

Otherwise I was thinking I would need something in my config space on
top of feature bits to say the hw is actually 3d capable.
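
Purely as illustration (none of these names are real, just to show what
I mean), something like:

  /* hypothetical sketch only -- not an actual virtio-gpu config layout.
   * A flag in config space is filled in by the device's get_config
   * callback, i.e. whenever the guest reads it, so it can reflect
   * whether the UI ended up registering a GL provider. */
  #define VIRTIO_GPU_FLAG_3D_CAPABLE (1u << 0)

  struct virtio_gpu_config {
      uint32_t flags;       /* VIRTIO_GPU_FLAG_* */
  };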

Dave.


* Re: [Qemu-devel] gpu and console chicken and egg
  2013-12-04  7:02 [Qemu-devel] gpu and console chicken and egg Dave Airlie
@ 2013-12-04  8:23 ` Gerd Hoffmann
  2013-12-04 21:44   ` Dave Airlie
From: Gerd Hoffmann @ 2013-12-04  8:23 UTC (permalink / raw)
  To: Dave Airlie; +Cc: qemu-devel@nongnu.org

On Mi, 2013-12-04 at 17:02 +1000, Dave Airlie wrote:
> So I've hit a bit of an init ordering issue that I'm not sure how best to solve.
> 
> Just some background:
> In order for the virt GPU and the UI layer (SDL or GTK etc) to
> interact properly over OpenGL use, I have created an OpenGL provider
> in the console, and the UI layer can register callbacks for a single
> GL provider (only one makes sense really) when it starts up. This is
> mainly to be used for context management and swap buffers management.
> 
> Now in the virtio GPU I was going to use a virtio feature to say
> whether the qemu hw can support the 3D renderer, dependent on whether
> it was linked with the virgl renderer and whether the current UI was
> GL capable.

Hmm, why does it depend on the UI?  Wasn't the plan to render into a
dma-buf no matter what?  Then either read the rendered result from the
dmabuf (non-gl UI like vnc) or let the (gl-capable) UI pass the dma-buf
to the compositor?

Also note that the virtio-gpu gl-capability needs to be configurable for
live migration reasons, so you can migrate between hosts with different
3d capabilities.  Something like -device
virtio-gpu,gl={none,gl2,gl3,host} where "none" turns it off, "gl
$version" specifies the gl support level and "host" makes it depend on
the host capabilities (similar to -cpu host).  For starters you can
leave out "host" and depend on the user setting this.
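
On the qemu side that could look roughly like this (sketch only;
VirtIOGPU and conf.gl are placeholder names, only the DEFINE_PROP_*
macros are the stock qdev helpers):

  static Property virtio_gpu_properties[] = {
      /* accepts "none", "gl2", "gl3" or "host" */
      DEFINE_PROP_STRING("gl", VirtIOGPU, conf.gl),
      DEFINE_PROP_END_OF_LIST(),
  };

and on the command line:

  qemu-system-x86_64 ... -device virtio-gpu,gl=none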

> So is there a method to modify the advertised feature
> bits later in the setup sequence before the guest is started?

You can register a callback to be notified when the guest is
started/stopped (qemu_add_vm_change_state_handler).  That could be used
although it is a bit hackish.
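
Something along these lines (sketch only; the virtio_gpu_* names are
placeholders, qemu_add_vm_change_state_handler() from sysemu/sysemu.h
is the existing helper):

  static void virtio_gpu_vm_state_change(void *opaque, int running,
                                         RunState state)
  {
      VirtIOGPU *g = opaque;               /* placeholder device state */

      if (running) {
          /* guest is about to run: last point where the advertised
           * bits could still be fixed up, based on whether the UI
           * registered a GL provider */
          virtio_gpu_fixup_3d_feature(g);  /* placeholder */
      }
  }

  /* in device init: */
  qemu_add_vm_change_state_handler(virtio_gpu_vm_state_change, g);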

cheers,
  Gerd

PS: Now that 1.7 is out of the door and 2.0 tree is open for development
    we should start getting the bits which are ready merged to make your
    patch backlog smaller.  SDL2 would be a good start I think.


* Re: [Qemu-devel] gpu and console chicken and egg
  2013-12-04  8:23 ` Gerd Hoffmann
@ 2013-12-04 21:44   ` Dave Airlie
  2013-12-05  8:52     ` Gerd Hoffmann
From: Dave Airlie @ 2013-12-04 21:44 UTC (permalink / raw)
  To: Gerd Hoffmann; +Cc: qemu-devel@nongnu.org

On Wed, Dec 4, 2013 at 6:23 PM, Gerd Hoffmann <kraxel@redhat.com> wrote:
> On Mi, 2013-12-04 at 17:02 +1000, Dave Airlie wrote:
>> So I've hit a bit of an init ordering issue that I'm not sure how best to solve.
>>
>> Just some background:
>> In order for the virt GPU and the UI layer (SDL or GTK etc) to
>> interact properly over OpenGL use, I have created an OpenGL provider
>> in the console, and the UI layer can register callbacks for a single
>> GL provider (only one makes sense really) when it starts up. This is
>> mainly to be used for context management and swap buffers management.
>>
>> Now in the virtio GPU I was going to use a virtio feature to say
>> whether the qemu hw can support the 3D renderer, dependent on whether
>> it was linked with the virgl renderer and whether the current UI was
>> GL capable.
>
> Hmm, why does it depend on the UI?  Wasn't the plan to render into a
> dma-buf no matter what?  Then either read the rendered result from the
> dmabuf (non-gl UI like vnc) or let the (gl-capable) UI pass the dma-buf
> to the compositor?

That would be the hopeful plan, however so far my brief investigation suggests
I'm possibly being a bit naive about what EGL can do. I'm still talking to the
EGL and wayland people about how best to model this, but either way
this won't work with nvidia drivers, which is a case we need to handle, so
we need to interact between the UI GL usage and the renderer. Also
non-Linux platforms would want this in some way I'd assume, at least
so virtio-gpu is usable with qemu on them.

I've started looking at how to integrate GL with the gtk frontend as well.

> Also note that the virtio-gpu gl-capability needs to be configurable for
> live migration reasons, so you can migrate between hosts with different
> 3d capabilities.  Something like -device
> virtio-gpu,gl={none,gl2,gl3,host} where "none" turns it off, "gl
> $version" specifies the gl support level and "host" makes it depend on
> the host capabilities (similar to -cpu host).  For starters you can
> leave out "host" and depend on the user setting this.

GL isn't that simple, and I'm not sure I can make it that simple unfortunately;
the renderer requires certain extensions on top of the base GL 2.1 and GL 3.0.
Live migration with none might be the first answer, and then we'd have to expend
serious effort on making live migration work for any sort of different
GL drivers. Reading everything back while rendering continues could be a lot of
fun (or pain).

>> So is there a method to modify the advertised feature
>> bits later in the setup sequence before the guest is started?
>
> You can register a callback to be notified when the guest is
> started/stopped (qemu_add_vm_change_state_handler).  That could be used
> although it is a bit hackish.

I don't think this will let me change the feature bits, though, since the virtio
PCI layer has already picked them up. I just wondered if we have any
examples of changing features later.

>
> PS: Now that 1.7 is out of the door and 2.0 tree is open for development
>     we should start getting the bits which are ready merged to make your
>     patch backlog smaller.  SDL2 would be a good start I think.
>

I should probably resubmit the multi-head changes and SDL2 changes and
we should look at merging them first. The input thing is kind of up in the
air; we could probably just default to using hints from the video setup,
and move towards having an agent do it properly in the guest.

I've just spent a week reintegrating virtio-gpu with the 3D renderer so I can
make sure I haven't backed myself into a corner. It kinda leaves 3 major things
outstanding:

a) dma-buf/EGL, EGLimage vs EGLstream: nothing exists upstream, so the
timeframe is unknown. I don't think we should block merging on this; also,
dma-buf doesn't exist on Windows/MacOSX, so qemu there should still get
virtio-gpu available.

b) dataplane - I'd really like this; the renderer is a lot slower when it's not
in a thread, and it looks bad on benchmarks. I expect other stuff needs to
happen before this.

c) GTK multi-head + GL support - I'd like the GTK UI to be capable of
multi-head as well. My first attempt moved a lot of code around, and I'm not
really sure what the secondary head windows should contain vs the primary
head.

Dave.


* Re: [Qemu-devel] gpu and console chicken and egg
  2013-12-04 21:44   ` Dave Airlie
@ 2013-12-05  8:52     ` Gerd Hoffmann
  2013-12-06  2:24       ` Dave Airlie
From: Gerd Hoffmann @ 2013-12-05  8:52 UTC (permalink / raw)
  To: Dave Airlie; +Cc: qemu-devel@nongnu.org

  Hi,

> > Hmm, why does it depend on the UI?  Wasn't the plan to render into a
> > dma-buf no matter what?  Then either read the rendered result from the
> > dmabuf (non-gl UI like vnc) or let the (gl-capable) UI pass the dma-buf
> > to the compositor?
> 
> That would be the hopeful plan, however so far my brief investigation suggests
> I'm possibly being a bit naive about what EGL can do. I'm still talking to the
> EGL and wayland people about how best to model this, but either way
> this won't work with nvidia drivers, which is a case we need to handle, so
> we need to interact between the UI GL usage and the renderer.

Hmm.  That implies we simply can't combine hardware-accelerated 3d
rendering with vnc, correct?

> Also
> non-Linux platforms would want this in some way I'd assume, at least
> so virtio-gpu is usable with qemu on them.

Yes, the non-3d part should have no linux dependency and should be
available on all platforms.

> GL isn't that simple, and I'm not sure I can make it that simple unfortunately;
> the renderer requires certain extensions on top of the base GL 2.1 and GL 3.0.
> Live migration with none might be the first answer, and then we'd have to expend
> serious effort on making live migration work for any sort of different
> GL drivers. Reading everything back while rendering continues could be a lot of
> fun (or pain).

We probably want to start with gl={none,host} then.  Live migration only
supported with "none".

If we can't combine remote displays with 3d rendering (nvidia issue
above) live migration with 3d makes little sense anyway.

> I don't think this will let me change the feature bits, though, since the virtio
> PCI layer has already picked them up. I just wondered if we have any
> examples of changing features later.

I think you can.  There are no helper functions for it though, you
probably have to walk the data structures and fiddle with the bits
directly.

Maybe it is easier to just have a command line option to enable/disable
3d globally, and a global variable with the 3d status.  Being able to
turn off all 3d is probably useful anyway.  Either as standalone option
or as display option (i.e. -display sdl,3d={on,off,auto}).  Then do a
simple check for 3d availability when *parsing* the options.  That'll
also remove the need for the 3d option for virtio-gpu, it can just check
the global flag instead.
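
I.e. something like this (sketch; display_3d and sdl_gl_probe() are
made-up names):

  /* global 3d switch, set once while parsing -display sdl,3d={on,off,auto} */
  bool display_3d;

  if (strcmp(value, "on") == 0) {
      display_3d = true;               /* error out if the UI can't do GL */
  } else if (strcmp(value, "off") == 0) {
      display_3d = false;
  } else {                             /* "auto" */
      display_3d = sdl_gl_probe();     /* made-up capability check */
  }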

> I should probably resubmit the multi-head changes and SDL2 changes and
> we should look at merging them first.

Yes.

> a) dma-buf/EGL, EGLimage vs EGLstream: nothing exists upstream, so the
> timeframe is unknown. I don't think we should block merging on this; also,
> dma-buf doesn't exist on Windows/MacOSX, so qemu there should still get
> virtio-gpu available.

Yes.  Merging virtio-gpu with 2d should not wait for 3d to be finally
sorted.  3d is too much of a moving target still.

> c) GTK multi-head + GL support - I'd like the GTK UI to be capable of
> multi-head as well. My first attempt moved a lot of code around, and I'm not
> really sure what the secondary head windows should contain vs the primary
> head.

Yes, the multihead UI design is the tricky part here.  I'd say don't try
to make the first draft too fancy.  I expect we will have quite some
discussions on that topic.

cheers,
  Gerd


* Re: [Qemu-devel] gpu and console chicken and egg
  2013-12-05  8:52     ` Gerd Hoffmann
@ 2013-12-06  2:24       ` Dave Airlie
From: Dave Airlie @ 2013-12-06  2:24 UTC (permalink / raw)
  To: Gerd Hoffmann; +Cc: qemu-devel@nongnu.org

On Thu, Dec 5, 2013 at 6:52 PM, Gerd Hoffmann <kraxel@redhat.com> wrote:
>   Hi,
>
>> > Hmm, why does it depend on the UI?  Wasn't the plan to render into a
>> > dma-buf no matter what?  Then either read the rendered result from the
>> > dmabuf (non-gl UI like vnc) or let the (gl-capable) UI pass the dma-buf
>> > to the compositor?
>>
>> That would be the hopeful plan, however so far my brief investigation suggests
>> I'm possibly being a bit naive about what EGL can do. I'm still talking to the
>> EGL and wayland people about how best to model this, but either way
>> this won't work with nvidia drivers, which is a case we need to handle, so
>> we need to interact between the UI GL usage and the renderer.
>
> Hmm.  That implies we simply can't combine hardware-accelerated 3d
> rendering with vnc, correct?

For SDL + spice/vnc I've added a readback capability to the renderer,
and hooked things up so that if there is > 1 DisplayChangeListener it'll
do readbacks and keep the surface updated. This slows things down,
but it does work.

But yes, it means we can't just run the qemu process in its sandbox
without a connection to the X server for it to do GL rendering, or
without using SDL.

I don't think we should block merging the initial code on this; it was
always a big problem on its own that needed solving.

>> GL isn't that simple, and I'm not sure I can make it that simple unfortunately;
>> the renderer requires certain extensions on top of the base GL 2.1 and GL 3.0.
>> Live migration with none might be the first answer, and then we'd have to expend
>> serious effort on making live migration work for any sort of different
>> GL drivers. Reading everything back while rendering continues could be a lot of
>> fun (or pain).
>
> We probably want to start with gl={none,host} then.  Live migration only
> supported with "none".
>
> If we can't combine remote displays with 3d rendering (nvidia issue
> above) live migration with 3d makes little sense anyway.

Well we can, we just can't do it without also having a local display
connection, but yes it does limit the migration capabilities quite a
lot!

>> I don't think this will let me change the feature bits, though, since the virtio
>> PCI layer has already picked them up. I just wondered if we have any
>> examples of changing features later.
>
> I think you can.  There are no helper functions for it though, you
> probably have to walk the data structures and fiddle with the bits
> directly.
>
> Maybe it is easier to just have a command line option to enable/disable
> 3d globally, and a global variable with the 3d status.  Being able to
> turn off all 3d is probably useful anyway.  Either as standalone option
> or as display option (i.e. -display sdl,3d={on,off,auto}).  Then do a
> simple check for 3d availability when *parsing* the options.  That'll
> also remove the need for the 3d option for virtio-gpu, it can just check
> the global flag instead.

Ah yes that might work, and just fail if we request 3D but can't fulfil it.
>
>> I should probably resubmit the multi-head changes and SDL2 changes and
>> we should look at merging them first.
>

I've got some outstanding things to redo on the virtio-gpu/vga bits, then I'll
resubmit the sdl2 and unaccelerated virtio-gpu bits.

Dave.

