From: Julia Zhang <julia.zhang@amd.com>
To: Gerd Hoffmann <kraxel@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	 Stefano Stabellini <sstabellini@kernel.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Antonio Caggiano <antonio.caggiano@collabora.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Robert Beckett <bob.beckett@collabora.com>,
	<qemu-devel@nongnu.org>
Cc: xen-devel@lists.xenproject.org,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Christian König" <christian.koenig@amd.com>,
	"Xenia Ragiadakou" <burzalodowa@gmail.com>,
	"Honglei Huang" <honglei1.huang@amd.com>,
	"Julia Zhang" <julia.zhang@amd.com>,
	"Chen Jiqian" <Jiqian.Chen@amd.com>
Subject: [PATCH v2 0/1]  Implementation of resource_query_layout
Date: Thu, 21 Dec 2023 18:23:41 +0800	[thread overview]
Message-ID: <20231221102342.4022630-1-julia.zhang@amd.com> (raw)

Hi all,

Sorry for the late reply. This is v2 of the implementation of
resource_query_layout. It adds a new ioctl that lets the guest query
information about a host resource; the original implementation is from
Daniel Stone. We add some changes so that the correct stride of a host
resource can be queried before the resource is created, which is needed
for the dGPU PRIME feature. Without the correct stride, the dGPU cannot
blit data to the virtio iGPU correctly.
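
For readers who want a concrete picture, the new command roughly takes
the shape sketched below. The struct names, fields and command/response
values here are illustrative placeholders, not the exact definitions in
patch 1/1:

	/* Illustrative sketch only: the kind of additions patch 1/1 makes to
	 * include/standard-headers/linux/virtio_gpu.h.  Names, fields and the
	 * command/response values are placeholders, not the actual patch. */
	#include "standard-headers/linux/virtio_gpu.h"  /* struct virtio_gpu_ctrl_hdr */

	#define VIRTIO_GPU_CMD_RESOURCE_QUERY_LAYOUT  0x0210  /* placeholder */
	#define VIRTIO_GPU_RESP_OK_RESOURCE_LAYOUT    0x1110  /* placeholder */

	/* guest -> host: describe the resource, or pass the creation
	 * parameters to ask about a resource before it is created */
	struct virtio_gpu_resource_query_layout {
		struct virtio_gpu_ctrl_hdr hdr;
		uint32_t resource_id;
		uint32_t width;
		uint32_t height;
		uint32_t format;
		uint32_t bind;
	};

	/* host -> guest: how the host actually laid the buffer out */
	struct virtio_gpu_resp_resource_layout {
		struct virtio_gpu_ctrl_hdr hdr;
		uint64_t modifier;
		uint32_t num_planes;
		uint32_t padding;
		struct {
			uint64_t offset;
			uint32_t stride;
			uint32_t padding;
		} planes[4];
	};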

Changes from v1 to v2:
- Squash the two patches into a single patch.
- A small modification to VIRTIO_GPU_F_RESOURCE_QUERY_LAYOUT.


Below is the description of v1:
This adds an implementation of resource_query_layout to get information
about how the host has actually allocated a buffer. It is currently used
to query the stride of a guest linear resource for dGPU PRIME on guest
VMs.
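
For illustration, a guest-side consumer would use the query roughly as
sketched below; the ioctl number, struct layout and names are
hypothetical placeholders, not the uapi that the kernel series actually
defines:

	/* Hypothetical user-space sketch: query the stride the host would use
	 * for a linear resource, so the guest buffer can be created with a
	 * matching stride and the dGPU can blit into it.  The ioctl and struct
	 * below are placeholders, not the real virtio-gpu DRM uapi. */
	#include <stdint.h>
	#include <sys/ioctl.h>

	struct drm_virtgpu_query_layout {       /* hypothetical uapi struct */
		uint32_t width, height, format, bind;
		uint32_t stride;                /* filled in by the host */
		uint32_t offset;
	};

	/* placeholder ioctl number */
	#define DRM_IOCTL_VIRTGPU_QUERY_LAYOUT \
		_IOWR('d', 0x50, struct drm_virtgpu_query_layout)

	static int query_host_stride(int drm_fd, uint32_t width, uint32_t height,
				     uint32_t format, uint32_t bind,
				     uint32_t *stride)
	{
		struct drm_virtgpu_query_layout args = {
			.width = width, .height = height,
			.format = format, .bind = bind,
		};

		if (ioctl(drm_fd, DRM_IOCTL_VIRTGPU_QUERY_LAYOUT, &args) < 0)
			return -1;

		*stride = args.stride;  /* use when allocating the linear BO */
		return 0;
	}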

v1 of qemu side:
https://lore.kernel.org/qemu-devel/20231110074027.24862-1-julia.zhang@amd.com/T/#t

v1 of kernel side:
https://lore.kernel.org/xen-devel/20231110074027.24862-1-julia.zhang@amd.com/T/#t

Daniel Stone (1):
  virgl: Implement resource_query_layout

 hw/display/virtio-gpu-base.c                |  4 +++
 hw/display/virtio-gpu-virgl.c               | 40 +++++++++++++++++++++
 include/hw/virtio/virtio-gpu-bswap.h        |  7 ++++
 include/standard-headers/linux/virtio_gpu.h | 30 ++++++++++++++++
 meson.build                                 |  4 +++
 5 files changed, 85 insertions(+)
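
For context on the diffstat, the device-side handling in
hw/display/virtio-gpu-virgl.c follows the existing virgl command
pattern. The sketch below is illustrative only, and
virgl_renderer_resource_query_layout() is an assumed virglrenderer
entry point rather than a confirmed API:

    /* Illustrative sketch only, not the actual patch.  Follows the pattern
     * of the other handlers in hw/display/virtio-gpu-virgl.c. */
    static void virgl_cmd_resource_query_layout(VirtIOGPU *g,
                                                struct virtio_gpu_ctrl_command *cmd)
    {
        struct virtio_gpu_resource_query_layout qlayout;  /* request  */
        struct virtio_gpu_resp_resource_layout resp;      /* response */
        uint32_t stride = 0, offset = 0;

        VIRTIO_GPU_FILL_CMD(qlayout);
        /* a byte-swap helper for the new request would live in
         * include/hw/virtio/virtio-gpu-bswap.h */

        memset(&resp, 0, sizeof(resp));
        /* Assumed virglrenderer helper; name and signature are placeholders. */
        virgl_renderer_resource_query_layout(qlayout.resource_id, qlayout.width,
                                             qlayout.height, qlayout.format,
                                             qlayout.bind, &stride, &offset);
        resp.num_planes = 1;
        resp.planes[0].stride = stride;
        resp.planes[0].offset = offset;
        resp.hdr.type = VIRTIO_GPU_RESP_OK_RESOURCE_LAYOUT;  /* placeholder */
        virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
    }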

-- 
2.34.1



Thread overview: 3+ messages
2023-12-21 10:23 Julia Zhang [this message]
2023-12-21 10:23 ` [PATCH v2 1/1] virgl: Implement resource_query_layout Julia Zhang
2024-01-02 13:39   ` Marc-André Lureau
