* [PATCH v2 1/1] drm/virtio: Implement device_attach
@ 2024-01-29 10:31 Julia Zhang
  2024-01-29 13:12 ` Christian König
  2024-01-30 11:10 ` Daniel Vetter
  0 siblings, 2 replies; 6+ messages in thread
From: Julia Zhang @ 2024-01-29 10:31 UTC (permalink / raw)
  To: Gurchetan Singh, Chia-I Wu, David Airlie, Gerd Hoffmann,
	linux-kernel, dri-devel, amd-gfx, virtualization
  Cc: Alex Deucher, Christian König, Daniel Vetter, David Airlie,
	Erik Faye-Lund, Marek Olšák, Pierre-Eric Pelloux-Prayer,
	Honglei Huang, Chen Jiqian, Huang Rui, Julia Zhang

Vram objects don't have backing pages and thus can't implement the
drm_gem_object_funcs.get_sg_table callback. Remove the generic drm
dma-buf callbacks from virtgpu_gem_map_dma_buf()/virtgpu_gem_unmap_dma_buf()
and implement virtgpu-specific map/unmap/attach callbacks that support
both shmem objects and vram objects.

Signed-off-by: Julia Zhang <julia.zhang@amd.com>
---
 drivers/gpu/drm/virtio/virtgpu_prime.c | 40 +++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 44425f20d91a..b490a5343b06 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -49,11 +49,26 @@ virtgpu_gem_map_dma_buf(struct dma_buf_attachment *attach,
 {
 	struct drm_gem_object *obj = attach->dmabuf->priv;
 	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct sg_table *sgt;
+	int ret;

 	if (virtio_gpu_is_vram(bo))
 		return virtio_gpu_vram_map_dma_buf(bo, attach->dev, dir);

-	return drm_gem_map_dma_buf(attach, dir);
+	sgt = drm_prime_pages_to_sg(obj->dev,
+				    to_drm_gem_shmem_obj(obj)->pages,
+				    obj->size >> PAGE_SHIFT);
+	if (IS_ERR(sgt))
+		return sgt;
+
+	ret = dma_map_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC);
+	if (ret) {
+		sg_free_table(sgt);
+		kfree(sgt);
+		return ERR_PTR(ret);
+	}
+
+	return sgt;
 }

 static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
@@ -63,12 +78,29 @@ static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
 	struct drm_gem_object *obj = attach->dmabuf->priv;
 	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);

+	if (!sgt)
+		return;
+
 	if (virtio_gpu_is_vram(bo)) {
 		virtio_gpu_vram_unmap_dma_buf(attach->dev, sgt, dir);
-		return;
+	} else {
+		dma_unmap_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC);
+		sg_free_table(sgt);
+		kfree(sgt);
 	}
+}
+
+static int virtgpu_gem_device_attach(struct dma_buf *dma_buf,
+				     struct dma_buf_attachment *attach)
+{
+	struct drm_gem_object *obj = attach->dmabuf->priv;
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	int ret = 0;
+
+	if (!virtio_gpu_is_vram(bo) && obj->funcs->pin)
+		ret = obj->funcs->pin(obj);

-	drm_gem_unmap_dma_buf(attach, sgt, dir);
+	return ret;
 }

 static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = {
@@ -83,7 +115,7 @@ static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = {
 		.vmap = drm_gem_dmabuf_vmap,
 		.vunmap = drm_gem_dmabuf_vunmap,
 	},
-	.device_attach = drm_gem_map_attach,
+	.device_attach = virtgpu_gem_device_attach,
 	.get_uuid = virtgpu_virtio_get_uuid,
 };
--
2.34.1

^ permalink raw reply related	[flat|nested] 6+ messages in thread
* Re: [PATCH v2 1/1] drm/virtio: Implement device_attach
  2024-01-29 10:31 [PATCH v2 1/1] drm/virtio: Implement device_attach Julia Zhang
@ 2024-01-29 13:12 ` Christian König
  2024-01-30 11:10 ` Daniel Vetter
  1 sibling, 0 replies; 6+ messages in thread
From: Christian König @ 2024-01-29 13:12 UTC (permalink / raw)
  To: Julia Zhang, Gurchetan Singh, Chia-I Wu, David Airlie,
	Gerd Hoffmann, linux-kernel, dri-devel, amd-gfx, virtualization
  Cc: Alex Deucher, Daniel Vetter, David Airlie, Erik Faye-Lund,
	Marek Olšák, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Chen Jiqian, Huang Rui

Am 29.01.24 um 11:31 schrieb Julia Zhang:
> As vram objects don't have backing pages and thus can't implement
> drm_gem_object_funcs.get_sg_table callback. This removes drm dma-buf
> callbacks in virtgpu_gem_map_dma_buf()/virtgpu_gem_unmap_dma_buf()
> and implement virtgpu specific map/unmap/attach callbacks to support
> both of shmem objects and vram objects.
>
> Signed-off-by: Julia Zhang <julia.zhang@amd.com>

I need to find more time to look into the code, but off hand I would say
that this is the correct solution.

Regards,
Christian.
> --- > drivers/gpu/drm/virtio/virtgpu_prime.c | 40 +++++++++++++++++++++++--- > 1 file changed, 36 insertions(+), 4 deletions(-) > > diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c > index 44425f20d91a..b490a5343b06 100644 > --- a/drivers/gpu/drm/virtio/virtgpu_prime.c > +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c > @@ -49,11 +49,26 @@ virtgpu_gem_map_dma_buf(struct dma_buf_attachment *attach, > { > struct drm_gem_object *obj = attach->dmabuf->priv; > struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); > + struct sg_table *sgt; > + int ret; > > if (virtio_gpu_is_vram(bo)) > return virtio_gpu_vram_map_dma_buf(bo, attach->dev, dir); > > - return drm_gem_map_dma_buf(attach, dir); > + sgt = drm_prime_pages_to_sg(obj->dev, > + to_drm_gem_shmem_obj(obj)->pages, > + obj->size >> PAGE_SHIFT); > + if (IS_ERR(sgt)) > + return sgt; > + > + ret = dma_map_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); > + if (ret) { > + sg_free_table(sgt); > + kfree(sgt); > + return ERR_PTR(ret); > + } > + > + return sgt; > } > > static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, > @@ -63,12 +78,29 @@ static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, > struct drm_gem_object *obj = attach->dmabuf->priv; > struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); > > + if (!sgt) > + return; > + > if (virtio_gpu_is_vram(bo)) { > virtio_gpu_vram_unmap_dma_buf(attach->dev, sgt, dir); > - return; > + } else { > + dma_unmap_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); > + sg_free_table(sgt); > + kfree(sgt); > } > +} > + > +static int virtgpu_gem_device_attach(struct dma_buf *dma_buf, > + struct dma_buf_attachment *attach) > +{ > + struct drm_gem_object *obj = attach->dmabuf->priv; > + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); > + int ret = 0; > + > + if (!virtio_gpu_is_vram(bo) && obj->funcs->pin) > + ret = obj->funcs->pin(obj); > > - drm_gem_unmap_dma_buf(attach, 
sgt, dir); > + return ret; > } > > static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = { > @@ -83,7 +115,7 @@ static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = { > .vmap = drm_gem_dmabuf_vmap, > .vunmap = drm_gem_dmabuf_vunmap, > }, > - .device_attach = drm_gem_map_attach, > + .device_attach = virtgpu_gem_device_attach, > .get_uuid = virtgpu_virtio_get_uuid, > }; > ^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v2 1/1] drm/virtio: Implement device_attach 2024-01-29 10:31 [PATCH v2 1/1] drm/virtio: Implement device_attach Julia Zhang 2024-01-29 13:12 ` Christian König @ 2024-01-30 11:10 ` Daniel Vetter 2024-01-30 11:16 ` Daniel Vetter 1 sibling, 1 reply; 6+ messages in thread From: Daniel Vetter @ 2024-01-30 11:10 UTC (permalink / raw) To: Julia Zhang Cc: Gurchetan Singh, Chia-I Wu, David Airlie, Gerd Hoffmann, linux-kernel, dri-devel, amd-gfx, virtualization, Alex Deucher, Christian König, Daniel Vetter, David Airlie, Erik Faye-Lund, Marek Olšák, Pierre-Eric Pelloux-Prayer, Honglei Huang, Chen Jiqian, Huang Rui On Mon, Jan 29, 2024 at 06:31:19PM +0800, Julia Zhang wrote: > As vram objects don't have backing pages and thus can't implement > drm_gem_object_funcs.get_sg_table callback. This removes drm dma-buf > callbacks in virtgpu_gem_map_dma_buf()/virtgpu_gem_unmap_dma_buf() > and implement virtgpu specific map/unmap/attach callbacks to support > both of shmem objects and vram objects. 
> > Signed-off-by: Julia Zhang <julia.zhang@amd.com> > --- > drivers/gpu/drm/virtio/virtgpu_prime.c | 40 +++++++++++++++++++++++--- > 1 file changed, 36 insertions(+), 4 deletions(-) > > diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c > index 44425f20d91a..b490a5343b06 100644 > --- a/drivers/gpu/drm/virtio/virtgpu_prime.c > +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c > @@ -49,11 +49,26 @@ virtgpu_gem_map_dma_buf(struct dma_buf_attachment *attach, > { > struct drm_gem_object *obj = attach->dmabuf->priv; > struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); > + struct sg_table *sgt; > + int ret; > > if (virtio_gpu_is_vram(bo)) > return virtio_gpu_vram_map_dma_buf(bo, attach->dev, dir); > > - return drm_gem_map_dma_buf(attach, dir); > + sgt = drm_prime_pages_to_sg(obj->dev, > + to_drm_gem_shmem_obj(obj)->pages, > + obj->size >> PAGE_SHIFT); > + if (IS_ERR(sgt)) > + return sgt; > + > + ret = dma_map_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); > + if (ret) { > + sg_free_table(sgt); > + kfree(sgt); > + return ERR_PTR(ret); > + } > + > + return sgt; > } > > static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, > @@ -63,12 +78,29 @@ static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, > struct drm_gem_object *obj = attach->dmabuf->priv; > struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); > > + if (!sgt) > + return; > + > if (virtio_gpu_is_vram(bo)) { > virtio_gpu_vram_unmap_dma_buf(attach->dev, sgt, dir); > - return; > + } else { > + dma_unmap_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); > + sg_free_table(sgt); > + kfree(sgt); > } > +} > + > +static int virtgpu_gem_device_attach(struct dma_buf *dma_buf, > + struct dma_buf_attachment *attach) > +{ > + struct drm_gem_object *obj = attach->dmabuf->priv; > + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); > + int ret = 0; > + > + if (!virtio_gpu_is_vram(bo) && obj->funcs->pin) > + ret = 
obj->funcs->pin(obj);
>
> -	drm_gem_unmap_dma_buf(attach, sgt, dir);
> +	return ret;

This doesn't look like what I expected. There should be no need to
change the map/unmap functions, especially not for the usual gem bo case.
We should definitely keep using the exact same code for that. Instead all
I expected is roughly

virtgpu_gem_device_attach()
{
	if (virtio_gpu_is_vram(bo)) {
		if (can_access_virtio_vram_directly(attach->dev))
			return 0;
		else
			return -EBUSY;
	} else {
		return drm_gem_map_attach();
	}
}

Note that I think can_access_virtio_vram_directly() needs to be
implemented first. I'm not even sure it's possible, might be that all the
importers need to set the attachment->peer2peer flag. Which is why this
thing exists really. But that's a pile more work to do.

Frankly the more I look at the original patch that added vram export
support the more this just looks like a "pls revert, this is just too
broken".

We should definitely not open-code any functions for the gem_bo export
case, which your patch seems to do? Or maybe I'm just extremely confused.
-Sima

>
> 	static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = {
> @@ -83,7 +115,7 @@ static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = {
> 		.vmap = drm_gem_dmabuf_vmap,
> 		.vunmap = drm_gem_dmabuf_vunmap,
> 	},
> -	.device_attach = drm_gem_map_attach,
> +	.device_attach = virtgpu_gem_device_attach,
> 	.get_uuid = virtgpu_virtio_get_uuid,
> };
>
> --
> 2.34.1
>

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [PATCH v2 1/1] drm/virtio: Implement device_attach 2024-01-30 11:10 ` Daniel Vetter @ 2024-01-30 11:16 ` Daniel Vetter 2024-01-30 14:23 ` Christian König 0 siblings, 1 reply; 6+ messages in thread From: Daniel Vetter @ 2024-01-30 11:16 UTC (permalink / raw) To: Julia Zhang, Gurchetan Singh, Chia-I Wu, David Airlie, Gerd Hoffmann, linux-kernel, dri-devel, amd-gfx, virtualization, Alex Deucher, Christian König, David Airlie, Erik Faye-Lund, Marek Olšák, Pierre-Eric Pelloux-Prayer, Honglei Huang, Chen Jiqian, Huang Rui, David Stevens On Tue, Jan 30, 2024 at 12:10:31PM +0100, Daniel Vetter wrote: > On Mon, Jan 29, 2024 at 06:31:19PM +0800, Julia Zhang wrote: > > As vram objects don't have backing pages and thus can't implement > > drm_gem_object_funcs.get_sg_table callback. This removes drm dma-buf > > callbacks in virtgpu_gem_map_dma_buf()/virtgpu_gem_unmap_dma_buf() > > and implement virtgpu specific map/unmap/attach callbacks to support > > both of shmem objects and vram objects. > > > > Signed-off-by: Julia Zhang <julia.zhang@amd.com> > > --- > > drivers/gpu/drm/virtio/virtgpu_prime.c | 40 +++++++++++++++++++++++--- > > 1 file changed, 36 insertions(+), 4 deletions(-) > > > > diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c > > index 44425f20d91a..b490a5343b06 100644 > > --- a/drivers/gpu/drm/virtio/virtgpu_prime.c > > +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c > > @@ -49,11 +49,26 @@ virtgpu_gem_map_dma_buf(struct dma_buf_attachment *attach, > > { > > struct drm_gem_object *obj = attach->dmabuf->priv; > > struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); > > + struct sg_table *sgt; > > + int ret; > > > > if (virtio_gpu_is_vram(bo)) > > return virtio_gpu_vram_map_dma_buf(bo, attach->dev, dir); > > > > - return drm_gem_map_dma_buf(attach, dir); > > + sgt = drm_prime_pages_to_sg(obj->dev, > > + to_drm_gem_shmem_obj(obj)->pages, > > + obj->size >> PAGE_SHIFT); > > + if (IS_ERR(sgt)) > > + return sgt; > > + > 
> + ret = dma_map_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); > > + if (ret) { > > + sg_free_table(sgt); > > + kfree(sgt); > > + return ERR_PTR(ret); > > + } > > + > > + return sgt; > > } > > > > static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, > > @@ -63,12 +78,29 @@ static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, > > struct drm_gem_object *obj = attach->dmabuf->priv; > > struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); > > > > + if (!sgt) > > + return; > > + > > if (virtio_gpu_is_vram(bo)) { > > virtio_gpu_vram_unmap_dma_buf(attach->dev, sgt, dir); > > - return; > > + } else { > > + dma_unmap_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); > > + sg_free_table(sgt); > > + kfree(sgt); > > } > > +} > > + > > +static int virtgpu_gem_device_attach(struct dma_buf *dma_buf, > > + struct dma_buf_attachment *attach) > > +{ > > + struct drm_gem_object *obj = attach->dmabuf->priv; > > + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); > > + int ret = 0; > > + > > + if (!virtio_gpu_is_vram(bo) && obj->funcs->pin) > > + ret = obj->funcs->pin(obj); > > > > - drm_gem_unmap_dma_buf(attach, sgt, dir); > > + return ret; > > This doesn't look like what I've expected. There should be no need to > change the map/unmap functions, especially not for the usual gem bo case. > We should definitely keep using the exact same code for that. Instead all > I expected is roughly > > virtgpu_gem_device_attach() > { > if (virtio_gpu_is_vram(bo)) { > if (can_access_virtio_vram_directly(attach->dev) > return 0; > else > return -EBUSY; > } else { > return drm_gem_map_attach(); > } > } > > Note that I think can_access_virtio_vram_directly() needs to be > implemented first. I'm not even sure it's possible, might be that all the > importers need to set the attachment->peer2peer flag. Which is why this > thing exists really. But that's a pile more work to do. 
>
> Frankly the more I look at the original patch that added vram export
> support the more this just looks like a "pls revert, this is just too
> broken".

The commit I mean is this one: ea5ea3d8a117 ("drm/virtio: support mapping
exported vram"). The commit message definitely needs to cite that one, and
also needs a cc: stable because not rejecting invalid imports is a pretty
big deal.

Also adding David.
-Sima

> We should definitely not open-code any functions for the gem_bo export
> case, which your patch seems to do? Or maybe I'm just extremely confused.
> -Sima
>
> > 	static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = {
> > @@ -83,7 +115,7 @@ static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = {
> > 		.vmap = drm_gem_dmabuf_vmap,
> > 		.vunmap = drm_gem_dmabuf_vunmap,
> > 	},
> > -	.device_attach = drm_gem_map_attach,
> > +	.device_attach = virtgpu_gem_device_attach,
> > 	.get_uuid = virtgpu_virtio_get_uuid,
> > };
> >
> > --
> > 2.34.1
> >
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [PATCH v2 1/1] drm/virtio: Implement device_attach 2024-01-30 11:16 ` Daniel Vetter @ 2024-01-30 14:23 ` Christian König 2024-01-31 10:20 ` Zhang, Julia 0 siblings, 1 reply; 6+ messages in thread From: Christian König @ 2024-01-30 14:23 UTC (permalink / raw) To: Julia Zhang, Gurchetan Singh, Chia-I Wu, David Airlie, Gerd Hoffmann, linux-kernel, dri-devel, amd-gfx, virtualization, Alex Deucher, David Airlie, Erik Faye-Lund, Marek Olšák, Pierre-Eric Pelloux-Prayer, Honglei Huang, Chen Jiqian, Huang Rui, David Stevens Am 30.01.24 um 12:16 schrieb Daniel Vetter: > On Tue, Jan 30, 2024 at 12:10:31PM +0100, Daniel Vetter wrote: >> On Mon, Jan 29, 2024 at 06:31:19PM +0800, Julia Zhang wrote: >>> As vram objects don't have backing pages and thus can't implement >>> drm_gem_object_funcs.get_sg_table callback. This removes drm dma-buf >>> callbacks in virtgpu_gem_map_dma_buf()/virtgpu_gem_unmap_dma_buf() >>> and implement virtgpu specific map/unmap/attach callbacks to support >>> both of shmem objects and vram objects. 
>>> >>> Signed-off-by: Julia Zhang <julia.zhang@amd.com> >>> --- >>> drivers/gpu/drm/virtio/virtgpu_prime.c | 40 +++++++++++++++++++++++--- >>> 1 file changed, 36 insertions(+), 4 deletions(-) >>> >>> diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c >>> index 44425f20d91a..b490a5343b06 100644 >>> --- a/drivers/gpu/drm/virtio/virtgpu_prime.c >>> +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c >>> @@ -49,11 +49,26 @@ virtgpu_gem_map_dma_buf(struct dma_buf_attachment *attach, >>> { >>> struct drm_gem_object *obj = attach->dmabuf->priv; >>> struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); >>> + struct sg_table *sgt; >>> + int ret; >>> >>> if (virtio_gpu_is_vram(bo)) >>> return virtio_gpu_vram_map_dma_buf(bo, attach->dev, dir); >>> >>> - return drm_gem_map_dma_buf(attach, dir); >>> + sgt = drm_prime_pages_to_sg(obj->dev, >>> + to_drm_gem_shmem_obj(obj)->pages, >>> + obj->size >> PAGE_SHIFT); >>> + if (IS_ERR(sgt)) >>> + return sgt; >>> + >>> + ret = dma_map_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); >>> + if (ret) { >>> + sg_free_table(sgt); >>> + kfree(sgt); >>> + return ERR_PTR(ret); >>> + } >>> + >>> + return sgt; >>> } >>> >>> static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, >>> @@ -63,12 +78,29 @@ static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, >>> struct drm_gem_object *obj = attach->dmabuf->priv; >>> struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); >>> >>> + if (!sgt) >>> + return; >>> + >>> if (virtio_gpu_is_vram(bo)) { >>> virtio_gpu_vram_unmap_dma_buf(attach->dev, sgt, dir); >>> - return; >>> + } else { >>> + dma_unmap_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); >>> + sg_free_table(sgt); >>> + kfree(sgt); >>> } >>> +} >>> + >>> +static int virtgpu_gem_device_attach(struct dma_buf *dma_buf, >>> + struct dma_buf_attachment *attach) >>> +{ >>> + struct drm_gem_object *obj = attach->dmabuf->priv; >>> + struct virtio_gpu_object *bo 
= gem_to_virtio_gpu_obj(obj);
>>> +	int ret = 0;
>>> +
>>> +	if (!virtio_gpu_is_vram(bo) && obj->funcs->pin)
>>> +		ret = obj->funcs->pin(obj);
>>>
>>> -	drm_gem_unmap_dma_buf(attach, sgt, dir);
>>> +	return ret;
>> This doesn't look like what I've expected. There should be no need to
>> change the map/unmap functions, especially not for the usual gem bo case.
>> We should definitely keep using the exact same code for that. Instead all
>> I expected is roughly
>>
>> virtgpu_gem_device_attach()
>> {
>> 	if (virtio_gpu_is_vram(bo)) {
>> 		if (can_access_virtio_vram_directly(attach->dev)
>> 			return 0;
>> 		else
>> 			return -EBUSY;
>> 	} else {
>> 		return drm_gem_map_attach();
>> 	}
>> }
>>
>> Note that I think can_access_virtio_vram_directly() needs to be
>> implemented first. I'm not even sure it's possible, might be that all the
>> importers need to set the attachment->peer2peer flag. Which is why this
>> thing exists really. But that's a pile more work to do.

Yeah, that is really just speculative. All importers need to set the
peer2peer flag just in case.

What happens under the hood is that IOMMU redirects the "VRAM" memory
access to whatever address the DMA-buf on the host is pointing to
(system, VRAM, doorbell, IOMMU, whatever).

I'm also not 100% sure if all the cache snooping is done correctly in all
cases, but for now it seems to work.

>>
>> Frankly the more I look at the original patch that added vram export
>> support the more this just looks like a "pls revert, this is just too
>> broken".
> The commit I mean is this one: ea5ea3d8a117 ("drm/virtio: support mapping
> exported vram"). The commit message definitely needs to cite that one, and
> also needs a cc: stable because not rejecting invalid imports is a pretty
> big deal.

Yeah, I've pointed out that commit in an internal discussion as well.
> -Sima > >> We should definitely not open-code any functions for the gem_bo export >> case, which your patch seems to do? Or maybe I'm just extremely confused. >> -Sima >> >>> >>> static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = { >>> @@ -83,7 +115,7 @@ static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = { >>> .vmap = drm_gem_dmabuf_vmap, >>> .vunmap = drm_gem_dmabuf_vunmap, >>> }, >>> - .device_attach = drm_gem_map_attach, >>> + .device_attach = virtgpu_gem_device_attach, >>> .get_uuid = virtgpu_virtio_get_uuid, >>> }; >>> >>> -- >>> 2.34.1 >>> >> -- >> Daniel Vetter >> Software Engineer, Intel Corporation >> http://blog.ffwll.ch ^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v2 1/1] drm/virtio: Implement device_attach 2024-01-30 14:23 ` Christian König @ 2024-01-31 10:20 ` Zhang, Julia 0 siblings, 0 replies; 6+ messages in thread From: Zhang, Julia @ 2024-01-31 10:20 UTC (permalink / raw) To: Koenig, Christian, Zhang, Julia, Gurchetan Singh, Chia-I Wu, David Airlie, Gerd Hoffmann, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org, virtualization@lists.linux-foundation.org, Deucher, Alexander, David Airlie, Erik Faye-Lund, Olsak, Marek, Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Chen, Jiqian, Huang, Ray, David Stevens On 2024/1/30 22:23, Christian König wrote: > Am 30.01.24 um 12:16 schrieb Daniel Vetter: >> On Tue, Jan 30, 2024 at 12:10:31PM +0100, Daniel Vetter wrote: >>> On Mon, Jan 29, 2024 at 06:31:19PM +0800, Julia Zhang wrote: >>>> As vram objects don't have backing pages and thus can't implement >>>> drm_gem_object_funcs.get_sg_table callback. This removes drm dma-buf >>>> callbacks in virtgpu_gem_map_dma_buf()/virtgpu_gem_unmap_dma_buf() >>>> and implement virtgpu specific map/unmap/attach callbacks to support >>>> both of shmem objects and vram objects. 
>>>> >>>> Signed-off-by: Julia Zhang <julia.zhang@amd.com> >>>> --- >>>> drivers/gpu/drm/virtio/virtgpu_prime.c | 40 +++++++++++++++++++++++--- >>>> 1 file changed, 36 insertions(+), 4 deletions(-) >>>> >>>> diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c >>>> index 44425f20d91a..b490a5343b06 100644 >>>> --- a/drivers/gpu/drm/virtio/virtgpu_prime.c >>>> +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c >>>> @@ -49,11 +49,26 @@ virtgpu_gem_map_dma_buf(struct dma_buf_attachment *attach, >>>> { >>>> struct drm_gem_object *obj = attach->dmabuf->priv; >>>> struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); >>>> + struct sg_table *sgt; >>>> + int ret; >>>> if (virtio_gpu_is_vram(bo)) >>>> return virtio_gpu_vram_map_dma_buf(bo, attach->dev, dir); >>>> - return drm_gem_map_dma_buf(attach, dir); >>>> + sgt = drm_prime_pages_to_sg(obj->dev, >>>> + to_drm_gem_shmem_obj(obj)->pages, >>>> + obj->size >> PAGE_SHIFT); >>>> + if (IS_ERR(sgt)) >>>> + return sgt; >>>> + >>>> + ret = dma_map_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); >>>> + if (ret) { >>>> + sg_free_table(sgt); >>>> + kfree(sgt); >>>> + return ERR_PTR(ret); >>>> + } >>>> + >>>> + return sgt; >>>> } >>>> static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, >>>> @@ -63,12 +78,29 @@ static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach, >>>> struct drm_gem_object *obj = attach->dmabuf->priv; >>>> struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); >>>> + if (!sgt) >>>> + return; >>>> + >>>> if (virtio_gpu_is_vram(bo)) { >>>> virtio_gpu_vram_unmap_dma_buf(attach->dev, sgt, dir); >>>> - return; >>>> + } else { >>>> + dma_unmap_sgtable(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC); >>>> + sg_free_table(sgt); >>>> + kfree(sgt); >>>> } >>>> +} >>>> + >>>> +static int virtgpu_gem_device_attach(struct dma_buf *dma_buf, >>>> + struct dma_buf_attachment *attach) >>>> +{ >>>> + struct drm_gem_object *obj = 
attach->dmabuf->priv; >>>> + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); >>>> + int ret = 0; >>>> + >>>> + if (!virtio_gpu_is_vram(bo) && obj->funcs->pin) >>>> + ret = obj->funcs->pin(obj); >>>> - drm_gem_unmap_dma_buf(attach, sgt, dir); >>>> + return ret; >>> This doesn't look like what I've expected. There should be no need to >>> change the map/unmap functions, especially not for the usual gem bo case. >>> We should definitely keep using the exact same code for that. Instead all >>> I expected is roughly >>> >>> virtgpu_gem_device_attach() >>> { >>> if (virtio_gpu_is_vram(bo)) { >>> if (can_access_virtio_vram_directly(attach->dev) >>> return 0; >>> else >>> return -EBUSY; >>> } else { >>> return drm_gem_map_attach(); >>> } >>> } >>> >>> Note that I think can_access_virtio_vram_directly() needs to be >>> implemented first. I'm not even sure it's possible, might be that all the >>> importers need to set the attachment->peer2peer flag. Which is why this >>> thing exists really. But that's a pile more work to do. > Hi Sima, Christian, > Yeah, that is really just speculative. All importers need to set the peer2peer flag just in case. I see, I will modify this. > > What happens under the hood is that IOMMU redirects the "VRAM" memory access to whatever address the DMA-buf on the host is pointing to (system, VRAM, doorbell, IOMMU, whatever). > > I'm also not 100% sure if all the cache snooping is done correctly in all cases, but for now it seems to work. >>>> >>> Frankly the more I look at the original patch that added vram export >>> support the more this just looks like a "pls revert, this is just too >>> broken". >> The commit I mean is this one: ea5ea3d8a117 ("drm/virtio: support mapping >> exported vram"). The commit message definitely needs to cite that one, and >> also needs a cc: stable because not rejecting invalid imports is a pretty >> big deal. > > Yeah, I've pointed out that commit in an internal discussion as well. 
> I was just not aware that it's that severely broken.

Yeah, we have mentioned this patch before, but I don't fully understand
why it is considered too broken. Without exporting vram objects, the dGPU
prime feature could not be realized. Would you mind explaining this in
more detail? Thanks!

Best regards,
Julia

> Regards,
> Christian.
>
>>
>> Also adding David.
>> -Sima
>>
>>> We should definitely not open-code any functions for the gem_bo export
>>> case, which your patch seems to do? Or maybe I'm just extremely confused.
>>> -Sima
>>>
>>>> 	static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = {
>>>> @@ -83,7 +115,7 @@ static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops = {
>>>> 		.vmap = drm_gem_dmabuf_vmap,
>>>> 		.vunmap = drm_gem_dmabuf_vunmap,
>>>> 	},
>>>> -	.device_attach = drm_gem_map_attach,
>>>> +	.device_attach = virtgpu_gem_device_attach,
>>>> 	.get_uuid = virtgpu_virtio_get_uuid,
>>>> };
>>>>
>>>> --
>>>> 2.34.1
>>>>
>>> --
>>> Daniel Vetter
>>> Software Engineer, Intel Corporation
>>> http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 6+ messages in thread
end of thread, other threads:[~2024-01-31 10:20 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-01-29 10:31 [PATCH v2 1/1] drm/virtio: Implement device_attach Julia Zhang
2024-01-29 13:12 ` Christian König
2024-01-30 11:10 ` Daniel Vetter
2024-01-30 11:16   ` Daniel Vetter
2024-01-30 14:23     ` Christian König
2024-01-31 10:20       ` Zhang, Julia