* Re: [PATCH v2 7/8] drm/virtio: Support memory shrinking
       [not found] ` <20220314224253.236359-8-dmitry.osipenko@collabora.com>
@ 2022-03-15 12:43   ` Emil Velikov
  0 siblings, 0 replies; 6+ messages in thread

From: Emil Velikov @ 2022-03-15 12:43 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Rob Herring, Gert Wollny, Tomeu Vizoso, Gustavo Padovan,
      David Airlie, Thomas Zimmermann, ML dri-devel, Maarten Lankhorst,
      linux-kernel@vger.kernel.org, Maxime Ripard, Gurchetan Singh,
      Daniel Vetter, Dmitry Osipenko, Steven Price,
      open list:VIRTIO GPU DRIVER, Chia-I Wu, Alyssa Rosenzweig

Greetings everyone,

Food for thought: would it make sense to make the madvise ioctl a
generic DRM one? Looking around, i915, msm and panfrost already have
one, and the virtio implementation [below] seems as generic as it gets.

On Mon, 14 Mar 2022 at 22:44, Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
> +#define VIRTGPU_MADV_WILLNEED 0
> +#define VIRTGPU_MADV_DONTNEED 1
> +struct drm_virtgpu_madvise {
> +	__u32 bo_handle;
> +	__u32 retained; /* out, non-zero if BO can be used */
> +	__u32 madv;
> +	__u32 pad;

This seems to be based on panfrost/msm, yet the names (bo_handle vs
handle), layout and documentation vary. Why is that - copy/paste is
cheap :-P

HTH
-Emil
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [PATCH v2 0/8] Add memory shrinker to VirtIO-GPU DRM driver
       [not found] <20220314224253.236359-1-dmitry.osipenko@collabora.com>
       [not found] ` <20220314224253.236359-8-dmitry.osipenko@collabora.com>
@ 2022-03-15 12:47   ` Emil Velikov
       [not found] ` <20220314224253.236359-5-dmitry.osipenko@collabora.com>
       [not found] ` <20220314224253.236359-7-dmitry.osipenko@collabora.com>
  3 siblings, 0 replies; 6+ messages in thread

From: Emil Velikov @ 2022-03-15 12:47 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Rob Herring, Gert Wollny, Tomeu Vizoso, Gustavo Padovan,
      David Airlie, Thomas Zimmermann, ML dri-devel, Maarten Lankhorst,
      linux-kernel@vger.kernel.org, Maxime Ripard, Gurchetan Singh,
      Daniel Vetter, Dmitry Osipenko, Steven Price,
      open list:VIRTIO GPU DRIVER, Chia-I Wu, Alyssa Rosenzweig

On Mon, 14 Mar 2022 at 22:44, Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
> Dmitry Osipenko (8):
>   drm/virtio: Correct drm_gem_shmem_get_sg_table() error handling
>   drm/virtio: Check whether transferred 2D BO is shmem
>   drm/virtio: Unlock GEM reservations in error code path

These three are legitimate fixes that we want regardless of the
shrinker. Please add the respective "Fixes" tags so they find their way
into the stable trees. With that, these three are:

Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>

HTH
Emil
* Re: [PATCH v2 4/8] drm/virtio: Improve DMA API usage for shmem BOs
       [not found] ` <20220314224253.236359-5-dmitry.osipenko@collabora.com>
@ 2022-03-16 12:41   ` Robin Murphy
  0 siblings, 0 replies; 6+ messages in thread

From: Robin Murphy @ 2022-03-16 12:41 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
      Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
      Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
      Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig
  Cc: Gustavo Padovan, Dmitry Osipenko, linux-kernel, dri-devel,
      virtualization

On 2022-03-14 22:42, Dmitry Osipenko wrote:
> DRM API requires the DRM's driver to be backed with the device that can
> be used for generic DMA operations. The VirtIO-GPU device can't perform
> DMA operations if it uses PCI transport because PCI device driver creates
> a virtual VirtIO-GPU device that isn't associated with the PCI. Use PCI's
> GPU device for the DRM's device instead of the VirtIO-GPU device and drop
> DMA-related hacks from the VirtIO-GPU driver.
>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.c    | 22 +++++++---
>  drivers/gpu/drm/virtio/virtgpu_drv.h    |  5 +--
>  drivers/gpu/drm/virtio/virtgpu_kms.c    |  7 ++--
>  drivers/gpu/drm/virtio/virtgpu_object.c | 56 +++++--------------------
>  drivers/gpu/drm/virtio/virtgpu_vq.c     | 13 +++---
>  5 files changed, 37 insertions(+), 66 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
> index 5f25a8d15464..8449dad3e65c 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
> @@ -46,9 +46,9 @@ static int virtio_gpu_modeset = -1;
>  MODULE_PARM_DESC(modeset, "Disable/Enable modesetting");
>  module_param_named(modeset, virtio_gpu_modeset, int, 0400);
>
> -static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vdev)
> +static int virtio_gpu_pci_quirk(struct drm_device *dev)
>  {
> -	struct pci_dev *pdev = to_pci_dev(vdev->dev.parent);
> +	struct pci_dev *pdev = to_pci_dev(dev->dev);
>  	const char *pname = dev_name(&pdev->dev);
>  	bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
>  	char unique[20];
> @@ -101,6 +101,7 @@ static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vd
>  static int virtio_gpu_probe(struct virtio_device *vdev)
>  {
>  	struct drm_device *dev;
> +	struct device *dma_dev;
>  	int ret;
>
>  	if (drm_firmware_drivers_only() && virtio_gpu_modeset == -1)
> @@ -109,18 +110,29 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
>  	if (virtio_gpu_modeset == 0)
>  		return -EINVAL;
>
> -	dev = drm_dev_alloc(&driver, &vdev->dev);
> +	/*
> +	 * If GPU's parent is a PCI device, then we will use this PCI device
> +	 * for the DRM's driver device because GPU won't have PCI's IOMMU DMA
> +	 * ops in this case since GPU device is sitting on a separate (from PCI)
> +	 * virtio-bus.
> +	 */
> +	if (!strcmp(vdev->dev.parent->bus->name, "pci"))

Nit: dev_is_pci() ?

However, what about other VirtIO transports? Wouldn't virtio-mmio with
F_ACCESS_PLATFORM be in a similar situation?

Robin.

> +		dma_dev = vdev->dev.parent;
> +	else
> +		dma_dev = &vdev->dev;
> +
> +	dev = drm_dev_alloc(&driver, dma_dev);
>  	if (IS_ERR(dev))
>  		return PTR_ERR(dev);
>  	vdev->priv = dev;
>
>  	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
> -		ret = virtio_gpu_pci_quirk(dev, vdev);
> +		ret = virtio_gpu_pci_quirk(dev);
>  		if (ret)
>  			goto err_free;
>  	}
>
> -	ret = virtio_gpu_init(dev);
> +	ret = virtio_gpu_init(vdev, dev);
>  	if (ret)
>  		goto err_free;
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 0a194aaad419..b2d93cb12ebf 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -100,8 +100,6 @@ struct virtio_gpu_object {
>
>  struct virtio_gpu_object_shmem {
>  	struct virtio_gpu_object base;
> -	struct sg_table *pages;
> -	uint32_t mapped;
>  };
>
>  struct virtio_gpu_object_vram {
> @@ -214,7 +212,6 @@ struct virtio_gpu_drv_cap_cache {
>  };
>
>  struct virtio_gpu_device {
> -	struct device *dev;
>  	struct drm_device *ddev;
>
>  	struct virtio_device *vdev;
> @@ -282,7 +279,7 @@ extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
>  void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
>
>  /* virtgpu_kms.c */
> -int virtio_gpu_init(struct drm_device *dev);
> +int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev);
>  void virtio_gpu_deinit(struct drm_device *dev);
>  void virtio_gpu_release(struct drm_device *dev);
>  int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file);
> diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
> index 3313b92db531..0d1e3eb61bee 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_kms.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
> @@ -110,7 +110,7 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
>  	vgdev->num_capsets = num_capsets;
>  }
>
> -int virtio_gpu_init(struct drm_device *dev)
> +int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
>  {
>  	static vq_callback_t *callbacks[] = {
>  		virtio_gpu_ctrl_ack, virtio_gpu_cursor_ack
> @@ -123,7 +123,7 @@ int virtio_gpu_init(struct drm_device *dev)
>  	u32 num_scanouts, num_capsets;
>  	int ret = 0;
>
> -	if (!virtio_has_feature(dev_to_virtio(dev->dev), VIRTIO_F_VERSION_1))
> +	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
>  		return -ENODEV;
>
>  	vgdev = kzalloc(sizeof(struct virtio_gpu_device), GFP_KERNEL);
> @@ -132,8 +132,7 @@ int virtio_gpu_init(struct drm_device *dev)
>
>  	vgdev->ddev = dev;
>  	dev->dev_private = vgdev;
> -	vgdev->vdev = dev_to_virtio(dev->dev);
> -	vgdev->dev = dev->dev;
> +	vgdev->vdev = vdev;
>
>  	spin_lock_init(&vgdev->display_info_lock);
>  	spin_lock_init(&vgdev->resource_export_lock);
> diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
> index 0b8cbb87f8d8..1964c0d8b51f 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> @@ -67,21 +67,6 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
>
>  	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
>  	if (virtio_gpu_is_shmem(bo)) {
> -		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
> -
> -		if (shmem->pages) {
> -			if (shmem->mapped) {
> -				dma_unmap_sgtable(vgdev->vdev->dev.parent,
> -						  shmem->pages, DMA_TO_DEVICE, 0);
> -				shmem->mapped = 0;
> -			}
> -
> -			sg_free_table(shmem->pages);
> -			kfree(shmem->pages);
> -			shmem->pages = NULL;
> -			drm_gem_shmem_unpin(&bo->base);
> -		}
> -
>  		drm_gem_shmem_free(&bo->base);
>  	} else if (virtio_gpu_is_vram(bo)) {
>  		struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
> @@ -153,37 +138,18 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
>  					unsigned int *nents)
>  {
>  	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
> -	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
>  	struct scatterlist *sg;
> -	int si, ret;
> +	struct sg_table *pages;
> +	int si;
>
> -	ret = drm_gem_shmem_pin(&bo->base);
> -	if (ret < 0)
> -		return -EINVAL;
> -
> -	/*
> -	 * virtio_gpu uses drm_gem_shmem_get_sg_table instead of
> -	 * drm_gem_shmem_get_pages_sgt because virtio has it's own set of
> -	 * dma-ops. This is discouraged for other drivers, but should be fine
> -	 * since virtio_gpu doesn't support dma-buf import from other devices.
> -	 */
> -	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
> -	ret = PTR_ERR(shmem->pages);
> -	if (ret) {
> -		drm_gem_shmem_unpin(&bo->base);
> -		shmem->pages = NULL;
> -		return ret;
> -	}
> +	pages = drm_gem_shmem_get_pages_sgt(&bo->base);
> +	if (IS_ERR(pages))
> +		return PTR_ERR(pages);
>
> -	if (use_dma_api) {
> -		ret = dma_map_sgtable(vgdev->vdev->dev.parent,
> -				      shmem->pages, DMA_TO_DEVICE, 0);
> -		if (ret)
> -			return ret;
> -		*nents = shmem->mapped = shmem->pages->nents;
> -	} else {
> -		*nents = shmem->pages->orig_nents;
> -	}
> +	if (use_dma_api)
> +		*nents = pages->nents;
> +	else
> +		*nents = pages->orig_nents;
>
>  	*ents = kvmalloc_array(*nents,
>  			       sizeof(struct virtio_gpu_mem_entry),
> @@ -194,13 +160,13 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
>  	}
>
>  	if (use_dma_api) {
> -		for_each_sgtable_dma_sg(shmem->pages, sg, si) {
> +		for_each_sgtable_dma_sg(pages, sg, si) {
>  			(*ents)[si].addr = cpu_to_le64(sg_dma_address(sg));
>  			(*ents)[si].length = cpu_to_le32(sg_dma_len(sg));
>  			(*ents)[si].padding = 0;
>  		}
>  	} else {
> -		for_each_sgtable_sg(shmem->pages, sg, si) {
> +		for_each_sgtable_sg(pages, sg, si) {
>  			(*ents)[si].addr = cpu_to_le64(sg_phys(sg));
>  			(*ents)[si].length = cpu_to_le32(sg->length);
>  			(*ents)[si].padding = 0;
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
> index 2edf31806b74..06566e44307d 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vq.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
> @@ -593,11 +593,10 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
>  	struct virtio_gpu_transfer_to_host_2d *cmd_p;
>  	struct virtio_gpu_vbuffer *vbuf;
>  	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
> -	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
>
>  	if (virtio_gpu_is_shmem(bo) && use_dma_api)
> -		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
> -					    shmem->pages, DMA_TO_DEVICE);
> +		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
> +					    bo->base.sgt, DMA_TO_DEVICE);
>
>  	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
>  	memset(cmd_p, 0, sizeof(*cmd_p));
> @@ -1017,11 +1016,9 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
>  	struct virtio_gpu_vbuffer *vbuf;
>  	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
>
> -	if (virtio_gpu_is_shmem(bo) && use_dma_api) {
> -		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
> -		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
> -					    shmem->pages, DMA_TO_DEVICE);
> -	}
> +	if (virtio_gpu_is_shmem(bo) && use_dma_api)
> +		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
> +					    bo->base.sgt, DMA_TO_DEVICE);
>
>  	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
>  	memset(cmd_p, 0, sizeof(*cmd_p));
* Re: [PATCH v2 6/8] drm/shmem-helper: Add generic memory shrinker
       [not found] ` <20220314224253.236359-7-dmitry.osipenko@collabora.com>
@ 2022-03-16 20:00   ` Rob Clark
       [not found]     ` <be3b09ff-08ea-3e13-7d8c-06af6fffbd8f@collabora.com>
  2022-03-17 17:32   ` Daniel Vetter
  1 sibling, 1 reply; 6+ messages in thread

From: Rob Clark @ 2022-03-16 20:00 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Rob Herring, Gert Wollny, Tomeu Vizoso, Gustavo Padovan,
      David Airlie, Thomas Zimmermann, dri-devel, Maarten Lankhorst,
      Linux Kernel Mailing List, Maxime Ripard, Gurchetan Singh,
      Daniel Vetter, Dmitry Osipenko, Steven Price,
      open list:VIRTIO GPU DRIVER, Chia-I Wu, Alyssa Rosenzweig

On Mon, Mar 14, 2022 at 3:44 PM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
>
> Introduce a common DRM SHMEM shrinker. It allows to reduce code
> duplication among DRM drivers, it also handles complicated lockings
> for the drivers. This is initial version of the shrinker that covers
> basic needs of GPU drivers.
>
> This patch is based on a couple ideas borrowed from Rob's Clark MSM
> shrinker and Thomas' Zimmermann variant of SHMEM shrinker.
>
> GPU drivers that want to use generic DRM memory shrinker must support
> generic GEM reservations.
>
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 194 +++++++++++++++++++++++++
>  include/drm/drm_device.h               |   4 +
>  include/drm/drm_gem.h                  |  11 ++
>  include/drm/drm_gem_shmem_helper.h     |  25 ++++
>  4 files changed, 234 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 37009418cd28..35be2ee98f11 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -139,6 +139,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>  {
>  	struct drm_gem_object *obj = &shmem->base;
>
> +	/* take out shmem GEM object from the memory shrinker */
> +	drm_gem_shmem_madvise(shmem, 0);
> +
>  	WARN_ON(shmem->vmap_use_count);
>
>  	if (obj->import_attach) {
> @@ -163,6 +166,42 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>
> +static void drm_gem_shmem_update_purgeable_status(struct drm_gem_shmem_object *shmem)
> +{
> +	struct drm_gem_object *obj = &shmem->base;
> +	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
> +	size_t page_count = obj->size >> PAGE_SHIFT;
> +
> +	if (!gem_shrinker || obj->import_attach || !obj->funcs->purge)
> +		return;
> +
> +	mutex_lock(&shmem->vmap_lock);
> +	mutex_lock(&shmem->pages_lock);
> +	mutex_lock(&gem_shrinker->lock);
> +
> +	if (shmem->madv < 0) {
> +		list_del_init(&shmem->madv_list);
> +		goto unlock;
> +	} else if (shmem->madv > 0) {
> +		if (!list_empty(&shmem->madv_list))
> +			goto unlock;
> +
> +		WARN_ON(gem_shrinker->shrinkable_count + page_count < page_count);
> +		gem_shrinker->shrinkable_count += page_count;
> +
> +		list_add_tail(&shmem->madv_list, &gem_shrinker->lru);
> +	} else if (!list_empty(&shmem->madv_list)) {
> +		list_del_init(&shmem->madv_list);
> +
> +		WARN_ON(gem_shrinker->shrinkable_count < page_count);
> +		gem_shrinker->shrinkable_count -= page_count;
> +	}
> +unlock:
> +	mutex_unlock(&gem_shrinker->lock);
> +	mutex_unlock(&shmem->pages_lock);
> +	mutex_unlock(&shmem->vmap_lock);
> +}
> +
>  static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>  {
>  	struct drm_gem_object *obj = &shmem->base;
> @@ -366,6 +405,8 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
>  	ret = drm_gem_shmem_vmap_locked(shmem, map);
>  	mutex_unlock(&shmem->vmap_lock);
>
> +	drm_gem_shmem_update_purgeable_status(shmem);
> +
>  	return ret;
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_vmap);
> @@ -409,6 +450,8 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
>  	mutex_lock(&shmem->vmap_lock);
>  	drm_gem_shmem_vunmap_locked(shmem, map);
>  	mutex_unlock(&shmem->vmap_lock);
> +
> +	drm_gem_shmem_update_purgeable_status(shmem);
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_vunmap);
>
> @@ -451,6 +494,8 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
>
>  	mutex_unlock(&shmem->pages_lock);
>
> +	drm_gem_shmem_update_purgeable_status(shmem);
> +
>  	return (madv >= 0);
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_madvise);
> @@ -763,6 +808,155 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
>
> +static struct drm_gem_shmem_shrinker *
> +to_drm_shrinker(struct shrinker *shrinker)
> +{
> +	return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
> +}
> +
> +static unsigned long
> +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
> +				     struct shrink_control *sc)
> +{
> +	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
> +	u64 count = gem_shrinker->shrinkable_count;
> +
> +	if (count >= SHRINK_EMPTY)
> +		return SHRINK_EMPTY - 1;
> +
> +	return count ?: SHRINK_EMPTY;
> +}
> +
> +static unsigned long
> +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
> +				    struct shrink_control *sc)
> +{
> +	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
> +	struct drm_gem_shmem_object *shmem;
> +	struct list_head still_in_list;
> +	bool lock_contention = true;
> +	struct drm_gem_object *obj;
> +	unsigned long freed = 0;
> +
> +	INIT_LIST_HEAD(&still_in_list);
> +
> +	mutex_lock(&gem_shrinker->lock);
> +
> +	while (freed < sc->nr_to_scan) {
> +		shmem = list_first_entry_or_null(&gem_shrinker->lru,
> +						 typeof(*shmem), madv_list);
> +		if (!shmem)
> +			break;
> +
> +		obj = &shmem->base;
> +		list_move_tail(&shmem->madv_list, &still_in_list);
> +
> +		/*
> +		 * If it's in the process of being freed, gem_object->free()
> +		 * may be blocked on lock waiting to remove it. So just
> +		 * skip it.
> +		 */
> +		if (!kref_get_unless_zero(&obj->refcount))
> +			continue;
> +
> +		mutex_unlock(&gem_shrinker->lock);
> +
> +		/* prevent racing with job submission code paths */
> +		if (!dma_resv_trylock(obj->resv))
> +			goto shrinker_lock;

jfwiw, the trylock here in the msm code isn't so much for madvise (it
is an error to submit jobs that reference DONTNEED objects), but
instead for the case of evicting WILLNEED but inactive objects to
swap. Ie. in the case that we need to move bo's back in to memory, we
don't want to unpin/evict a buffer that is later on the list for the
same job.. msm shrinker re-uses the same scan loop for both
inactive_dontneed (purge) and inactive_willneed (evict)

I suppose using trylock is not technically wrong, and it would be a
good idea if the shmem helpers supported eviction as well. But I think
in the madvise/purge case if you lose the trylock then there is
something else bad going on.

Anyways, from the PoV of minimizing lock contention when under memory
pressure, this all looks good to me.

BR,
-R

> +
> +		/* prevent racing with the dma-buf exporting */
> +		if (!mutex_trylock(&gem_shrinker->dev->object_name_lock))
> +			goto resv_unlock;
> +
> +		if (!mutex_trylock(&shmem->vmap_lock))
> +			goto object_name_unlock;
> +
> +		if (!mutex_trylock(&shmem->pages_lock))
> +			goto vmap_unlock;
> +
> +		lock_contention = false;
> +
> +		/* check whether h/w uses this object */
> +		if (!dma_resv_test_signaled(obj->resv, true))
> +			goto pages_unlock;
> +
> +		/* GEM may've become unpurgeable while shrinker was unlocked */
> +		if (!drm_gem_shmem_is_purgeable(shmem))
> +			goto pages_unlock;
> +
> +		freed += obj->funcs->purge(obj);
> +pages_unlock:
> +		mutex_unlock(&shmem->pages_lock);
> +vmap_unlock:
> +		mutex_unlock(&shmem->vmap_lock);
> +object_name_unlock:
> +		mutex_unlock(&gem_shrinker->dev->object_name_lock);
> +resv_unlock:
> +		dma_resv_unlock(obj->resv);
> +shrinker_lock:
> +		drm_gem_object_put(&shmem->base);
> +		mutex_lock(&gem_shrinker->lock);
> +	}
> +
> +	list_splice_tail(&still_in_list, &gem_shrinker->lru);
> +	WARN_ON(gem_shrinker->shrinkable_count < freed);
> +	gem_shrinker->shrinkable_count -= freed;
> +
> +	mutex_unlock(&gem_shrinker->lock);
> +
> +	if (!freed && !lock_contention)
> +		return SHRINK_STOP;
> +
> +	return freed;
> +}
> +
> +int drm_gem_shmem_shrinker_register(struct drm_device *dev)
> +{
> +	struct drm_gem_shmem_shrinker *gem_shrinker;
> +	int err;
> +
> +	if (WARN_ON(dev->shmem_shrinker))
> +		return -EBUSY;
> +
> +	gem_shrinker = kzalloc(sizeof(*gem_shrinker), GFP_KERNEL);
> +	if (!gem_shrinker)
> +		return -ENOMEM;
> +
> +	gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects;
> +	gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects;
> +	gem_shrinker->base.seeks = DEFAULT_SEEKS;
> +	gem_shrinker->dev = dev;
> +
> +	INIT_LIST_HEAD(&gem_shrinker->lru);
> +	mutex_init(&gem_shrinker->lock);
> +
> +	dev->shmem_shrinker = gem_shrinker;
> +
> +	err = register_shrinker(&gem_shrinker->base);
> +	if (err) {
> +		dev->shmem_shrinker = NULL;
> +		kfree(gem_shrinker);
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_register);
> +
> +void drm_gem_shmem_shrinker_unregister(struct drm_device *dev)
> +{
> +	struct drm_gem_shmem_shrinker *gem_shrinker = dev->shmem_shrinker;
> +
> +	if (gem_shrinker) {
> +		unregister_shrinker(&gem_shrinker->base);
> +		mutex_destroy(&gem_shrinker->lock);
> +		dev->shmem_shrinker = NULL;
> +		kfree(gem_shrinker);
> +	}
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_unregister);
> +
>  MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
>  MODULE_IMPORT_NS(DMA_BUF);
>  MODULE_LICENSE("GPL v2");
> diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
> index 9923c7a6885e..929546cad894 100644
> --- a/include/drm/drm_device.h
> +++ b/include/drm/drm_device.h
> @@ -16,6 +16,7 @@ struct drm_vblank_crtc;
>  struct drm_vma_offset_manager;
>  struct drm_vram_mm;
>  struct drm_fb_helper;
> +struct drm_gem_shmem_shrinker;
>
>  struct inode;
>
> @@ -277,6 +278,9 @@ struct drm_device {
>  	/** @vram_mm: VRAM MM memory manager */
>  	struct drm_vram_mm *vram_mm;
>
> +	/** @shmem_shrinker: SHMEM GEM memory shrinker */
> +	struct drm_gem_shmem_shrinker *shmem_shrinker;
> +
>  	/**
>  	 * @switch_power_state:
>  	 *
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index e2941cee14b6..cdb99cfbf0bc 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -172,6 +172,17 @@ struct drm_gem_object_funcs {
>  	 * This is optional but necessary for mmap support.
>  	 */
>  	const struct vm_operations_struct *vm_ops;
> +
> +	/**
> +	 * @purge:
> +	 *
> +	 * Releases the GEM object's allocated backing storage to the system.
> +	 *
> +	 * Returns the number of pages that have been freed by purging the GEM object.
> +	 *
> +	 * This callback is used by the GEM shrinker.
> +	 */
> +	unsigned long (*purge)(struct drm_gem_object *obj);
>  };
>
>  /**
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index d0a57853c188..455254f131f6 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -6,6 +6,7 @@
>  #include <linux/fs.h>
>  #include <linux/mm.h>
>  #include <linux/mutex.h>
> +#include <linux/shrinker.h>
>
>  #include <drm/drm_file.h>
>  #include <drm/drm_gem.h>
> @@ -15,6 +16,7 @@
>  struct dma_buf_attachment;
>  struct drm_mode_create_dumb;
>  struct drm_printer;
> +struct drm_device;
>  struct sg_table;
>
>  /**
> @@ -272,6 +274,29 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
>  	return drm_gem_shmem_mmap(shmem, vma);
>  }
>
> +/**
> + * struct drm_gem_shmem_shrinker - Generic memory shrinker for shmem GEMs
> + */
> +struct drm_gem_shmem_shrinker {
> +	/** @base: Shrinker for purging shmem GEM objects */
> +	struct shrinker base;
> +
> +	/** @lock: Protects @lru */
> +	struct mutex lock;
> +
> +	/** @lru: List of shmem GEM objects available for purging */
> +	struct list_head lru;
> +
> +	/** @dev: DRM device that uses this shrinker */
> +	struct drm_device *dev;
> +
> +	/** @shrinkable_count: Count of shmem GEM pages to be purged */
> +	u64 shrinkable_count;
> +};
> +
> +int drm_gem_shmem_shrinker_register(struct drm_device *dev);
> +void drm_gem_shmem_shrinker_unregister(struct drm_device *dev);
> +
> /*
>  * Driver ops
>  */
> --
> 2.35.1
* Re: [PATCH v2 6/8] drm/shmem-helper: Add generic memory shrinker [not found] ` <be3b09ff-08ea-3e13-7d8c-06af6fffbd8f@collabora.com> @ 2022-03-17 16:13 ` Rob Clark 0 siblings, 0 replies; 6+ messages in thread From: Rob Clark @ 2022-03-17 16:13 UTC (permalink / raw) To: Dmitry Osipenko Cc: Rob Herring, Gert Wollny, Tomeu Vizoso, Gustavo Padovan, David Airlie, Thomas Zimmermann, dri-devel, Maarten Lankhorst, Linux Kernel Mailing List, Maxime Ripard, Gurchetan Singh, Daniel Vetter, Dmitry Osipenko, Steven Price, open list:VIRTIO GPU DRIVER, Chia-I Wu, Alyssa Rosenzweig On Wed, Mar 16, 2022 at 5:13 PM Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote: > > On 3/16/22 23:00, Rob Clark wrote: > > On Mon, Mar 14, 2022 at 3:44 PM Dmitry Osipenko > > <dmitry.osipenko@collabora.com> wrote: > >> > >> Introduce a common DRM SHMEM shrinker. It allows to reduce code > >> duplication among DRM drivers, it also handles complicated lockings > >> for the drivers. This is initial version of the shrinker that covers > >> basic needs of GPU drivers. > >> > >> This patch is based on a couple ideas borrowed from Rob's Clark MSM > >> shrinker and Thomas' Zimmermann variant of SHMEM shrinker. > >> > >> GPU drivers that want to use generic DRM memory shrinker must support > >> generic GEM reservations. 
> >> > >> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com> > >> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> > >> --- > >> drivers/gpu/drm/drm_gem_shmem_helper.c | 194 +++++++++++++++++++++++++ > >> include/drm/drm_device.h | 4 + > >> include/drm/drm_gem.h | 11 ++ > >> include/drm/drm_gem_shmem_helper.h | 25 ++++ > >> 4 files changed, 234 insertions(+) > >> > >> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c > >> index 37009418cd28..35be2ee98f11 100644 > >> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c > >> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c > >> @@ -139,6 +139,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) > >> { > >> struct drm_gem_object *obj = &shmem->base; > >> > >> + /* take out shmem GEM object from the memory shrinker */ > >> + drm_gem_shmem_madvise(shmem, 0); > >> + > >> WARN_ON(shmem->vmap_use_count); > >> > >> if (obj->import_attach) { > >> @@ -163,6 +166,42 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) > >> } > >> EXPORT_SYMBOL_GPL(drm_gem_shmem_free); > >> > >> +static void drm_gem_shmem_update_purgeable_status(struct drm_gem_shmem_object *shmem) > >> +{ > >> + struct drm_gem_object *obj = &shmem->base; > >> + struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker; > >> + size_t page_count = obj->size >> PAGE_SHIFT; > >> + > >> + if (!gem_shrinker || obj->import_attach || !obj->funcs->purge) > >> + return; > >> + > >> + mutex_lock(&shmem->vmap_lock); > >> + mutex_lock(&shmem->pages_lock); > >> + mutex_lock(&gem_shrinker->lock); > >> + > >> + if (shmem->madv < 0) { > >> + list_del_init(&shmem->madv_list); > >> + goto unlock; > >> + } else if (shmem->madv > 0) { > >> + if (!list_empty(&shmem->madv_list)) > >> + goto unlock; > >> + > >> + WARN_ON(gem_shrinker->shrinkable_count + page_count < page_count); > >> + gem_shrinker->shrinkable_count += page_count; > >> + > >> + list_add_tail(&shmem->madv_list, 
&gem_shrinker->lru); > >> + } else if (!list_empty(&shmem->madv_list)) { > >> + list_del_init(&shmem->madv_list); > >> + > >> + WARN_ON(gem_shrinker->shrinkable_count < page_count); > >> + gem_shrinker->shrinkable_count -= page_count; > >> + } > >> +unlock: > >> + mutex_unlock(&gem_shrinker->lock); > >> + mutex_unlock(&shmem->pages_lock); > >> + mutex_unlock(&shmem->vmap_lock); > >> +} > >> + > >> static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) > >> { > >> struct drm_gem_object *obj = &shmem->base; > >> @@ -366,6 +405,8 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, > >> ret = drm_gem_shmem_vmap_locked(shmem, map); > >> mutex_unlock(&shmem->vmap_lock); > >> > >> + drm_gem_shmem_update_purgeable_status(shmem); > >> + > >> return ret; > >> } > >> EXPORT_SYMBOL(drm_gem_shmem_vmap); > >> @@ -409,6 +450,8 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, > >> mutex_lock(&shmem->vmap_lock); > >> drm_gem_shmem_vunmap_locked(shmem, map); > >> mutex_unlock(&shmem->vmap_lock); > >> + > >> + drm_gem_shmem_update_purgeable_status(shmem); > >> } > >> EXPORT_SYMBOL(drm_gem_shmem_vunmap); > >> > >> @@ -451,6 +494,8 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) > >> > >> mutex_unlock(&shmem->pages_lock); > >> > >> + drm_gem_shmem_update_purgeable_status(shmem); > >> + > >> return (madv >= 0); > >> } > >> EXPORT_SYMBOL(drm_gem_shmem_madvise); > >> @@ -763,6 +808,155 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, > >> } > >> EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table); > >> > >> +static struct drm_gem_shmem_shrinker * > >> +to_drm_shrinker(struct shrinker *shrinker) > >> +{ > >> + return container_of(shrinker, struct drm_gem_shmem_shrinker, base); > >> +} > >> + > >> +static unsigned long > >> +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker, > >> + struct shrink_control *sc) > >> +{ > >> + struct drm_gem_shmem_shrinker *gem_shrinker = 
to_drm_shrinker(shrinker); > >> + u64 count = gem_shrinker->shrinkable_count; > >> + > >> + if (count >= SHRINK_EMPTY) > >> + return SHRINK_EMPTY - 1; > >> + > >> + return count ?: SHRINK_EMPTY; > >> +} > >> + > >> +static unsigned long > >> +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker, > >> + struct shrink_control *sc) > >> +{ > >> + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker); > >> + struct drm_gem_shmem_object *shmem; > >> + struct list_head still_in_list; > >> + bool lock_contention = true; > >> + struct drm_gem_object *obj; > >> + unsigned long freed = 0; > >> + > >> + INIT_LIST_HEAD(&still_in_list); > >> + > >> + mutex_lock(&gem_shrinker->lock); > >> + > >> + while (freed < sc->nr_to_scan) { > >> + shmem = list_first_entry_or_null(&gem_shrinker->lru, > >> + typeof(*shmem), madv_list); > >> + if (!shmem) > >> + break; > >> + > >> + obj = &shmem->base; > >> + list_move_tail(&shmem->madv_list, &still_in_list); > >> + > >> + /* > >> + * If it's in the process of being freed, gem_object->free() > >> + * may be blocked on lock waiting to remove it. So just > >> + * skip it. > >> + */ > >> + if (!kref_get_unless_zero(&obj->refcount)) > >> + continue; > >> + > >> + mutex_unlock(&gem_shrinker->lock); > >> + > >> + /* prevent racing with job submission code paths */ > >> + if (!dma_resv_trylock(obj->resv)) > >> + goto shrinker_lock; > > > > jfwiw, the trylock here is in the msm code isn't so much for madvise > > (it is an error to submit jobs that reference DONTNEED objects), but > > instead for the case of evicting WILLNEED but inactive objects to > > swap. Ie. in the case that we need to move bo's back in to memory, we > > don't want to unpin/evict a buffer that is later on the list for the > > same job.. msm shrinker re-uses the same scan loop for both > > inactive_dontneed (purge) and inactive_willneed (evict) > > I don't see connection between the objects on the shrinker's list and > the job's BOs. 
Jobs indeed must not have any objects marked as DONTNEED; > this case should never happen in practice, but we still need to protect > against it. Hmm, let me try to explain with a simple example.. hopefully this makes sense. Say you have a job with two BOs, A and B. BO A is not backed with memory (either it hasn't been used before or it was evicted). Allocating pages for A triggers the shrinker. But B is still on the inactive_willneed list; however, it is already locked (because we don't want to evict B to obtain backing pages for A). > > > I suppose using trylock is not technically wrong, and it would be a > > good idea if the shmem helpers supported eviction as well. But I > > think in the madvise/purge case, if you lose the trylock then there is > > something else bad going on. > > This trylock is intended to protect the job submission path from > racing with a madvise ioctl invocation followed by immediate purging of > BOs while a job is in the process of submission, i.e. it protects from a > use-after-free. ahh, ok > If you lose this trylock, then the shrinker can't use > dma_resv_test_signaled() reliably anymore and may purge a BO > before the job has had a chance to add its fence to the BO's reservation. > > > Anyways, from the PoV of minimizing lock contention when under memory > > pressure, this all looks good to me. > > Thank you. I may try to add generic eviction support to the v3. Eviction is a trickier thing to get right; I wouldn't blame you for splitting that out into its own patchset ;-) You probably also would want to make it opt-in for drivers using the shmem helpers. BR, -R
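To make the trylock rationale concrete, here is a compressed userspace sketch (pthread mutexes standing in for dma_resv; all names here are illustrative, not from the patch). The allocation path that woke the shrinker may already hold an object's lock, so a blocking lock in the scan loop could deadlock against it; trylock simply skips the busy object:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-in for a GEM object: "resv" plays the role of the
 * dma_resv lock held by whoever is currently using the object. */
struct mock_obj {
	pthread_mutex_t resv;
	int purgeable;		/* madv > 0 in the real code */
	unsigned long pages;
};

/* Trylock-or-skip scan, shaped like the patch's scan_objects(): a
 * contended lock means the object is in use (e.g. by the allocation
 * that triggered reclaim), so it is skipped rather than waited on. */
static unsigned long scan(struct mock_obj *objs, size_t n,
			  unsigned long nr_to_scan)
{
	unsigned long freed = 0;

	for (size_t i = 0; i < n && freed < nr_to_scan; i++) {
		struct mock_obj *obj = &objs[i];

		if (pthread_mutex_trylock(&obj->resv))
			continue;	/* busy: skip, don't deadlock */

		if (obj->purgeable) {
			freed += obj->pages;
			obj->pages = 0;
		}
		pthread_mutex_unlock(&obj->resv);
	}
	return freed;
}
```

A locked object is left alone and reconsidered on the next scan pass, which is exactly the "lock_contention" case the patch reports back to the shrinker core.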
* Re: [PATCH v2 6/8] drm/shmem-helper: Add generic memory shrinker [not found] ` <20220314224253.236359-7-dmitry.osipenko@collabora.com> 2022-03-16 20:00 ` [PATCH v2 6/8] drm/shmem-helper: Add generic memory shrinker Rob Clark @ 2022-03-17 17:32 ` Daniel Vetter 1 sibling, 0 replies; 6+ messages in thread From: Daniel Vetter @ 2022-03-17 17:32 UTC (permalink / raw) To: Dmitry Osipenko Cc: Rob Herring, Gert Wollny, Tomeu Vizoso, Gustavo Padovan, David Airlie, Thomas Zimmermann, dri-devel, Daniel Stone, Maarten Lankhorst, linux-kernel, Maxime Ripard, Gurchetan Singh, Daniel Vetter, Dmitry Osipenko, Steven Price, virtualization, Chia-I Wu, Alyssa Rosenzweig On Tue, Mar 15, 2022 at 01:42:51AM +0300, Dmitry Osipenko wrote: > Introduce a common DRM SHMEM shrinker. It reduces code duplication among DRM drivers and also handles the complicated locking for them. This is an initial version of the shrinker that covers the basic needs of GPU drivers. > > This patch is based on a couple of ideas borrowed from Rob Clark's MSM shrinker and Thomas Zimmermann's variant of the SHMEM shrinker. > > GPU drivers that want to use the generic DRM memory shrinker must support generic GEM reservations. 
> > Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com> > Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> > --- > drivers/gpu/drm/drm_gem_shmem_helper.c | 194 +++++++++++++++++++++++++ > include/drm/drm_device.h | 4 + > include/drm/drm_gem.h | 11 ++ > include/drm/drm_gem_shmem_helper.h | 25 ++++ > 4 files changed, 234 insertions(+) > > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c > index 37009418cd28..35be2ee98f11 100644 > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c > @@ -139,6 +139,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) > { > struct drm_gem_object *obj = &shmem->base; > > + /* take out shmem GEM object from the memory shrinker */ > + drm_gem_shmem_madvise(shmem, 0); > + > WARN_ON(shmem->vmap_use_count); > > if (obj->import_attach) { > @@ -163,6 +166,42 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) > } > EXPORT_SYMBOL_GPL(drm_gem_shmem_free); > > +static void drm_gem_shmem_update_purgeable_status(struct drm_gem_shmem_object *shmem) > +{ > + struct drm_gem_object *obj = &shmem->base; > + struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker; > + size_t page_count = obj->size >> PAGE_SHIFT; > + > + if (!gem_shrinker || obj->import_attach || !obj->funcs->purge) > + return; > + > + mutex_lock(&shmem->vmap_lock); > + mutex_lock(&shmem->pages_lock); > + mutex_lock(&gem_shrinker->lock); Uh this is just terrible I think. Can't we move shmem helpers over to reasonable locking, i.e. per-object dma_resv_lock for everything? I know it's a pile of work, but I think we're way past the point with things like this popping up where we should just bite that bullet. I discussed the full thing with Daniel Stone, but maybe a joint refresher on irc would be a good thing. 
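The bookkeeping itself can be modelled in isolation to show it doesn't inherently need three nested mutexes. A userspace sketch (illustrative names, not from the patch; a single lock stands in for per-object dma_resv-style locking, and "on_lru" stands in for list membership on gem_shrinker->lru):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

struct mock_shrinker {
	pthread_mutex_t lock;
	unsigned long shrinkable_count;
};

struct mock_shmem {
	struct mock_shrinker *shrinker;
	unsigned long page_count;
	int madv;	/* <0 purged, 0 in use, >0 purgeable */
	bool on_lru;
};

/* Same madv state machine as the quoted
 * drm_gem_shmem_update_purgeable_status(), under one lock. */
static void update_purgeable_status(struct mock_shmem *shmem)
{
	struct mock_shrinker *s = shmem->shrinker;

	pthread_mutex_lock(&s->lock);
	if (shmem->madv < 0) {
		/* already purged: just drop it from the LRU; the real
		 * code subtracts the pages in the scan loop when they
		 * are actually freed */
		shmem->on_lru = false;
	} else if (shmem->madv > 0) {
		if (!shmem->on_lru) {
			s->shrinkable_count += shmem->page_count;
			shmem->on_lru = true;
		}
	} else if (shmem->on_lru) {
		shmem->on_lru = false;
		s->shrinkable_count -= shmem->page_count;
	}
	pthread_mutex_unlock(&s->lock);
}
```

This is only a sketch of the accounting, not of the full locking problem (vmap and page refcounts still have their own lifetimes), but it shows the state transitions the helper is maintaining.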
-Daniel > + > + if (shmem->madv < 0) { > + list_del_init(&shmem->madv_list); > + goto unlock; > + } else if (shmem->madv > 0) { > + if (!list_empty(&shmem->madv_list)) > + goto unlock; > + > + WARN_ON(gem_shrinker->shrinkable_count + page_count < page_count); > + gem_shrinker->shrinkable_count += page_count; > + > + list_add_tail(&shmem->madv_list, &gem_shrinker->lru); > + } else if (!list_empty(&shmem->madv_list)) { > + list_del_init(&shmem->madv_list); > + > + WARN_ON(gem_shrinker->shrinkable_count < page_count); > + gem_shrinker->shrinkable_count -= page_count; > + } > +unlock: > + mutex_unlock(&gem_shrinker->lock); > + mutex_unlock(&shmem->pages_lock); > + mutex_unlock(&shmem->vmap_lock); > +} > + > static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) > { > struct drm_gem_object *obj = &shmem->base; > @@ -366,6 +405,8 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, > ret = drm_gem_shmem_vmap_locked(shmem, map); > mutex_unlock(&shmem->vmap_lock); > > + drm_gem_shmem_update_purgeable_status(shmem); > + > return ret; > } > EXPORT_SYMBOL(drm_gem_shmem_vmap); > @@ -409,6 +450,8 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, > mutex_lock(&shmem->vmap_lock); > drm_gem_shmem_vunmap_locked(shmem, map); > mutex_unlock(&shmem->vmap_lock); > + > + drm_gem_shmem_update_purgeable_status(shmem); > } > EXPORT_SYMBOL(drm_gem_shmem_vunmap); > > @@ -451,6 +494,8 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) > > mutex_unlock(&shmem->pages_lock); > > + drm_gem_shmem_update_purgeable_status(shmem); > + > return (madv >= 0); > } > EXPORT_SYMBOL(drm_gem_shmem_madvise); > @@ -763,6 +808,155 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, > } > EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table); > > +static struct drm_gem_shmem_shrinker * > +to_drm_shrinker(struct shrinker *shrinker) > +{ > + return container_of(shrinker, struct drm_gem_shmem_shrinker, base); > +} > + > +static 
unsigned long > +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker, > + struct shrink_control *sc) > +{ > + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker); > + u64 count = gem_shrinker->shrinkable_count; > + > + if (count >= SHRINK_EMPTY) > + return SHRINK_EMPTY - 1; > + > + return count ?: SHRINK_EMPTY; > +} > + > +static unsigned long > +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker, > + struct shrink_control *sc) > +{ > + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker); > + struct drm_gem_shmem_object *shmem; > + struct list_head still_in_list; > + bool lock_contention = true; > + struct drm_gem_object *obj; > + unsigned long freed = 0; > + > + INIT_LIST_HEAD(&still_in_list); > + > + mutex_lock(&gem_shrinker->lock); > + > + while (freed < sc->nr_to_scan) { > + shmem = list_first_entry_or_null(&gem_shrinker->lru, > + typeof(*shmem), madv_list); > + if (!shmem) > + break; > + > + obj = &shmem->base; > + list_move_tail(&shmem->madv_list, &still_in_list); > + > + /* > + * If it's in the process of being freed, gem_object->free() > + * may be blocked on lock waiting to remove it. So just > + * skip it. 
> + */ > + if (!kref_get_unless_zero(&obj->refcount)) > + continue; > + > + mutex_unlock(&gem_shrinker->lock); > + > + /* prevent racing with job submission code paths */ > + if (!dma_resv_trylock(obj->resv)) > + goto shrinker_lock; > + > + /* prevent racing with the dma-buf exporting */ > + if (!mutex_trylock(&gem_shrinker->dev->object_name_lock)) > + goto resv_unlock; > + > + if (!mutex_trylock(&shmem->vmap_lock)) > + goto object_name_unlock; > + > + if (!mutex_trylock(&shmem->pages_lock)) > + goto vmap_unlock; > + > + lock_contention = false; > + > + /* check whether h/w uses this object */ > + if (!dma_resv_test_signaled(obj->resv, true)) > + goto pages_unlock; > + > + /* GEM may've become unpurgeable while shrinker was unlocked */ > + if (!drm_gem_shmem_is_purgeable(shmem)) > + goto pages_unlock; > + > + freed += obj->funcs->purge(obj); > +pages_unlock: > + mutex_unlock(&shmem->pages_lock); > +vmap_unlock: > + mutex_unlock(&shmem->vmap_lock); > +object_name_unlock: > + mutex_unlock(&gem_shrinker->dev->object_name_lock); > +resv_unlock: > + dma_resv_unlock(obj->resv); > +shrinker_lock: > + drm_gem_object_put(&shmem->base); > + mutex_lock(&gem_shrinker->lock); > + } > + > + list_splice_tail(&still_in_list, &gem_shrinker->lru); > + WARN_ON(gem_shrinker->shrinkable_count < freed); > + gem_shrinker->shrinkable_count -= freed; > + > + mutex_unlock(&gem_shrinker->lock); > + > + if (!freed && !lock_contention) > + return SHRINK_STOP; > + > + return freed; > +} > + > +int drm_gem_shmem_shrinker_register(struct drm_device *dev) > +{ > + struct drm_gem_shmem_shrinker *gem_shrinker; > + int err; > + > + if (WARN_ON(dev->shmem_shrinker)) > + return -EBUSY; > + > + gem_shrinker = kzalloc(sizeof(*gem_shrinker), GFP_KERNEL); > + if (!gem_shrinker) > + return -ENOMEM; > + > + gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects; > + gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects; > + gem_shrinker->base.seeks = DEFAULT_SEEKS; > + 
gem_shrinker->dev = dev; > + > + INIT_LIST_HEAD(&gem_shrinker->lru); > + mutex_init(&gem_shrinker->lock); > + > + dev->shmem_shrinker = gem_shrinker; > + > + err = register_shrinker(&gem_shrinker->base); > + if (err) { > + dev->shmem_shrinker = NULL; > + kfree(gem_shrinker); > + return err; > + } > + > + return 0; > +} > +EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_register); > + > +void drm_gem_shmem_shrinker_unregister(struct drm_device *dev) > +{ > + struct drm_gem_shmem_shrinker *gem_shrinker = dev->shmem_shrinker; > + > + if (gem_shrinker) { > + unregister_shrinker(&gem_shrinker->base); > + mutex_destroy(&gem_shrinker->lock); > + dev->shmem_shrinker = NULL; > + kfree(gem_shrinker); > + } > +} > +EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_unregister); > + > MODULE_DESCRIPTION("DRM SHMEM memory-management helpers"); > MODULE_IMPORT_NS(DMA_BUF); > MODULE_LICENSE("GPL v2"); > diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h > index 9923c7a6885e..929546cad894 100644 > --- a/include/drm/drm_device.h > +++ b/include/drm/drm_device.h > @@ -16,6 +16,7 @@ struct drm_vblank_crtc; > struct drm_vma_offset_manager; > struct drm_vram_mm; > struct drm_fb_helper; > +struct drm_gem_shmem_shrinker; > > struct inode; > > @@ -277,6 +278,9 @@ struct drm_device { > /** @vram_mm: VRAM MM memory manager */ > struct drm_vram_mm *vram_mm; > > + /** @shmem_shrinker: SHMEM GEM memory shrinker */ > + struct drm_gem_shmem_shrinker *shmem_shrinker; > + > /** > * @switch_power_state: > * > diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h > index e2941cee14b6..cdb99cfbf0bc 100644 > --- a/include/drm/drm_gem.h > +++ b/include/drm/drm_gem.h > @@ -172,6 +172,17 @@ struct drm_gem_object_funcs { > * This is optional but necessary for mmap support. > */ > const struct vm_operations_struct *vm_ops; > + > + /** > + * @purge: > + * > + * Releases the GEM object's allocated backing storage to the system. 
> + * > + * Returns the number of pages that have been freed by purging the GEM object. > + * > + * This callback is used by the GEM shrinker. > + */ > + unsigned long (*purge)(struct drm_gem_object *obj); > }; > > /** > diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h > index d0a57853c188..455254f131f6 100644 > --- a/include/drm/drm_gem_shmem_helper.h > +++ b/include/drm/drm_gem_shmem_helper.h > @@ -6,6 +6,7 @@ > #include <linux/fs.h> > #include <linux/mm.h> > #include <linux/mutex.h> > +#include <linux/shrinker.h> > > #include <drm/drm_file.h> > #include <drm/drm_gem.h> > @@ -15,6 +16,7 @@ > struct dma_buf_attachment; > struct drm_mode_create_dumb; > struct drm_printer; > +struct drm_device; > struct sg_table; > > /** > @@ -272,6 +274,29 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v > return drm_gem_shmem_mmap(shmem, vma); > } > > +/** > + * struct drm_gem_shmem_shrinker - Generic memory shrinker for shmem GEMs > + */ > +struct drm_gem_shmem_shrinker { > + /** @base: Shrinker for purging shmem GEM objects */ > + struct shrinker base; > + > + /** @lock: Protects @lru */ > + struct mutex lock; > + > + /** @lru: List of shmem GEM objects available for purging */ > + struct list_head lru; > + > + /** @dev: DRM device that uses this shrinker */ > + struct drm_device *dev; > + > + /** @shrinkable_count: Count of shmem GEM pages to be purged */ > + u64 shrinkable_count; > +}; > + > +int drm_gem_shmem_shrinker_register(struct drm_device *dev); > +void drm_gem_shmem_shrinker_unregister(struct drm_device *dev); > + > /* > * Driver ops > */ > -- > 2.35.1 > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
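One subtle bit worth spelling out in the quoted count_objects() is the clamping: the shrinker core overloads the top of the return range as magic values, so a genuine count must never collide with them. A standalone userspace sketch (SHRINK_EMPTY here mirrors the kernel's `(~0UL - 1)` definition; SHRINK_STOP is `~0UL`):

```c
#include <assert.h>
#include <stdint.h>

#define SHRINK_EMPTY (~0UL - 1)	/* as in include/linux/shrinker.h */

/* The mapping the patch implements: zero pages means "nothing to
 * reclaim", signalled via SHRINK_EMPTY, and a real count large enough
 * to collide with the magic values is clamped just below them. */
static unsigned long count_objects(uint64_t shrinkable_count)
{
	if (shrinkable_count >= SHRINK_EMPTY)
		return SHRINK_EMPTY - 1;

	return shrinkable_count ? (unsigned long)shrinkable_count
				: SHRINK_EMPTY;
}
```

Without the clamp, a (theoretical) huge page count could be misread by the core as SHRINK_EMPTY or SHRINK_STOP.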
Thread overview: 6+ messages
[not found] <20220314224253.236359-1-dmitry.osipenko@collabora.com>
[not found] ` <20220314224253.236359-8-dmitry.osipenko@collabora.com>
2022-03-15 12:43 ` [PATCH v2 7/8] drm/virtio: Support memory shrinking Emil Velikov
2022-03-15 12:47 ` [PATCH v2 0/8] Add memory shrinker to VirtIO-GPU DRM driver Emil Velikov
[not found] ` <20220314224253.236359-5-dmitry.osipenko@collabora.com>
2022-03-16 12:41 ` [PATCH v2 4/8] drm/virtio: Improve DMA API usage for shmem BOs Robin Murphy
[not found] ` <20220314224253.236359-7-dmitry.osipenko@collabora.com>
2022-03-16 20:00 ` [PATCH v2 6/8] drm/shmem-helper: Add generic memory shrinker Rob Clark
[not found] ` <be3b09ff-08ea-3e13-7d8c-06af6fffbd8f@collabora.com>
2022-03-17 16:13 ` Rob Clark
2022-03-17 17:32 ` Daniel Vetter