From: Boris Brezillon <boris.brezillon@collabora.com>
To: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: "Mark Rutland" <mark.rutland@arm.com>,
"Emma Anholt" <emma@anholt.net>,
"Peter Zijlstra" <peterz@infradead.org>,
dri-devel@lists.freedesktop.org,
"Gurchetan Singh" <gurchetansingh@chromium.org>,
"Gerd Hoffmann" <kraxel@redhat.com>,
kernel@collabora.com, "Will Deacon" <will@kernel.org>,
"David Airlie" <airlied@gmail.com>,
"Steven Price" <steven.price@arm.com>,
intel-gfx@lists.freedesktop.org,
"Daniel Vetter" <daniel@ffwll.ch>,
"Boqun Feng" <boqun.feng@gmail.com>,
"Maxime Ripard" <mripard@kernel.org>,
"Melissa Wen" <mwen@igalia.com>,
virtualization@lists.linux-foundation.org,
linux-kernel@vger.kernel.org, "Chia-I Wu" <olvaffe@gmail.com>,
"Qiang Yu" <yuq825@gmail.com>,
"Thomas Zimmermann" <tzimmermann@suse.de>,
"Christian König" <christian.koenig@amd.com>
Subject: Re: [Intel-gfx] [PATCH v15 17/23] drm/shmem-helper: Add and use drm_gem_shmem_resv_assert_held() helper
Date: Mon, 28 Aug 2023 12:12:39 +0200 [thread overview]
Message-ID: <20230828121239.78a180e6@collabora.com> (raw)
In-Reply-To: <20230827175449.1766701-18-dmitry.osipenko@collabora.com>
On Sun, 27 Aug 2023 20:54:43 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> In preparation for adding the drm-shmem memory shrinker, move all
> reservation locking lockdep checks over to a new
> drm_gem_shmem_resv_assert_held() helper. This resolves a spurious
> lockdep warning about wrong locking order vs fs_reclaim code paths
> during freeing of a shmem GEM, where lockdep isn't aware that locking
> contention with fs_reclaim is impossible at that special time.
>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 37 +++++++++++++++++---------
> 1 file changed, 25 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index d96fee3d6166..ca5da976aafa 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -128,6 +128,23 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
> }
> EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
>
> +static void drm_gem_shmem_resv_assert_held(struct drm_gem_shmem_object *shmem)
> +{
> + /*
> + * Destroying the object is a special case: drm_gem_shmem_free()
> + * calls many things that WARN_ON if the obj lock is not held. But
> + * acquiring the obj lock in drm_gem_shmem_free() can cause a locking
> + * order inversion between reservation_ww_class_mutex and fs_reclaim.
> + *
> + * This deadlock is not actually possible, because no one should
> + * already be holding the lock when drm_gem_shmem_free() is called.
> + * Unfortunately lockdep is not aware of this detail. So when the
> + * refcount drops to zero, we pretend it is already locked.
> + */
> + if (kref_read(&shmem->base.refcount))
> + dma_resv_assert_held(shmem->base.resv);
> +}
> +
> /**
> * drm_gem_shmem_free - Free resources associated with a shmem GEM object
> * @shmem: shmem GEM object to free
> @@ -142,8 +159,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> if (obj->import_attach) {
> drm_prime_gem_destroy(obj, shmem->sgt);
> } else if (!shmem->imported_sgt) {
> - dma_resv_lock(shmem->base.resv, NULL);
> -
> drm_WARN_ON(obj->dev, kref_read(&shmem->vmap_use_count));
>
> if (shmem->sgt) {
> @@ -156,8 +171,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> drm_gem_shmem_put_pages_locked(shmem);
AFAICT, drm_gem_shmem_put_pages_locked() is the only function called in
the free path that would complain about the resv lock not being held. I
think I'd feel more comfortable if we added a drm_gem_shmem_free_pages()
function that does everything drm_gem_shmem_put_pages_locked() does
except the lock_held() check and the refcount decrement, and called it
here (and from drm_gem_shmem_put_pages_locked()). That way we can keep
using dma_resv_assert_held() instead of having our own variant.
>
> drm_WARN_ON(obj->dev, kref_read(&shmem->pages_use_count));
> -
> - dma_resv_unlock(shmem->base.resv);
> }
>
> drm_gem_object_release(obj);
> @@ -170,7 +183,7 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> struct drm_gem_object *obj = &shmem->base;
> struct page **pages;
>
> - dma_resv_assert_held(shmem->base.resv);
> + drm_gem_shmem_resv_assert_held(shmem);
>
> if (kref_get_unless_zero(&shmem->pages_use_count))
> return 0;
> @@ -228,7 +241,7 @@ static void drm_gem_shmem_kref_release_pages(struct kref *kref)
> */
> void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> {
> - dma_resv_assert_held(shmem->base.resv);
> + drm_gem_shmem_resv_assert_held(shmem);
>
> kref_put(&shmem->pages_use_count, drm_gem_shmem_kref_release_pages);
> }
> @@ -252,7 +265,7 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
> {
> int ret;
>
> - dma_resv_assert_held(shmem->base.resv);
> + drm_gem_shmem_resv_assert_held(shmem);
>
> if (kref_get_unless_zero(&shmem->pages_pin_count))
> return 0;
> @@ -276,7 +289,7 @@ static void drm_gem_shmem_kref_unpin_pages(struct kref *kref)
>
> static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
> {
> - dma_resv_assert_held(shmem->base.resv);
> + drm_gem_shmem_resv_assert_held(shmem);
>
> kref_put(&shmem->pages_pin_count, drm_gem_shmem_kref_unpin_pages);
> }
> @@ -357,7 +370,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> - dma_resv_assert_held(shmem->base.resv);
> + drm_gem_shmem_resv_assert_held(shmem);
>
> if (kref_get_unless_zero(&shmem->vmap_use_count)) {
> iosys_map_set_vaddr(map, shmem->vaddr);
> @@ -426,7 +439,7 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> if (obj->import_attach) {
> dma_buf_vunmap(obj->import_attach->dmabuf, map);
> } else {
> - dma_resv_assert_held(shmem->base.resv);
> + drm_gem_shmem_resv_assert_held(shmem);
> kref_put(&shmem->vmap_use_count, drm_gem_shmem_kref_vunmap);
> }
>
> @@ -462,7 +475,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
> */
> int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv)
> {
> - dma_resv_assert_held(shmem->base.resv);
> + drm_gem_shmem_resv_assert_held(shmem);
>
> if (shmem->madv >= 0)
> shmem->madv = madv;
> @@ -478,7 +491,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
> struct drm_gem_object *obj = &shmem->base;
> struct drm_device *dev = obj->dev;
>
> - dma_resv_assert_held(shmem->base.resv);
> + drm_gem_shmem_resv_assert_held(shmem);
>
> drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
>