Intel-GFX Archive on lore.kernel.org
From: Adrian Larumbe <adrian.larumbe@collabora.com>
To: daniel@ffwll.ch, thomas.hellstrom@linux.intel.com,
	intel-gfx@lists.freedesktop.org
Cc: adrian.larumbe@collabora.com
Subject: [Intel-gfx] [PATCH 0/1] Replace shmem memory region and object backend with TTM
Date: Wed, 27 Apr 2022 12:34:03 +0100	[thread overview]
Message-ID: <20220427113404.401741-1-adrian.larumbe@collabora.com> (raw)

This patch is an attempt at eliminating the old shmem memory region and GEM
object backend, in favour of a TTM-based one that is able to manage objects
placed on both system and local memory.

Known issues:

Many GPU hangs on GEN <= 5 machines. My assumption is that this is a caching
 issue, although throughout the TTM backend code I've tried to handle object
 creation and page acquisition with the same set of caching and coherency
 properties as the old shmem backend.

Objects passed to shmem_create_from_object are somehow not being flushed
 after being written to in lrc_init_state. It seems that with the new
 backend, when pinning an intel_context, either i915_gem_object_pin_map is
 not creating a kernel mapping with the right caching properties or else
 flushing it afterwards has no effect.
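For reference, the write-then-flush sequence I expect here looks roughly
like this (a hedged sketch using the existing i915 helpers, simplified from
what lrc_init_state effectively does when recording the default state):

```c
void *vaddr;

/* Map the object's pages into the kernel with write-back caching. */
vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(vaddr))
	return PTR_ERR(vaddr);

/* ... write the engine's default context image through vaddr ... */

/* Flush the CPU writes so a later shmem_read() observes them. */
__i915_gem_object_flush_map(obj, 0, obj->base.size);
i915_gem_object_unpin_map(obj);
```

With the TTM backend, somewhere in this sequence either the map's caching
mode or the flush seems to go wrong.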

 This leads to a GPU hang because the engine's default state read back with
 shmem_read doesn't reflect what was previously written into it by vmap'ing
 the object's pages. The only workaround I could find was manually marking
 the shmem file's pages dirty and putting them back, but this looks hacky
 and wasteful for big BOs.
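The workaround in question amounts to walking the shmem file's pages and
dirtying each one by hand, roughly as below (a hedged sketch;
__shmem_mark_dirty is a made-up name, but shmem_read_mapping_page,
set_page_dirty and put_page are the existing kernel helpers):

```c
static void __shmem_mark_dirty(struct file *file, size_t len)
{
	struct address_space *mapping = file->f_mapping;
	pgoff_t i, num = DIV_ROUND_UP(len, PAGE_SIZE);

	for (i = 0; i < num; i++) {
		struct page *page;

		/* Look up (or read in) the shmem page at this index. */
		page = shmem_read_mapping_page(mapping, i);
		if (IS_ERR(page))
			continue;

		/* Keep the written data from being dropped. */
		set_page_dirty(page);
		put_page(page);
	}
}
```

For a large BO this touches every page of the object, which is why it feels
like the wrong fix.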

Besides all this, I haven't yet implemented the pread callback for the TTM
object backend, since CI's BAT test list doesn't seem to exercise it.

Adrian Larumbe (1):
  drm/i915: Replace shmem memory region and object backend with TTM

 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c   |  12 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c     |  32 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h   |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_phys.c     |   5 +-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c    | 397 +------------------
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c      | 212 +++++++++-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.h      |   3 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c |  11 +-
 drivers/gpu/drm/i915/gt/shmem_utils.c        |  64 ++-
 drivers/gpu/drm/i915/intel_memory_region.c   |   7 +-
 10 files changed, 333 insertions(+), 412 deletions(-)

-- 
2.35.1



Thread overview: 6+ messages
2022-04-27 11:34 Adrian Larumbe [this message]
2022-04-27 11:34 ` [Intel-gfx] [PATCH 1/1] drm/i915: Replace shmem memory region and object backend with TTM Adrian Larumbe
2022-04-28 18:04   ` Matthew Auld
2022-04-28  8:54 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2022-04-28  9:17 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2022-04-29  9:14 ` [Intel-gfx] [PATCH 0/1] " Tvrtko Ursulin
