From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
matthew.auld@intel.com
Subject: [Intel-gfx] [PATCH v6 3/6] drm/i915: Don't pin the object pages during pending vma binds
Date: Fri, 7 Jan 2022 15:23:40 +0100 [thread overview]
Message-ID: <20220107142343.56811-4-thomas.hellstrom@linux.intel.com> (raw)
In-Reply-To: <20220107142343.56811-1-thomas.hellstrom@linux.intel.com>
A pin-count is already held by vma->pages, so taking an additional pin
during async binds is not necessary.
When we introduce async unbinding we will have other means of keeping
the object pages alive.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/i915_vma.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 1d4e448d22d9..8fa3e0b2fe26 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -305,10 +305,8 @@ static void __vma_release(struct dma_fence_work *work)
 {
 	struct i915_vma_work *vw = container_of(work, typeof(*vw), base);
 
-	if (vw->pinned) {
-		__i915_gem_object_unpin_pages(vw->pinned);
+	if (vw->pinned)
 		i915_gem_object_put(vw->pinned);
-	}
 
 	i915_vm_free_pt_stash(vw->vm, &vw->stash);
 	i915_vm_put(vw->vm);
@@ -477,7 +475,6 @@ int i915_vma_bind(struct i915_vma *vma,
 
 		work->base.dma.error = 0; /* enable the queue_work() */
 
-		__i915_gem_object_pin_pages(vma->obj);
 		work->pinned = i915_gem_object_get(vma->obj);
 	} else {
 		if (vma->obj) {
--
2.31.1
Thread overview: 17+ messages
2022-01-07 14:23 [Intel-gfx] [PATCH v6 0/6] drm/i915: Asynchronous vma unbinding Thomas Hellström
2022-01-07 14:23 ` [Intel-gfx] [PATCH v6 1/6] drm/i915: Initial introduction of vma resources Thomas Hellström
2022-01-07 14:23 ` [Intel-gfx] [PATCH v6 2/6] drm/i915: Use the vma resource as argument for gtt binding / unbinding Thomas Hellström
2022-01-07 14:23 ` Thomas Hellström [this message]
2022-01-07 14:23 ` [Intel-gfx] [PATCH v6 4/6] drm/i915: Use vma resources for async unbinding Thomas Hellström
2022-01-10 13:21 ` Matthew Auld
2022-01-10 14:49 ` Thomas Hellström
2022-01-07 14:23 ` [Intel-gfx] [PATCH v6 5/6] drm/i915: Asynchronous migration selftest Thomas Hellström
2022-01-10 13:59 ` Matthew Auld
2022-01-10 14:36 ` Thomas Hellström
2022-01-10 14:38 ` Matthew Auld
2022-01-07 14:23 ` [Intel-gfx] [PATCH v6 6/6] drm/i915: Use struct vma_resource instead of struct vma_snapshot Thomas Hellström
2022-01-10 14:21 ` Matthew Auld
2022-01-07 14:39 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Asynchronous vma unbinding (rev6) Patchwork
2022-01-07 14:40 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2022-01-07 15:09 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2022-01-07 18:40 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork