From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 6/8] drm/i915: Pin the pages first in shmem prepare read/write
Date: Thu, 09 Jun 2016 16:06:59 +0300
Message-ID: <1465477619.9670.31.camel@linux.intel.com>
In-Reply-To: <1465471779-20765-7-git-send-email-chris@chris-wilson.co.uk>
On Thu, 2016-06-09 at 12:29 +0100, Chris Wilson wrote:
> There is an improbable, but not impossible, case that if we leave the
> pages unpinned as we operate on the object, then somebody may steal the
> lock and change the cache domains after we have already inspected them.
>
Which lock exactly?
> (Whilst here, avail ourselves of the opportunity to take a couple of
> steps to make the two functions look more similar.)
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
> drivers/gpu/drm/i915/i915_gem.c | 88 ++++++++++++++++++++++++-----------------
> 1 file changed, 51 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index ffe3d3e9d69d..8c90b6a12d45 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -571,13 +571,22 @@ int i915_gem_obj_prepare_shmem_read(struct drm_i915_gem_object *obj,
> if ((obj->ops->flags & I915_GEM_OBJECT_HAS_STRUCT_PAGE) == 0)
> return -EINVAL;
>
> + ret = i915_gem_object_get_pages(obj);
> + if (ret)
> + return ret;
> +
There's still time for the pages to disappear at this point if somebody
really is racing with us, and then the BUG_ON in
i915_gem_object_pin_pages() is imminent. So I'm not sure which lock you
mean.
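
To spell out the window I am worried about (purely an illustration of
the code as it stands after this patch; the BUG_ON is the
BUG_ON(obj->pages == NULL) check in i915_gem_object_pin_pages()):

	ret = i915_gem_object_get_pages(obj);	/* pages exist here... */
	if (ret)
		return ret;

	/* ...but if nothing serializes us against the shrinker, they
	 * could be reaped again before we take our pin reference...
	 */
	i915_gem_object_pin_pages(obj);	/* ...and BUG_ON(!obj->pages) fires */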
> + i915_gem_object_pin_pages(obj);
> +
> + if (obj->base.write_domain == I915_GEM_DOMAIN_CPU)
> + goto out;
> +
> + ret = i915_gem_object_wait_rendering(obj, true);
> + if (ret)
> + goto err_unpin;
> +
> i915_gem_object_flush_gtt_write_domain(obj);
>
> if (!(obj->base.read_domains & I915_GEM_DOMAIN_CPU)) {
> - ret = i915_gem_object_wait_rendering(obj, true);
> - if (ret)
> - return ret;
> -
> /* If we're not in the cpu read domain, set ourself into the gtt
> * read domain and manually flush cachelines (if required). This
> * optimizes for the case when the gpu will dirty the data
> @@ -586,26 +595,25 @@ int i915_gem_obj_prepare_shmem_read(struct drm_i915_gem_object *obj,
> obj->cache_level);
> }
>
> - ret = i915_gem_object_get_pages(obj);
> - if (ret)
> - return ret;
> -
> - i915_gem_object_pin_pages(obj);
> -
> if (*needs_clflush && !boot_cpu_has(X86_FEATURE_CLFLUSH)) {
> ret = i915_gem_object_set_to_cpu_domain(obj, false);
> - if (ret) {
> - i915_gem_object_unpin_pages(obj);
> - return ret;
> - }
> + if (ret)
> + goto err_unpin;
> +
> *needs_clflush = 0;
> }
>
> +out:
> + /* return with the pages pinned */
> return 0;
> +
> +err_unpin:
> + i915_gem_object_unpin_pages(obj);
> + return ret;
> }
>
> int i915_gem_obj_prepare_shmem_write(struct drm_i915_gem_object *obj,
> - unsigned *needs_clflush)
> + unsigned *needs_clflush)
> {
> int ret;
>
> @@ -613,20 +621,27 @@ int i915_gem_obj_prepare_shmem_write(struct drm_i915_gem_object *obj,
> if ((obj->ops->flags & I915_GEM_OBJECT_HAS_STRUCT_PAGE) == 0)
> return -EINVAL;
>
> - i915_gem_object_flush_gtt_write_domain(obj);
> + ret = i915_gem_object_get_pages(obj);
> + if (ret)
> + return ret;
>
> - if (obj->base.write_domain != I915_GEM_DOMAIN_CPU) {
> - ret = i915_gem_object_wait_rendering(obj, false);
> - if (ret)
> - return ret;
> + i915_gem_object_pin_pages(obj);
>
> - /* If we're not in the cpu write domain, set ourself into the
> - * gtt write domain and manually flush cachelines (as required).
> - * This optimizes for the case when the gpu will use the data
> - * right away and we therefore have to clflush anyway.
> - */
> - *needs_clflush |= cpu_write_needs_clflush(obj) << 1;
> - }
> + if (obj->base.write_domain == I915_GEM_DOMAIN_CPU)
> + goto out;
> +
> + ret = i915_gem_object_wait_rendering(obj, false);
> + if (ret)
> + goto err_unpin;
> +
> + i915_gem_object_flush_gtt_write_domain(obj);
> +
> + /* If we're not in the cpu write domain, set ourself into the
> + * gtt write domain and manually flush cachelines (as required).
> + * This optimizes for the case when the gpu will use the data
> + * right away and we therefore have to clflush anyway.
> + */
> + *needs_clflush |= cpu_write_needs_clflush(obj) << 1;
Use ?: to keep the code readable.
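Something like this, perhaps (just a sketch; I am assuming
CLFLUSH_AFTER is the BIT(1) value that the << 1 is open-coding):

	*needs_clflush |= cpu_write_needs_clflush(obj) ?
			  CLFLUSH_AFTER : 0;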
>
> /* Same trick applies to invalidate partially written cachelines read
> * before writing.
> @@ -635,27 +650,26 @@ int i915_gem_obj_prepare_shmem_write(struct drm_i915_gem_object *obj,
> *needs_clflush |= !cpu_cache_is_coherent(obj->base.dev,
> obj->cache_level);
>
> - ret = i915_gem_object_get_pages(obj);
> - if (ret)
> - return ret;
> -
> - i915_gem_object_pin_pages(obj);
> -
> if (*needs_clflush && !boot_cpu_has(X86_FEATURE_CLFLUSH)) {
> ret = i915_gem_object_set_to_cpu_domain(obj, true);
> - if (ret) {
> - i915_gem_object_unpin_pages(obj);
> - return ret;
> - }
> + if (ret)
> + goto err_unpin;
> +
> *needs_clflush = 0;
> }
>
> if ((*needs_clflush & CLFLUSH_AFTER) == 0)
> obj->cache_dirty = true;
>
> +out:
> intel_fb_obj_invalidate(obj, ORIGIN_CPU);
> obj->dirty = 1;
> + /* return with the pages pinned */
> return 0;
> +
> +err_unpin:
> + i915_gem_object_unpin_pages(obj);
> + return ret;
> }
>
> /* Per-page copy function for the shmem pread fastpath.
--
Joonas Lahtinen
Open Source Technology Center
Intel Corporation