From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: yu.dai@intel.com, intel-gfx@lists.freedesktop.org
Cc: daniel.vetter@ffwll.ch
Subject: Re: [PATCH v2 1/2] drm/i915: Add i915_gem_object_vmap to map GEM object to virtual space
Date: Fri, 19 Feb 2016 11:07:46 +0000
Message-ID: <56C6F782.1090102@linux.intel.com>
In-Reply-To: <1455820298-5463-2-git-send-email-yu.dai@intel.com>
On 18/02/16 18:31, yu.dai@intel.com wrote:
> From: Alex Dai <yu.dai@intel.com>
>
> There are several places inside the driver where a GEM object is mapped
> into kernel virtual space. The mapping is done either for the whole
> object or for a certain page range of it.
>
> This patch introduces a function, i915_gem_object_vmap(), to do this job.
>
> v2: Use obj->pages->nents for iteration within i915_gem_object_vmap;
> break when all desired pages have been gathered. The caller needs to
> pass in the actual page count. (Tvrtko Ursulin)
Looks OK to me. Just one more thing: it would be good to add a WARN_ON
and bail out if the pages are not pinned, since the function is now
public and a runtime check is a bit stronger than kerneldoc.
With that added:
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Regards,
Tvrtko
> Signed-off-by: Alex Dai <yu.dai@intel.com>
> Cc: Dave Gordon <david.s.gordon@intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Signed-off-by: Alex Dai <yu.dai@intel.com>
> ---
> drivers/gpu/drm/i915/i915_cmd_parser.c | 28 +-------------------
> drivers/gpu/drm/i915/i915_drv.h | 3 +++
> drivers/gpu/drm/i915/i915_gem.c | 47 +++++++++++++++++++++++++++++++++
> drivers/gpu/drm/i915/i915_gem_dmabuf.c | 16 +++--------
> drivers/gpu/drm/i915/intel_ringbuffer.c | 24 ++---------------
> 5 files changed, 56 insertions(+), 62 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
> index 814d894..915e8c1 100644
> --- a/drivers/gpu/drm/i915/i915_cmd_parser.c
> +++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
> @@ -863,37 +863,11 @@ find_reg(const struct drm_i915_reg_descriptor *table,
> static u32 *vmap_batch(struct drm_i915_gem_object *obj,
> unsigned start, unsigned len)
> {
> - int i;
> - void *addr = NULL;
> - struct sg_page_iter sg_iter;
> int first_page = start >> PAGE_SHIFT;
> int last_page = (len + start + 4095) >> PAGE_SHIFT;
> int npages = last_page - first_page;
> - struct page **pages;
> -
> - pages = drm_malloc_ab(npages, sizeof(*pages));
> - if (pages == NULL) {
> - DRM_DEBUG_DRIVER("Failed to get space for pages\n");
> - goto finish;
> - }
> -
> - i = 0;
> - for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, first_page) {
> - pages[i++] = sg_page_iter_page(&sg_iter);
> - if (i == npages)
> - break;
> - }
> -
> - addr = vmap(pages, i, 0, PAGE_KERNEL);
> - if (addr == NULL) {
> - DRM_DEBUG_DRIVER("Failed to vmap pages\n");
> - goto finish;
> - }
>
> -finish:
> - if (pages)
> - drm_free_large(pages);
> - return (u32*)addr;
> + return (u32*)i915_gem_object_vmap(obj, first_page, npages);
> }
>
> /* Returns a vmap'd pointer to dest_obj, which the caller must unmap */
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 6644c2e..5b00a6a 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2899,6 +2899,9 @@ struct drm_i915_gem_object *i915_gem_object_create_from_data(
> struct drm_device *dev, const void *data, size_t size);
> void i915_gem_free_object(struct drm_gem_object *obj);
> void i915_gem_vma_destroy(struct i915_vma *vma);
> +void *i915_gem_object_vmap(struct drm_i915_gem_object *obj,
> + unsigned int first,
> + unsigned int npages);
>
> /* Flags used by pin/bind&friends. */
> #define PIN_MAPPABLE (1<<0)
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index f68f346..4bc0ce7 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -5356,3 +5356,50 @@ fail:
> drm_gem_object_unreference(&obj->base);
> return ERR_PTR(ret);
> }
> +
> +/**
> + * i915_gem_object_vmap - map a GEM obj into kernel virtual space
> + * @obj: the GEM obj to be mapped
> + * @first: index of the first page where mapping starts
> + * @npages: number of pages to map, starting from @first
> + *
> + * Map a given page range of GEM obj into kernel virtual space. The caller must
> + * make sure the associated pages are gathered and pinned before calling this
> + * function. vunmap should be called after use.
> + *
> + * Returns NULL on failure.
> + */
> +void *i915_gem_object_vmap(struct drm_i915_gem_object *obj,
> + unsigned int first,
> + unsigned int npages)
> +{
> + struct sg_page_iter sg_iter;
> + struct page **pages;
> + void *addr;
> + int i;
> +
> + if (first + npages > obj->pages->nents) {
> + DRM_DEBUG_DRIVER("Invalid page count\n");
> + return NULL;
> + }
> +
> + pages = drm_malloc_ab(npages, sizeof(*pages));
> + if (pages == NULL) {
> + DRM_DEBUG_DRIVER("Failed to get space for pages\n");
> + return NULL;
> + }
> +
> + i = 0;
> + for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, first) {
> + pages[i++] = sg_page_iter_page(&sg_iter);
> + if (i == npages)
> + break;
> + }
> +
> + addr = vmap(pages, npages, 0, PAGE_KERNEL);
> + if (addr == NULL)
> + DRM_DEBUG_DRIVER("Failed to vmap pages\n");
> + drm_free_large(pages);
> +
> + return addr;
> +}
> diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
> index 1f3eef6..6133036 100644
> --- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
> @@ -110,9 +110,7 @@ static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
> {
> struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
> struct drm_device *dev = obj->base.dev;
> - struct sg_page_iter sg_iter;
> - struct page **pages;
> - int ret, i;
> + int ret;
>
> ret = i915_mutex_lock_interruptible(dev);
> if (ret)
> @@ -131,16 +129,8 @@ static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
>
> ret = -ENOMEM;
>
> - pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
> - if (pages == NULL)
> - goto err_unpin;
> -
> - i = 0;
> - for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
> - pages[i++] = sg_page_iter_page(&sg_iter);
> -
> - obj->dma_buf_vmapping = vmap(pages, i, 0, PAGE_KERNEL);
> - drm_free_large(pages);
> + obj->dma_buf_vmapping = i915_gem_object_vmap(obj, 0,
> + dma_buf->size >> PAGE_SHIFT);
>
> if (!obj->dma_buf_vmapping)
> goto err_unpin;
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index 45ce45a..93666e9 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -2064,27 +2064,6 @@ void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
> i915_gem_object_ggtt_unpin(ringbuf->obj);
> }
>
> -static u32 *vmap_obj(struct drm_i915_gem_object *obj)
> -{
> - struct sg_page_iter sg_iter;
> - struct page **pages;
> - void *addr;
> - int i;
> -
> - pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
> - if (pages == NULL)
> - return NULL;
> -
> - i = 0;
> - for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
> - pages[i++] = sg_page_iter_page(&sg_iter);
> -
> - addr = vmap(pages, i, 0, PAGE_KERNEL);
> - drm_free_large(pages);
> -
> - return addr;
> -}
> -
> int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
> struct intel_ringbuffer *ringbuf)
> {
> @@ -2103,7 +2082,8 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
> return ret;
> }
>
> - ringbuf->virtual_start = vmap_obj(obj);
> + ringbuf->virtual_start = i915_gem_object_vmap(obj, 0,
> + ringbuf->size >> PAGE_SHIFT);
> if (ringbuf->virtual_start == NULL) {
> i915_gem_object_ggtt_unpin(obj);
> return -ENOMEM;
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
Thread overview: 9+ messages
2016-02-18 18:31 [PATCH v2 0/2] Add i915_gem_object_vmap yu.dai
2016-02-18 18:31 ` [PATCH v2 1/2] drm/i915: Add i915_gem_object_vmap to map GEM object to virtual space yu.dai
2016-02-18 21:05 ` Chris Wilson
2016-02-18 21:30 ` Yu Dai
2016-02-19 11:07 ` Tvrtko Ursulin [this message]
2016-02-29 12:03 ` Tvrtko Ursulin
2016-02-18 18:31 ` [PATCH v2 2/2] drm/i915/guc: Simplify code by keeping vmap of guc_client object yu.dai
2016-02-19 11:10 ` Tvrtko Ursulin
2016-02-19 8:06 ` ✗ Fi.CI.BAT: failure for Add i915_gem_object_vmap (rev2) Patchwork