From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 01/62] drm/i915: Only start retire worker when idle
Date: Tue, 07 Jun 2016 14:31:07 +0300
Message-ID: <1465299067.5626.15.camel@linux.intel.com>
In-Reply-To: <1464971847-15809-2-git-send-email-chris@chris-wilson.co.uk>
On pe, 2016-06-03 at 17:36 +0100, Chris Wilson wrote:
> The retire worker is a low frequency task that makes sure we retire
> outstanding requests if userspace is being lax. We only need to start it
> once as it remains active until the GPU is idle, so do a cheap test
> before the more expensive queue_work(). A consequence of this is that we
> need correct locking in the worker to make the hot path of request
> submission cheap. To keep the symmetry and keep hangcheck strictly bound
> by the GPU's wakelock, we move the cancel_sync(hangcheck) to the idle
> worker before dropping the wakelock.
>
> v2: Guard against RCU fouling the breadcrumbs bottom-half whilst we kick
> the waiter.
> v3: Remove the wakeref assertion squelching (now we hold a wakeref for
> the hangcheck, any rpm error there is genuine).
> v4: To prevent excess work when retiring requests, we split the busy
> flag into two, a boolean to denote whether we hold the wakeref and a
> bitmask of active engines.
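Just to make sure I read the v4 split correctly, here is a throwaway
user-space model of the gt.awake/gt.active_engines bookkeeping; all
names and helpers below are mine, not from the patch, it is only meant
to spell out the intended invariant (wakeref taken once on the
idle->busy edge, dropped only after every engine has drained):

	#include <assert.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define ENGINE_FLAG(id) (1u << (id))

	struct gt_state {
		unsigned int active_engines;	/* engines with outstanding requests */
		bool awake;			/* holding the prolonged wakeref? */
		int wakeref;			/* stand-in for the rpm refcount */
	};

	static void mark_busy(struct gt_state *gt, int engine_id)
	{
		gt->active_engines |= ENGINE_FLAG(engine_id);
		if (gt->awake)
			return;		/* cheap test before queue_work() */
		gt->wakeref++;		/* intel_runtime_pm_get_noresume() */
		gt->awake = true;	/* retire_work would be queued here */
	}

	static void retire_engine(struct gt_state *gt, int engine_id)
	{
		/* request_list for this engine has drained */
		gt->active_engines &= ~ENGINE_FLAG(engine_id);
	}

	static void idle_work(struct gt_state *gt)
	{
		if (gt->active_engines)
			return;		/* new work arrived, stay awake */
		gt->awake = false;
		gt->wakeref--;		/* intel_runtime_pm_put() */
	}

	int main(void)
	{
		struct gt_state gt = { 0 };

		mark_busy(&gt, 0);	/* first request takes the wakeref */
		mark_busy(&gt, 2);	/* second engine does not */
		assert(gt.wakeref == 1);

		retire_engine(&gt, 0);
		idle_work(&gt);		/* engine 2 still busy, keep wakeref */
		assert(gt.awake && gt.wakeref == 1);

		retire_engine(&gt, 2);
		idle_work(&gt);		/* fully idle, drop the wakeref */
		assert(!gt.awake && gt.wakeref == 0);

		printf("model ok\n");
		return 0;
	}

If that matches your intent, the split itself looks good to me.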
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> References: https://bugs.freedesktop.org/show_bug.cgi?id=88437
> ---
> drivers/gpu/drm/i915/i915_debugfs.c | 5 +-
> drivers/gpu/drm/i915/i915_drv.c | 2 -
> drivers/gpu/drm/i915/i915_drv.h | 56 +++++++-------
> drivers/gpu/drm/i915/i915_gem.c | 114 ++++++++++++++++++-----------
> drivers/gpu/drm/i915/i915_gem_execbuffer.c | 6 ++
> drivers/gpu/drm/i915/i915_irq.c | 15 +---
> drivers/gpu/drm/i915/intel_display.c | 26 -------
> drivers/gpu/drm/i915/intel_pm.c | 2 +-
> drivers/gpu/drm/i915/intel_ringbuffer.h | 4 +-
> 9 files changed, 115 insertions(+), 115 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index 72dae6fb0aa2..dd6cf222e8f5 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -2437,7 +2437,8 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
> struct drm_file *file;
>
> seq_printf(m, "RPS enabled? %d\n", dev_priv->rps.enabled);
> - seq_printf(m, "GPU busy? %d\n", dev_priv->mm.busy);
> + seq_printf(m, "GPU busy? %s [%x]\n",
> + yesno(dev_priv->gt.awake), dev_priv->gt.active_engines);
> seq_printf(m, "CPU waiting? %d\n", count_irq_waiters(dev_priv));
> seq_printf(m, "Frequency requested %d; min hard:%d, soft:%d; max soft:%d, hard:%d\n",
> intel_gpu_freq(dev_priv, dev_priv->rps.cur_freq),
> @@ -2777,7 +2778,7 @@ static int i915_runtime_pm_status(struct seq_file *m, void *unused)
> if (!HAS_RUNTIME_PM(dev_priv))
> seq_puts(m, "Runtime power management not supported\n");
>
> - seq_printf(m, "GPU idle: %s\n", yesno(!dev_priv->mm.busy));
> + seq_printf(m, "GPU idle: %s\n", yesno(!dev_priv->gt.awake));
> seq_printf(m, "IRQs disabled: %s\n",
> yesno(!intel_irqs_enabled(dev_priv)));
> #ifdef CONFIG_PM
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index 3c8c75c77574..5f7208d2fdbf 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -2697,8 +2697,6 @@ static int intel_runtime_suspend(struct device *device)
> i915_gem_release_all_mmaps(dev_priv);
> mutex_unlock(&dev->struct_mutex);
>
> - cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
> -
> intel_guc_suspend(dev);
>
> intel_suspend_gt_powersave(dev_priv);
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 88d9242398ce..3f075adf9e84 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1305,37 +1305,11 @@ struct i915_gem_mm {
> struct list_head fence_list;
>
> /**
> - * We leave the user IRQ off as much as possible,
> - * but this means that requests will finish and never
> - * be retired once the system goes idle. Set a timer to
> - * fire periodically while the ring is running. When it
> - * fires, go retire requests.
> - */
> - struct delayed_work retire_work;
> -
> - /**
> - * When we detect an idle GPU, we want to turn on
> - * powersaving features. So once we see that there
> - * are no more requests outstanding and no more
> - * arrive within a small period of time, we fire
> - * off the idle_work.
> - */
> - struct delayed_work idle_work;
> -
> - /**
> * Are we in a non-interruptible section of code like
> * modesetting?
> */
> bool interruptible;
>
> - /**
> - * Is the GPU currently considered idle, or busy executing userspace
> - * requests? Whilst idle, we attempt to power down the hardware and
> - * display clocks. In order to reduce the effect on performance, there
> - * is a slight delay before we do so.
> - */
> - bool busy;
> -
> /* the indicator for dispatch video commands on two BSD rings */
> unsigned int bsd_ring_dispatch_index;
>
> @@ -2034,6 +2008,34 @@ struct drm_i915_private {
> int (*init_engines)(struct drm_device *dev);
> void (*cleanup_engine)(struct intel_engine_cs *engine);
> void (*stop_engine)(struct intel_engine_cs *engine);
> +
> + /**
> + * Is the GPU currently considered idle, or busy executing
> + * userspace requests? Whilst idle, we allow runtime power
> + * management to power down the hardware and display clocks.
> + * In order to reduce the effect on performance, there
> + * is a slight delay before we do so.
> + */
> + unsigned active_engines;
> + bool awake;
> +
> + /**
> + * We leave the user IRQ off as much as possible,
> + * but this means that requests will finish and never
> + * be retired once the system goes idle. Set a timer to
> + * fire periodically while the ring is running. When it
> + * fires, go retire requests.
> + */
> + struct delayed_work retire_work;
> +
> + /**
> + * When we detect an idle GPU, we want to turn on
> + * powersaving features. So once we see that there
> + * are no more requests outstanding and no more
> + * arrive within a small period of time, we fire
> + * off the idle_work.
> + */
> + struct delayed_work idle_work;
Code motion like this would ideally go into separate patches, but then
again this is already a 62-patch series.
> } gt;
>
> /* perform PHY state sanity checks? */
> @@ -3247,7 +3249,7 @@ int __must_check i915_gem_set_seqno(struct drm_device *dev, u32 seqno);
> struct drm_i915_gem_request *
> i915_gem_find_active_request(struct intel_engine_cs *engine);
>
> -bool i915_gem_retire_requests(struct drm_i915_private *dev_priv);
> +void i915_gem_retire_requests(struct drm_i915_private *dev_priv);
> void i915_gem_retire_requests_ring(struct intel_engine_cs *engine);
>
> static inline u32 i915_reset_counter(struct i915_gpu_error *error)
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index f4e550ddaa5d..5a7131b749a2 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2554,6 +2554,26 @@ i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
> return 0;
> }
>
> +static void i915_gem_mark_busy(struct drm_i915_private *dev_priv,
> + const struct intel_engine_cs *engine)
> +{
> + dev_priv->gt.active_engines |= intel_engine_flag(engine);
> + if (dev_priv->gt.awake)
> + return;
> +
> + intel_runtime_pm_get_noresume(dev_priv);
> + dev_priv->gt.awake = true;
> +
> + intel_enable_gt_powersave(dev_priv);
> + i915_update_gfx_val(dev_priv);
> + if (INTEL_INFO(dev_priv)->gen >= 6)
> + gen6_rps_busy(dev_priv);
> +
> + queue_delayed_work(dev_priv->wq,
> + &dev_priv->gt.retire_work,
> + round_jiffies_up_relative(HZ));
> +}
> +
> /*
> * NB: This function is not allowed to fail. Doing so would mean the the
> * request is not being tracked for completion but the work itself is
> @@ -2640,12 +2660,6 @@ void __i915_add_request(struct drm_i915_gem_request *request,
> }
> /* Not allowed to fail! */
> WARN(ret, "emit|add_request failed: %d!\n", ret);
> -
> - queue_delayed_work(dev_priv->wq,
> - &dev_priv->mm.retire_work,
> - round_jiffies_up_relative(HZ));
> - intel_mark_busy(dev_priv);
> -
> /* Sanity check that the reserved size was large enough. */
> ret = intel_ring_get_tail(ringbuf) - request_start;
> if (ret < 0)
> @@ -2654,6 +2668,8 @@ void __i915_add_request(struct drm_i915_gem_request *request,
> "Not enough space reserved (%d bytes) "
> "for adding the request (%d bytes)\n",
> reserved_tail, ret);
> +
> + i915_gem_mark_busy(dev_priv, engine);
> }
>
> static bool i915_context_is_banned(struct drm_i915_private *dev_priv,
> @@ -2968,46 +2984,47 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
> WARN_ON(i915_verify_lists(engine->dev));
> }
>
> -bool
> -i915_gem_retire_requests(struct drm_i915_private *dev_priv)
> +void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
> {
> struct intel_engine_cs *engine;
> - bool idle = true;
> +
> + if (dev_priv->gt.active_engines == 0)
> + return;
> +
> + GEM_BUG_ON(!dev_priv->gt.awake);
>
> for_each_engine(engine, dev_priv) {
> i915_gem_retire_requests_ring(engine);
> - idle &= list_empty(&engine->request_list);
> - if (i915.enable_execlists) {
> - spin_lock_bh(&engine->execlist_lock);
> - idle &= list_empty(&engine->execlist_queue);
> - spin_unlock_bh(&engine->execlist_lock);
> - }
As discussed on IRC, the disappearance of this execlist_queue check
could be mentioned in the commit message.
> + if (list_empty(&engine->request_list))
> + dev_priv->gt.active_engines &= ~intel_engine_flag(engine);
> }
>
> - if (idle)
> + if (dev_priv->gt.active_engines == 0)
> mod_delayed_work(dev_priv->wq,
> - &dev_priv->mm.idle_work,
> + &dev_priv->gt.idle_work,
> msecs_to_jiffies(100));
> -
> - return idle;
> }
>
> static void
> i915_gem_retire_work_handler(struct work_struct *work)
> {
> struct drm_i915_private *dev_priv =
> - container_of(work, typeof(*dev_priv), mm.retire_work.work);
> + container_of(work, typeof(*dev_priv), gt.retire_work.work);
> struct drm_device *dev = dev_priv->dev;
> - bool idle;
>
> /* Come back later if the device is busy... */
> - idle = false;
> if (mutex_trylock(&dev->struct_mutex)) {
> - idle = i915_gem_retire_requests(dev_priv);
> + i915_gem_retire_requests(dev_priv);
> mutex_unlock(&dev->struct_mutex);
> }
> - if (!idle)
> - queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work,
> +
> + /* Keep the retire handler running until we are finally idle.
> + * We do not need to do this test under locking as in the worst-case
> + * we queue the retire worker once too often.
> + */
> + if (READ_ONCE(dev_priv->gt.awake))
This is the only occurrence in this function, so I don't think we need
READ_ONCE. I'm also not sure READ_ONCE is good documentation of "read
outside the lock"; a comment might be better.
> + queue_delayed_work(dev_priv->wq,
> + &dev_priv->gt.retire_work,
> round_jiffies_up_relative(HZ));
> }
>
> @@ -3015,25 +3032,36 @@ static void
> i915_gem_idle_work_handler(struct work_struct *work)
> {
> struct drm_i915_private *dev_priv =
> - container_of(work, typeof(*dev_priv), mm.idle_work.work);
> + container_of(work, typeof(*dev_priv), gt.idle_work.work);
> struct drm_device *dev = dev_priv->dev;
> struct intel_engine_cs *engine;
>
> - for_each_engine(engine, dev_priv)
> - if (!list_empty(&engine->request_list))
> - return;
> + if (!READ_ONCE(dev_priv->gt.awake))
> + return;
>
> - /* we probably should sync with hangcheck here, using cancel_work_sync.
> - * Also locking seems to be fubar here, engine->request_list is protected
> - * by dev->struct_mutex. */
> + mutex_lock(&dev->struct_mutex);
> + if (dev_priv->gt.active_engines)
> + goto out;
>
> - intel_mark_idle(dev_priv);
> + for_each_engine(engine, dev_priv)
> + i915_gem_batch_pool_fini(&engine->batch_pool);
>
> - if (mutex_trylock(&dev->struct_mutex)) {
> - for_each_engine(engine, dev_priv)
> - i915_gem_batch_pool_fini(&engine->batch_pool);
> + GEM_BUG_ON(!dev_priv->gt.awake);
> + dev_priv->gt.awake = false;
>
> - mutex_unlock(&dev->struct_mutex);
> + if (INTEL_INFO(dev_priv)->gen >= 6)
> + gen6_rps_idle(dev_priv);
> + intel_runtime_pm_put(dev_priv);
> +out:
> + mutex_unlock(&dev->struct_mutex);
> +
> + if (!dev_priv->gt.awake &&
No READ_ONCE here, even though we just unlocked the mutex, so this
lacks some consistency. Also, this assumes we might be pre-empted
between unlocking the mutex and making this test, so I'm a little bit
confused. Do you want to optimize by avoiding the call to
cancel_delayed_work_sync()?
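If the point is only to skip the synchronous cancel when we did not
actually drop the wakeref, maybe capture that decision while still
holding the mutex; untested sketch, the local 'idle' is my addition,
the rest is your code reshuffled:

	bool idle = false;

	mutex_lock(&dev->struct_mutex);
	if (dev_priv->gt.active_engines)
		goto out;

	for_each_engine(engine, dev_priv)
		i915_gem_batch_pool_fini(&engine->batch_pool);

	GEM_BUG_ON(!dev_priv->gt.awake);
	dev_priv->gt.awake = false;
	idle = true;

	if (INTEL_INFO(dev_priv)->gen >= 6)
		gen6_rps_idle(dev_priv);
	intel_runtime_pm_put(dev_priv);
out:
	mutex_unlock(&dev->struct_mutex);

	/* Only sync against hangcheck if we went idle above */
	if (idle &&
	    cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work)) {
		unsigned stuck = intel_kick_waiters(dev_priv);
		if (unlikely(stuck)) {
			DRM_DEBUG_DRIVER("kicked stuck waiters...missed irq\n");
			dev_priv->gpu_error.missed_irq_rings |= stuck;
		}
	}

That way the re-read of gt.awake after unlocking goes away entirely.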
> + cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work)) {
> + unsigned stuck = intel_kick_waiters(dev_priv);
> + if (unlikely(stuck)) {
> + DRM_DEBUG_DRIVER("kicked stuck waiters...missed irq\n");
> + dev_priv->gpu_error.missed_irq_rings |= stuck;
> + }
> }
> }
>
> @@ -4154,7 +4182,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
>
> ret = __i915_wait_request(target, true, NULL, NULL);
> if (ret == 0)
> - queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, 0);
> + queue_delayed_work(dev_priv->wq, &dev_priv->gt.retire_work, 0);
>
> i915_gem_request_unreference(target);
>
> @@ -4672,13 +4700,13 @@ i915_gem_suspend(struct drm_device *dev)
> mutex_unlock(&dev->struct_mutex);
>
> cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
> - cancel_delayed_work_sync(&dev_priv->mm.retire_work);
> - flush_delayed_work(&dev_priv->mm.idle_work);
> + cancel_delayed_work_sync(&dev_priv->gt.retire_work);
> + flush_delayed_work(&dev_priv->gt.idle_work);
>
> /* Assert that we sucessfully flushed all the work and
> * reset the GPU back to its idle, low power state.
> */
> - WARN_ON(dev_priv->mm.busy);
> + WARN_ON(dev_priv->gt.awake);
>
> return 0;
>
> @@ -4982,9 +5010,9 @@ i915_gem_load_init(struct drm_device *dev)
> init_engine_lists(&dev_priv->engine[i]);
> for (i = 0; i < I915_MAX_NUM_FENCES; i++)
> INIT_LIST_HEAD(&dev_priv->fence_regs[i].lru_list);
> - INIT_DELAYED_WORK(&dev_priv->mm.retire_work,
> + INIT_DELAYED_WORK(&dev_priv->gt.retire_work,
> i915_gem_retire_work_handler);
> - INIT_DELAYED_WORK(&dev_priv->mm.idle_work,
> + INIT_DELAYED_WORK(&dev_priv->gt.idle_work,
> i915_gem_idle_work_handler);
> init_waitqueue_head(&dev_priv->gpu_error.wait_queue);
> init_waitqueue_head(&dev_priv->gpu_error.reset_queue);
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index 8097698b9622..d3297dab0298 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -1477,6 +1477,12 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
> dispatch_flags |= I915_DISPATCH_RS;
> }
>
> + /* Take a local wakeref for preparing to dispatch the execbuf as
> + * we expect to access the hardware fairly frequently in the
> + * process. Upon first dispatch, we acquire another prolonged
> + * wakeref that we hold until the GPU has been idle for at least
> + * 100ms.
> + */
> intel_runtime_pm_get(dev_priv);
>
> ret = i915_mutex_lock_interruptible(dev);
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index f74f5727ea77..7a2dc8f1f64e 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -3102,12 +3102,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
> if (!i915.enable_hangcheck)
> return;
>
> - /*
> - * The hangcheck work is synced during runtime suspend, we don't
> - * require a wakeref. TODO: instead of disabling the asserts make
> - * sure that we hold a reference when this work is running.
> - */
> - DISABLE_RPM_WAKEREF_ASSERTS(dev_priv);
> + if (!READ_ONCE(dev_priv->gt.awake))
> + return;
>
> /* As enabling the GPU requires fairly extensive mmio access,
> * periodically arm the mmio checker to see if we are triggering
> @@ -3215,17 +3211,12 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
> }
> }
>
> - if (rings_hung) {
> + if (rings_hung)
> i915_handle_error(dev_priv, rings_hung, "Engine(s) hung");
> - goto out;
> - }
>
> /* Reset timer in case GPU hangs without another request being added */
> if (busy_count)
> i915_queue_hangcheck(dev_priv);
> -
> -out:
> - ENABLE_RPM_WAKEREF_ASSERTS(dev_priv);
> }
>
> static void ibx_irq_reset(struct drm_device *dev)
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index bb09ee6d1a3f..14e41fdd8112 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -10969,32 +10969,6 @@ struct drm_display_mode *intel_crtc_mode_get(struct drm_device *dev,
> return mode;
> }
>
> -void intel_mark_busy(struct drm_i915_private *dev_priv)
> -{
> - if (dev_priv->mm.busy)
> - return;
> -
> - intel_runtime_pm_get(dev_priv);
> - intel_enable_gt_powersave(dev_priv);
> - i915_update_gfx_val(dev_priv);
> - if (INTEL_GEN(dev_priv) >= 6)
> - gen6_rps_busy(dev_priv);
> - dev_priv->mm.busy = true;
> -}
> -
> -void intel_mark_idle(struct drm_i915_private *dev_priv)
> -{
> - if (!dev_priv->mm.busy)
> - return;
> -
> - dev_priv->mm.busy = false;
> -
> - if (INTEL_GEN(dev_priv) >= 6)
> - gen6_rps_idle(dev_priv);
> -
> - intel_runtime_pm_put(dev_priv);
> -}
> -
> static void intel_crtc_destroy(struct drm_crtc *crtc)
> {
> struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 712bd0debb91..35bb9a23cd2d 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -4850,7 +4850,7 @@ void gen6_rps_boost(struct drm_i915_private *dev_priv,
> /* This is intentionally racy! We peek at the state here, then
> * validate inside the RPS worker.
> */
> - if (!(dev_priv->mm.busy &&
> + if (!(dev_priv->gt.awake &&
> dev_priv->rps.enabled &&
> dev_priv->rps.cur_freq < dev_priv->rps.max_freq_softlimit))
> return;
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 166f1a3829b0..d0cd9a1aa80e 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -372,13 +372,13 @@ struct intel_engine_cs {
> };
>
> static inline bool
> -intel_engine_initialized(struct intel_engine_cs *engine)
> +intel_engine_initialized(const struct intel_engine_cs *engine)
> {
> return engine->i915 != NULL;
> }
>
> static inline unsigned
> -intel_engine_flag(struct intel_engine_cs *engine)
> +intel_engine_flag(const struct intel_engine_cs *engine)
> {
> return 1 << engine->id;
> }
I think the majority of our functions are not const-correct; I remember
some grunting on the subject when I tried to change a few of them. But
I'm all for it myself.
Regards, Joonas
--
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx