public inbox for intel-gfx@lists.freedesktop.org
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Cc: Eero Tamminen <eero.t.tamminen@intel.com>
Subject: Re: [PATCH 17/46] drm/i915: Apply rps waitboosting for dma_fence_wait_timeout()
Date: Mon, 11 Feb 2019 18:06:38 +0000	[thread overview]
Message-ID: <b4d217c3-9406-751e-0f1f-79b27e10ef35@linux.intel.com> (raw)
In-Reply-To: <20190206130356.18771-18-chris@chris-wilson.co.uk>


On 06/02/2019 13:03, Chris Wilson wrote:
> As time goes by, usage of generic ioctls such as drm_syncobj and
> sync_file is on the increase, bypassing i915-specific ioctls like
> GEM_WAIT. Currently, we only apply waitboosting to our driver ioctls,
> as we track the file/client and account the waitboosting to them.
> However, since commit 7b92c1bd0540 ("drm/i915: Avoid keeping waitboost
> active for signaling threads"), we have no longer been applying the
> client ratelimiting on waitboosts, so that information has only been
> used for debug tracking.
> 
> Push the application of waitboosting down to the common
> i915_request_wait, and apply it to all foreign fence waits as well.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Eero Tamminen <eero.t.tamminen@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_debugfs.c  | 19 +-----
>   drivers/gpu/drm/i915/i915_drv.h      |  7 +--
>   drivers/gpu/drm/i915/i915_gem.c      | 86 ++++++----------------------
>   drivers/gpu/drm/i915/i915_request.c  | 21 ++++++-
>   drivers/gpu/drm/i915/intel_display.c |  2 +-
>   drivers/gpu/drm/i915/intel_drv.h     |  2 +-
>   drivers/gpu/drm/i915/intel_pm.c      |  5 +-
>   7 files changed, 44 insertions(+), 98 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index 8a488ffc8b7d..af53a2d07f6b 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -2020,11 +2020,9 @@ static const char *rps_power_to_str(unsigned int power)
>   static int i915_rps_boost_info(struct seq_file *m, void *data)
>   {
>   	struct drm_i915_private *dev_priv = node_to_i915(m->private);
> -	struct drm_device *dev = &dev_priv->drm;
>   	struct intel_rps *rps = &dev_priv->gt_pm.rps;
>   	u32 act_freq = rps->cur_freq;
>   	intel_wakeref_t wakeref;
> -	struct drm_file *file;
>   
>   	with_intel_runtime_pm_if_in_use(dev_priv, wakeref) {
>   		if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
> @@ -2058,22 +2056,7 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
>   		   intel_gpu_freq(dev_priv, rps->efficient_freq),
>   		   intel_gpu_freq(dev_priv, rps->boost_freq));
>   
> -	mutex_lock(&dev->filelist_mutex);
> -	list_for_each_entry_reverse(file, &dev->filelist, lhead) {
> -		struct drm_i915_file_private *file_priv = file->driver_priv;
> -		struct task_struct *task;
> -
> -		rcu_read_lock();
> -		task = pid_task(file->pid, PIDTYPE_PID);
> -		seq_printf(m, "%s [%d]: %d boosts\n",
> -			   task ? task->comm : "<unknown>",
> -			   task ? task->pid : -1,
> -			   atomic_read(&file_priv->rps_client.boosts));
> -		rcu_read_unlock();
> -	}
> -	seq_printf(m, "Kernel (anonymous) boosts: %d\n",
> -		   atomic_read(&rps->boosts));
> -	mutex_unlock(&dev->filelist_mutex);
> +	seq_printf(m, "Wait boosts: %d\n", atomic_read(&rps->boosts));
>   
>   	if (INTEL_GEN(dev_priv) >= 6 &&
>   	    rps->enabled &&
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index a365b1a2ea9a..4d697b1002af 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -217,10 +217,6 @@ struct drm_i915_file_private {
>   	} mm;
>   	struct idr context_idr;
>   
> -	struct intel_rps_client {
> -		atomic_t boosts;
> -	} rps_client;
> -
>   	unsigned int bsd_engine;
>   
>   /*
> @@ -3041,8 +3037,7 @@ void i915_gem_resume(struct drm_i915_private *dev_priv);
>   vm_fault_t i915_gem_fault(struct vm_fault *vmf);
>   int i915_gem_object_wait(struct drm_i915_gem_object *obj,
>   			 unsigned int flags,
> -			 long timeout,
> -			 struct intel_rps_client *rps);
> +			 long timeout);
>   int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>   				  unsigned int flags,
>   				  const struct i915_sched_attr *attr);
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 04fa184fdff5..78b9aa57932d 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -421,8 +421,7 @@ int i915_gem_object_unbind(struct drm_i915_gem_object *obj)
>   static long
>   i915_gem_object_wait_fence(struct dma_fence *fence,
>   			   unsigned int flags,
> -			   long timeout,
> -			   struct intel_rps_client *rps_client)
> +			   long timeout)
>   {
>   	struct i915_request *rq;
>   
> @@ -440,27 +439,6 @@ i915_gem_object_wait_fence(struct dma_fence *fence,
>   	if (i915_request_completed(rq))
>   		goto out;
>   
> -	/*
> -	 * This client is about to stall waiting for the GPU. In many cases
> -	 * this is undesirable and limits the throughput of the system, as
> -	 * many clients cannot continue processing user input/output whilst
> -	 * blocked. RPS autotuning may take tens of milliseconds to respond
> -	 * to the GPU load and thus incurs additional latency for the client.
> -	 * We can circumvent that by promoting the GPU frequency to maximum
> -	 * before we wait. This makes the GPU throttle up much more quickly
> -	 * (good for benchmarks and user experience, e.g. window animations),
> -	 * but at a cost of spending more power processing the workload
> -	 * (bad for battery). Not all clients even want their results
> -	 * immediately and for them we should just let the GPU select its own
> -	 * frequency to maximise efficiency. To prevent a single client from
> -	 * forcing the clocks too high for the whole system, we only allow
> -	 * each client to waitboost once in a busy period.
> -	 */
> -	if (rps_client && !i915_request_started(rq)) {
> -		if (INTEL_GEN(rq->i915) >= 6)
> -			gen6_rps_boost(rq, rps_client);
> -	}
> -
>   	timeout = i915_request_wait(rq, flags, timeout);
>   
>   out:
> @@ -473,8 +451,7 @@ i915_gem_object_wait_fence(struct dma_fence *fence,
>   static long
>   i915_gem_object_wait_reservation(struct reservation_object *resv,
>   				 unsigned int flags,
> -				 long timeout,
> -				 struct intel_rps_client *rps_client)
> +				 long timeout)
>   {
>   	unsigned int seq = __read_seqcount_begin(&resv->seq);
>   	struct dma_fence *excl;
> @@ -492,8 +469,7 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
>   
>   		for (i = 0; i < count; i++) {
>   			timeout = i915_gem_object_wait_fence(shared[i],
> -							     flags, timeout,
> -							     rps_client);
> +							     flags, timeout);
>   			if (timeout < 0)
>   				break;
>   
> @@ -519,8 +495,7 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
>   	}
>   
>   	if (excl && timeout >= 0)
> -		timeout = i915_gem_object_wait_fence(excl, flags, timeout,
> -						     rps_client);
> +		timeout = i915_gem_object_wait_fence(excl, flags, timeout);
>   
>   	dma_fence_put(excl);
>   
> @@ -614,30 +589,19 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
>    * @obj: i915 gem object
>    * @flags: how to wait (under a lock, for all rendering or just for writes etc)
>    * @timeout: how long to wait
> - * @rps_client: client (user process) to charge for any waitboosting
>    */
>   int
>   i915_gem_object_wait(struct drm_i915_gem_object *obj,
>   		     unsigned int flags,
> -		     long timeout,
> -		     struct intel_rps_client *rps_client)
> +		     long timeout)
>   {
>   	might_sleep();
>   	GEM_BUG_ON(timeout < 0);
>   
> -	timeout = i915_gem_object_wait_reservation(obj->resv,
> -						   flags, timeout,
> -						   rps_client);
> +	timeout = i915_gem_object_wait_reservation(obj->resv, flags, timeout);
>   	return timeout < 0 ? timeout : 0;
>   }
>   
> -static struct intel_rps_client *to_rps_client(struct drm_file *file)
> -{
> -	struct drm_i915_file_private *fpriv = file->driver_priv;
> -
> -	return &fpriv->rps_client;
> -}
> -
>   static int
>   i915_gem_phys_pwrite(struct drm_i915_gem_object *obj,
>   		     struct drm_i915_gem_pwrite *args,
> @@ -843,8 +807,7 @@ int i915_gem_obj_prepare_shmem_read(struct drm_i915_gem_object *obj,
>   	ret = i915_gem_object_wait(obj,
>   				   I915_WAIT_INTERRUPTIBLE |
>   				   I915_WAIT_LOCKED,
> -				   MAX_SCHEDULE_TIMEOUT,
> -				   NULL);
> +				   MAX_SCHEDULE_TIMEOUT);
>   	if (ret)
>   		return ret;
>   
> @@ -896,8 +859,7 @@ int i915_gem_obj_prepare_shmem_write(struct drm_i915_gem_object *obj,
>   				   I915_WAIT_INTERRUPTIBLE |
>   				   I915_WAIT_LOCKED |
>   				   I915_WAIT_ALL,
> -				   MAX_SCHEDULE_TIMEOUT,
> -				   NULL);
> +				   MAX_SCHEDULE_TIMEOUT);
>   	if (ret)
>   		return ret;
>   
> @@ -1159,8 +1121,7 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
>   
>   	ret = i915_gem_object_wait(obj,
>   				   I915_WAIT_INTERRUPTIBLE,
> -				   MAX_SCHEDULE_TIMEOUT,
> -				   to_rps_client(file));
> +				   MAX_SCHEDULE_TIMEOUT);
>   	if (ret)
>   		goto out;
>   
> @@ -1459,8 +1420,7 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
>   	ret = i915_gem_object_wait(obj,
>   				   I915_WAIT_INTERRUPTIBLE |
>   				   I915_WAIT_ALL,
> -				   MAX_SCHEDULE_TIMEOUT,
> -				   to_rps_client(file));
> +				   MAX_SCHEDULE_TIMEOUT);
>   	if (ret)
>   		goto err;
>   
> @@ -1558,8 +1518,7 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
>   				   I915_WAIT_INTERRUPTIBLE |
>   				   I915_WAIT_PRIORITY |
>   				   (write_domain ? I915_WAIT_ALL : 0),
> -				   MAX_SCHEDULE_TIMEOUT,
> -				   to_rps_client(file));
> +				   MAX_SCHEDULE_TIMEOUT);
>   	if (err)
>   		goto out;
>   
> @@ -1850,8 +1809,7 @@ vm_fault_t i915_gem_fault(struct vm_fault *vmf)
>   	 */
>   	ret = i915_gem_object_wait(obj,
>   				   I915_WAIT_INTERRUPTIBLE,
> -				   MAX_SCHEDULE_TIMEOUT,
> -				   NULL);
> +				   MAX_SCHEDULE_TIMEOUT);
>   	if (ret)
>   		goto err;
>   
> @@ -3181,8 +3139,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>   				   I915_WAIT_INTERRUPTIBLE |
>   				   I915_WAIT_PRIORITY |
>   				   I915_WAIT_ALL,
> -				   to_wait_timeout(args->timeout_ns),
> -				   to_rps_client(file));
> +				   to_wait_timeout(args->timeout_ns));
>   
>   	if (args->timeout_ns > 0) {
>   		args->timeout_ns -= ktime_to_ns(ktime_sub(ktime_get(), start));
> @@ -3251,7 +3208,7 @@ wait_for_timelines(struct drm_i915_private *i915,
>   		 * stalls, so allow the gpu to boost to maximum clocks.
>   		 */
>   		if (flags & I915_WAIT_FOR_IDLE_BOOST)
> -			gen6_rps_boost(rq, NULL);
> +			gen6_rps_boost(rq);
>   
>   		timeout = i915_request_wait(rq, flags, timeout);
>   		i915_request_put(rq);
> @@ -3346,8 +3303,7 @@ i915_gem_object_set_to_wc_domain(struct drm_i915_gem_object *obj, bool write)
>   				   I915_WAIT_INTERRUPTIBLE |
>   				   I915_WAIT_LOCKED |
>   				   (write ? I915_WAIT_ALL : 0),
> -				   MAX_SCHEDULE_TIMEOUT,
> -				   NULL);
> +				   MAX_SCHEDULE_TIMEOUT);
>   	if (ret)
>   		return ret;
>   
> @@ -3409,8 +3365,7 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write)
>   				   I915_WAIT_INTERRUPTIBLE |
>   				   I915_WAIT_LOCKED |
>   				   (write ? I915_WAIT_ALL : 0),
> -				   MAX_SCHEDULE_TIMEOUT,
> -				   NULL);
> +				   MAX_SCHEDULE_TIMEOUT);
>   	if (ret)
>   		return ret;
>   
> @@ -3525,8 +3480,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
>   					   I915_WAIT_INTERRUPTIBLE |
>   					   I915_WAIT_LOCKED |
>   					   I915_WAIT_ALL,
> -					   MAX_SCHEDULE_TIMEOUT,
> -					   NULL);
> +					   MAX_SCHEDULE_TIMEOUT);
>   		if (ret)
>   			return ret;
>   
> @@ -3664,8 +3618,7 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
>   
>   	ret = i915_gem_object_wait(obj,
>   				   I915_WAIT_INTERRUPTIBLE,
> -				   MAX_SCHEDULE_TIMEOUT,
> -				   to_rps_client(file));
> +				   MAX_SCHEDULE_TIMEOUT);
>   	if (ret)
>   		goto out;
>   
> @@ -3791,8 +3744,7 @@ i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj, bool write)
>   				   I915_WAIT_INTERRUPTIBLE |
>   				   I915_WAIT_LOCKED |
>   				   (write ? I915_WAIT_ALL : 0),
> -				   MAX_SCHEDULE_TIMEOUT,
> -				   NULL);
> +				   MAX_SCHEDULE_TIMEOUT);
>   	if (ret)
>   		return ret;
>   
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 678da705e222..eed66d3606d9 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -81,7 +81,9 @@ static signed long i915_fence_wait(struct dma_fence *fence,
>   				   bool interruptible,
>   				   signed long timeout)
>   {
> -	return i915_request_wait(to_request(fence), interruptible, timeout);
> +	return i915_request_wait(to_request(fence),
> +				 interruptible | I915_WAIT_PRIORITY,
> +				 timeout);
>   }
>   
>   static void i915_fence_release(struct dma_fence *fence)
> @@ -1288,8 +1290,23 @@ long i915_request_wait(struct i915_request *rq,
>   	if (__i915_spin_request(rq, state, 5))
>   		goto out;
>   
> -	if (flags & I915_WAIT_PRIORITY)
> +	/*
> +	 * This client is about to stall waiting for the GPU. In many cases
> +	 * this is undesirable and limits the throughput of the system, as
> +	 * many clients cannot continue processing user input/output whilst
> +	 * blocked. RPS autotuning may take tens of milliseconds to respond
> +	 * to the GPU load and thus incurs additional latency for the client.
> +	 * We can circumvent that by promoting the GPU frequency to maximum
> +	 * before we sleep. This makes the GPU throttle up much more quickly
> +	 * (good for benchmarks and user experience, e.g. window animations),
> +	 * but at a cost of spending more power processing the workload
> +	 * (bad for battery).
> +	 */
> +	if (flags & I915_WAIT_PRIORITY) {
> +		if (!i915_request_started(rq) && INTEL_GEN(rq->i915) >= 6)
> +			gen6_rps_boost(rq);
>   		i915_schedule_bump_priority(rq, I915_PRIORITY_WAIT);
> +	}
>   
>   	wait.tsk = current;
>   	if (dma_fence_add_callback(&rq->fence, &wait.cb, request_wait_wake))
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index 4d5ec929f987..f8657e53fe68 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -13396,7 +13396,7 @@ static int do_rps_boost(struct wait_queue_entry *_wait,
>   	 * vblank without our intervention, so leave RPS alone.
>   	 */
>   	if (!i915_request_started(rq))
> -		gen6_rps_boost(rq, NULL);
> +		gen6_rps_boost(rq);
>   	i915_request_put(rq);
>   
>   	drm_crtc_vblank_put(wait->crtc);
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index 90ba5436370e..47b4f88da7eb 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -2249,7 +2249,7 @@ void intel_suspend_gt_powersave(struct drm_i915_private *dev_priv);
>   void gen6_rps_busy(struct drm_i915_private *dev_priv);
>   void gen6_rps_reset_ei(struct drm_i915_private *dev_priv);
>   void gen6_rps_idle(struct drm_i915_private *dev_priv);
> -void gen6_rps_boost(struct i915_request *rq, struct intel_rps_client *rps);
> +void gen6_rps_boost(struct i915_request *rq);
>   void g4x_wm_get_hw_state(struct drm_i915_private *dev_priv);
>   void vlv_wm_get_hw_state(struct drm_i915_private *dev_priv);
>   void ilk_wm_get_hw_state(struct drm_i915_private *dev_priv);
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 737005bf6816..58514a17f134 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -6693,8 +6693,7 @@ void gen6_rps_idle(struct drm_i915_private *dev_priv)
>   	mutex_unlock(&dev_priv->pcu_lock);
>   }
>   
> -void gen6_rps_boost(struct i915_request *rq,
> -		    struct intel_rps_client *rps_client)
> +void gen6_rps_boost(struct i915_request *rq)
>   {
>   	struct intel_rps *rps = &rq->i915->gt_pm.rps;
>   	unsigned long flags;
> @@ -6723,7 +6722,7 @@ void gen6_rps_boost(struct i915_request *rq,
>   	if (READ_ONCE(rps->cur_freq) < rps->boost_freq)
>   		schedule_work(&rps->work);
>   
> -	atomic_inc(rps_client ? &rps_client->boosts : &rps->boosts);
> +	atomic_inc(&rps->boosts);
>   }
>   
>   int intel_set_rps(struct drm_i915_private *dev_priv, u8 val)
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
