public inbox for intel-gfx@lists.freedesktop.org
* [PATCH] drm: Make drm_read() more robust against multithreaded races
       [not found] <s5hk327pd5w.wl-tiwai@suse.de>
@ 2014-12-04 21:03 ` Chris Wilson
  2014-12-05  2:19   ` shuang.he
                     ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Chris Wilson @ 2014-12-04 21:03 UTC (permalink / raw)
  To: intel-gfx; +Cc: Takashi Iwai, dri-devel

The current implementation of drm_read() faces a number of issues:

1. Upon an error, it consumes the event, which may lead to the client
blocking.
2. Upon an error, it forgets about events already copied.
3. If it fails to copy a single event with O_NONBLOCK, it falls into an
infinite loop of reporting EAGAIN.
4. There is a race between multiple waiters and blocking reads of the
events list.

Here, we inline drm_dequeue_event() into drm_read() so that we can take
the spinlock around the list walking and event copying, and importantly
reorder the error handling to avoid the issues above.

Cc: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/drm_fops.c | 90 ++++++++++++++++++++++------------------------
 1 file changed, 43 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
index 91e1105f2800..076dd606b580 100644
--- a/drivers/gpu/drm/drm_fops.c
+++ b/drivers/gpu/drm/drm_fops.c
@@ -478,63 +478,59 @@ int drm_release(struct inode *inode, struct file *filp)
 }
 EXPORT_SYMBOL(drm_release);
 
-static bool
-drm_dequeue_event(struct drm_file *file_priv,
-		  size_t total, size_t max, struct drm_pending_event **out)
+ssize_t drm_read(struct file *filp, char __user *buffer,
+		 size_t count, loff_t *offset)
 {
+	struct drm_file *file_priv = filp->private_data;
 	struct drm_device *dev = file_priv->minor->dev;
-	struct drm_pending_event *e;
-	unsigned long flags;
-	bool ret = false;
-
-	spin_lock_irqsave(&dev->event_lock, flags);
+	ssize_t ret = 0;
 
-	*out = NULL;
-	if (list_empty(&file_priv->event_list))
-		goto out;
-	e = list_first_entry(&file_priv->event_list,
-			     struct drm_pending_event, link);
-	if (e->event->length + total > max)
-		goto out;
+	if (!access_ok(VERIFY_WRITE, buffer, count))
+		return -EFAULT;
 
-	file_priv->event_space += e->event->length;
-	list_del(&e->link);
-	*out = e;
-	ret = true;
+	spin_lock_irq(&dev->event_lock);
+	for (;;) {
+		if (list_empty(&file_priv->event_list)) {
+			if (ret)
+				break;
 
-out:
-	spin_unlock_irqrestore(&dev->event_lock, flags);
-	return ret;
-}
-
-ssize_t drm_read(struct file *filp, char __user *buffer,
-		 size_t count, loff_t *offset)
-{
-	struct drm_file *file_priv = filp->private_data;
-	struct drm_pending_event *e;
-	size_t total;
-	ssize_t ret;
+			if (filp->f_flags & O_NONBLOCK) {
+				ret = -EAGAIN;
+				break;
+			}
 
-	if ((filp->f_flags & O_NONBLOCK) == 0) {
-		ret = wait_event_interruptible(file_priv->event_wait,
-					       !list_empty(&file_priv->event_list));
-		if (ret < 0)
-			return ret;
-	}
+			spin_unlock_irq(&dev->event_lock);
+			ret = wait_event_interruptible(file_priv->event_wait,
+						       !list_empty(&file_priv->event_list));
+			spin_lock_irq(&dev->event_lock);
+			if (ret < 0)
+				break;
+
+			ret = 0;
+		} else {
+			struct drm_pending_event *e;
+
+			e = list_first_entry(&file_priv->event_list,
+					     struct drm_pending_event, link);
+			if (e->event->length + ret > count)
+				break;
+
+			if (__copy_to_user_inatomic(buffer + ret,
+						    e->event, e->event->length)) {
+				if (ret == 0)
+					ret = -EFAULT;
+				break;
+			}
 
-	total = 0;
-	while (drm_dequeue_event(file_priv, total, count, &e)) {
-		if (copy_to_user(buffer + total,
-				 e->event, e->event->length)) {
-			total = -EFAULT;
-			break;
+			file_priv->event_space += e->event->length;
+			ret += e->event->length;
+			list_del(&e->link);
+			e->destroy(e);
 		}
-
-		total += e->event->length;
-		e->destroy(e);
 	}
+	spin_unlock_irq(&dev->event_lock);
 
-	return total ?: -EAGAIN;
+	return ret;
 }
 EXPORT_SYMBOL(drm_read);
 
-- 
2.1.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH] drm: Make drm_read() more robust against multithreaded races
  2014-12-04 21:03 ` [PATCH] drm: Make drm_read() more robust against multithreaded races Chris Wilson
@ 2014-12-05  2:19   ` shuang.he
  2014-12-05 20:59     ` Daniel Vetter
  2014-12-05  8:42   ` Takashi Iwai
  2014-12-05  8:44   ` Daniel Vetter
  2 siblings, 1 reply; 6+ messages in thread
From: shuang.he @ 2014-12-05  2:19 UTC (permalink / raw)
  To: shuang.he, intel-gfx, chris

Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
-------------------------------------Summary-------------------------------------
Platform          Delta          drm-intel-nightly          Series Applied
PNV                                  364/364              364/364
ILK                                  366/366              366/366
SNB                                  450/450              450/450
IVB              +17                 481/498              498/498
BYT                                  289/289              289/289
HSW                                  564/564              564/564
BDW                                  417/417              417/417
-------------------------------------Detailed-------------------------------------
Platform  Test                                drm-intel-nightly          Series Applied
 IVB  igt_kms_3d      DMESG_WARN(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-128x128-onscreen      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-128x128-random      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-128x128-sliding      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-256x256-offscreen      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-256x256-onscreen      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-256x256-sliding      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-64x64-offscreen      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-64x64-onscreen      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-64x64-random      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-64x64-sliding      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_cursor_crc_cursor-size-change      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_fence_pin_leak      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_mmio_vs_cs_flip_setcrtc_vs_cs_flip      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_mmio_vs_cs_flip_setplane_vs_cs_flip      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_rotation_crc_primary-rotation      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
 IVB  igt_kms_rotation_crc_sprite-rotation      NSPT(1, M34)PASS(10, M4M34M21)      PASS(1, M34)
Note: pay extra attention to lines starting with '*'

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] drm: Make drm_read() more robust against multithreaded races
  2014-12-04 21:03 ` [PATCH] drm: Make drm_read() more robust against multithreaded races Chris Wilson
  2014-12-05  2:19   ` shuang.he
@ 2014-12-05  8:42   ` Takashi Iwai
  2015-01-05 10:18     ` [Intel-gfx] " Daniel Vetter
  2014-12-05  8:44   ` Daniel Vetter
  2 siblings, 1 reply; 6+ messages in thread
From: Takashi Iwai @ 2014-12-05  8:42 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx, dri-devel

At Thu,  4 Dec 2014 21:03:25 +0000,
Chris Wilson wrote:
> 
> The current implementation of drm_read() faces a number of issues:
> 
> 1. Upon an error, it consumes the event, which may lead to the client
> blocking.
> 2. Upon an error, it forgets about events already copied.
> 3. If it fails to copy a single event with O_NONBLOCK, it falls into an
> infinite loop of reporting EAGAIN.
> 4. There is a race between multiple waiters and blocking reads of the
> events list.
> 
> Here, we inline drm_dequeue_event() into drm_read() so that we can take
> the spinlock around the list walking and event copying, and importantly
> reorder the error handling to avoid the issues above.
> 
> Cc: Takashi Iwai <tiwai@suse.de>

Reviewed-by: Takashi Iwai <tiwai@suse.de>


Takashi


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Intel-gfx] [PATCH] drm: Make drm_read() more robust against multithreaded races
  2014-12-04 21:03 ` [PATCH] drm: Make drm_read() more robust against multithreaded races Chris Wilson
  2014-12-05  2:19   ` shuang.he
  2014-12-05  8:42   ` Takashi Iwai
@ 2014-12-05  8:44   ` Daniel Vetter
  2 siblings, 0 replies; 6+ messages in thread
From: Daniel Vetter @ 2014-12-05  8:44 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx, dri-devel

On Thu, Dec 04, 2014 at 09:03:25PM +0000, Chris Wilson wrote:
> The current implementation of drm_read() faces a number of issues:
> 
> 1. Upon an error, it consumes the event, which may lead to the client
> blocking.
> 2. Upon an error, it forgets about events already copied.
> 3. If it fails to copy a single event with O_NONBLOCK, it falls into an
> infinite loop of reporting EAGAIN.
> 4. There is a race between multiple waiters and blocking reads of the
> events list.
> 
> Here, we inline drm_dequeue_event() into drm_read() so that we can take
> the spinlock around the list walking and event copying, and importantly
> reorder the error handling to avoid the issues above.
> 
> Cc: Takashi Iwai <tiwai@suse.de>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Imo if you go through all the trouble of fixing the corner-cases then we
should also have a testcase to exercise them. Otherwise I expect this to
fall apart again. So a little igt would be great.
-Daniel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] drm: Make drm_read() more robust against multithreaded races
  2014-12-05  2:19   ` shuang.he
@ 2014-12-05 20:59     ` Daniel Vetter
  0 siblings, 0 replies; 6+ messages in thread
From: Daniel Vetter @ 2014-12-05 20:59 UTC (permalink / raw)
  To: shuang.he; +Cc: intel-gfx

On Thu, Dec 04, 2014 at 06:19:54PM -0800, shuang.he@intel.com wrote:
> Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
> -------------------------------------Summary-------------------------------------
> Platform          Delta          drm-intel-nightly          Series Applied
> PNV                                  364/364              364/364
> ILK                                  366/366              366/366
> SNB                                  450/450              450/450
> IVB              +17                 481/498              498/498

Today there have been a lot of prts test results with +17 for ivb. Is
something wrong with the baseline results that there's always the same
regressions/improvements?

In general there's an awful lot of noise in prts results, and off-by-one
in comparing results would explain a lot ...
-Daniel


-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Intel-gfx] [PATCH] drm: Make drm_read() more robust against multithreaded races
  2014-12-05  8:42   ` Takashi Iwai
@ 2015-01-05 10:18     ` Daniel Vetter
  0 siblings, 0 replies; 6+ messages in thread
From: Daniel Vetter @ 2015-01-05 10:18 UTC (permalink / raw)
  To: Takashi Iwai; +Cc: intel-gfx, dri-devel

On Fri, Dec 05, 2014 at 09:42:35AM +0100, Takashi Iwai wrote:
> At Thu,  4 Dec 2014 21:03:25 +0000,
> Chris Wilson wrote:
> > 
> > The current implementation of drm_read() faces a number of issues:
> > 
> > 1. Upon an error, it consumes the event, which may lead to the client
> > blocking.
> > 2. Upon an error, it forgets about events already copied.
> > 3. If it fails to copy a single event with O_NONBLOCK, it falls into an
> > infinite loop of reporting EAGAIN.
> > 4. There is a race between multiple waiters and blocking reads of the
> > events list.
> > 
> > Here, we inline drm_dequeue_event() into drm_read() so that we can take
> > the spinlock around the list walking and event copying, and importantly
> > reorder the error handling to avoid the issues above.
> > 
> > Cc: Takashi Iwai <tiwai@suse.de>
> 
> Reviewed-by: Takashi Iwai <tiwai@suse.de>

Merged to drm-misc with the tag for the drm_read testcase added. Thanks
for the patch & review.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2015-01-05 10:18 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <s5hk327pd5w.wl-tiwai@suse.de>
2014-12-04 21:03 ` [PATCH] drm: Make drm_read() more robust against multithreaded races Chris Wilson
2014-12-05  2:19   ` shuang.he
2014-12-05 20:59     ` Daniel Vetter
2014-12-05  8:42   ` Takashi Iwai
2015-01-05 10:18     ` [Intel-gfx] " Daniel Vetter
2014-12-05  8:44   ` Daniel Vetter
