* [PATCH] drm: use seqlock for vblank time/count
@ 2016-05-10 14:21 Matthew Auld
2016-05-10 14:55 ` ✗ Ro.CI.BAT: failure for " Patchwork
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: Matthew Auld @ 2016-05-10 14:21 UTC (permalink / raw)
To: dri-devel; +Cc: Daniel Vetter, intel-gfx
This patch aims to replace the roll-your-own seqlock implementation with
a full-blown seqlock. We also remove the timestamp ring-buffer in favour
of a single timestamp/count pair protected by a seqlock. In turn this
means we can now increment the vblank counter freely without the need
for clamping.
v2:
- reduce the scope of the seqlock, keeping vblank_time_lock
- make the seqlock per vblank_crtc, so multiple readers aren't blocked by
the writer
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/drm_irq.c | 90 +++++++----------------------------------------
include/drm/drmP.h | 14 +++-----
2 files changed, 17 insertions(+), 87 deletions(-)
diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
index 3c1a6f1..0e95100 100644
--- a/drivers/gpu/drm/drm_irq.c
+++ b/drivers/gpu/drm/drm_irq.c
@@ -42,10 +42,6 @@
#include <linux/vgaarb.h>
#include <linux/export.h>
-/* Access macro for slots in vblank timestamp ringbuffer. */
-#define vblanktimestamp(dev, pipe, count) \
- ((dev)->vblank[pipe].time[(count) % DRM_VBLANKTIME_RBSIZE])
-
/* Retry timestamp calculation up to 3 times to satisfy
* drm_timestamp_precision before giving up.
*/
@@ -82,29 +78,15 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
struct timeval *t_vblank, u32 last)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
- u32 tslot;
assert_spin_locked(&dev->vblank_time_lock);
vblank->last = last;
- /* All writers hold the spinlock, but readers are serialized by
- * the latching of vblank->count below.
- */
- tslot = vblank->count + vblank_count_inc;
- vblanktimestamp(dev, pipe, tslot) = *t_vblank;
-
- /*
- * vblank timestamp updates are protected on the write side with
- * vblank_time_lock, but on the read side done locklessly using a
- * sequence-lock on the vblank counter. Ensure correct ordering using
- * memory barrriers. We need the barrier both before and also after the
- * counter update to synchronize with the next timestamp write.
- * The read-side barriers for this are in drm_vblank_count_and_time.
- */
- smp_wmb();
+ write_seqlock(&vblank->seqlock);
+ vblank->time = *t_vblank;
vblank->count += vblank_count_inc;
- smp_wmb();
+ write_sequnlock(&vblank->seqlock);
}
/**
@@ -205,7 +187,7 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
const struct timeval *t_old;
u64 diff_ns;
- t_old = &vblanktimestamp(dev, pipe, vblank->count);
+ t_old = &vblank->time;
diff_ns = timeval_to_ns(&t_vblank) - timeval_to_ns(t_old);
/*
@@ -239,49 +221,6 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
diff = 1;
}
- /*
- * FIMXE: Need to replace this hack with proper seqlocks.
- *
- * Restrict the bump of the software vblank counter to a safe maximum
- * value of +1 whenever there is the possibility that concurrent readers
- * of vblank timestamps could be active at the moment, as the current
- * implementation of the timestamp caching and updating is not safe
- * against concurrent readers for calls to store_vblank() with a bump
- * of anything but +1. A bump != 1 would very likely return corrupted
- * timestamps to userspace, because the same slot in the cache could
- * be concurrently written by store_vblank() and read by one of those
- * readers without the read-retry logic detecting the collision.
- *
- * Concurrent readers can exist when we are called from the
- * drm_vblank_off() or drm_vblank_on() functions and other non-vblank-
- * irq callers. However, all those calls to us are happening with the
- * vbl_lock locked to prevent drm_vblank_get(), so the vblank refcount
- * can't increase while we are executing. Therefore a zero refcount at
- * this point is safe for arbitrary counter bumps if we are called
- * outside vblank irq, a non-zero count is not 100% safe. Unfortunately
- * we must also accept a refcount of 1, as whenever we are called from
- * drm_vblank_get() -> drm_vblank_enable() the refcount will be 1 and
- * we must let that one pass through in order to not lose vblank counts
- * during vblank irq off - which would completely defeat the whole
- * point of this routine.
- *
- * Whenever we are called from vblank irq, we have to assume concurrent
- * readers exist or can show up any time during our execution, even if
- * the refcount is currently zero, as vblank irqs are usually only
- * enabled due to the presence of readers, and because when we are called
- * from vblank irq we can't hold the vbl_lock to protect us from sudden
- * bumps in vblank refcount. Therefore also restrict bumps to +1 when
- * called from vblank irq.
- */
- if ((diff > 1) && (atomic_read(&vblank->refcount) > 1 ||
- (flags & DRM_CALLED_FROM_VBLIRQ))) {
- DRM_DEBUG_VBL("clamping vblank bump to 1 on crtc %u: diffr=%u "
- "refcount %u, vblirq %u\n", pipe, diff,
- atomic_read(&vblank->refcount),
- (flags & DRM_CALLED_FROM_VBLIRQ) != 0);
- diff = 1;
- }
-
DRM_DEBUG_VBL("updating vblank count on crtc %u:"
" current=%u, diff=%u, hw=%u hw_last=%u\n",
pipe, vblank->count, diff, cur_vblank, vblank->last);
@@ -420,6 +359,7 @@ int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs)
init_waitqueue_head(&vblank->queue);
setup_timer(&vblank->disable_timer, vblank_disable_fn,
(unsigned long)vblank);
+ seqlock_init(&vblank->seqlock);
}
DRM_INFO("Supports vblank timestamp caching Rev 2 (21.10.2013).\n");
@@ -991,25 +931,19 @@ u32 drm_vblank_count_and_time(struct drm_device *dev, unsigned int pipe,
struct timeval *vblanktime)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
- int count = DRM_TIMESTAMP_MAXRETRIES;
- u32 cur_vblank;
+ u32 vblank_count;
+ unsigned int seq;
if (WARN_ON(pipe >= dev->num_crtcs))
return 0;
- /*
- * Vblank timestamps are read lockless. To ensure consistency the vblank
- * counter is rechecked and ordering is ensured using memory barriers.
- * This works like a seqlock. The write-side barriers are in store_vblank.
- */
do {
- cur_vblank = vblank->count;
- smp_rmb();
- *vblanktime = vblanktimestamp(dev, pipe, cur_vblank);
- smp_rmb();
- } while (cur_vblank != vblank->count && --count > 0);
+ seq = read_seqbegin(&vblank->seqlock);
+ vblank_count = vblank->count;
+ *vblanktime = vblank->time;
+ } while (read_seqretry(&vblank->seqlock, seq));
- return cur_vblank;
+ return vblank_count;
}
EXPORT_SYMBOL(drm_vblank_count_and_time);
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 360b2a7..9f33090 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -52,6 +52,7 @@
#include <linux/poll.h>
#include <linux/ratelimit.h>
#include <linux/sched.h>
+#include <linux/seqlock.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/vmalloc.h>
@@ -392,11 +393,6 @@ struct drm_master {
void *driver_priv;
};
-/* Size of ringbuffer for vblank timestamps. Just double-buffer
- * in initial implementation.
- */
-#define DRM_VBLANKTIME_RBSIZE 2
-
/* Flags and return codes for get_vblank_timestamp() driver function. */
#define DRM_CALLED_FROM_VBLIRQ 1
#define DRM_VBLANKTIME_SCANOUTPOS_METHOD (1 << 0)
@@ -725,10 +721,10 @@ struct drm_vblank_crtc {
wait_queue_head_t queue; /**< VBLANK wait queue */
struct timer_list disable_timer; /* delayed disable timer */
- /* vblank counter, protected by dev->vblank_time_lock for writes */
- u32 count;
- /* vblank timestamps, protected by dev->vblank_time_lock for writes */
- struct timeval time[DRM_VBLANKTIME_RBSIZE];
+ seqlock_t seqlock; /* protects vblank count and time */
+
+ u32 count; /* vblank counter */
+ struct timeval time; /* vblank timestamp */
atomic_t refcount; /* number of users of vblank interruptsper crtc */
u32 last; /* protected by dev->vbl_lock, used */
--
2.4.11
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* ✗ Ro.CI.BAT: failure for drm: use seqlock for vblank time/count
2016-05-10 14:21 [PATCH] drm: use seqlock for vblank time/count Matthew Auld
@ 2016-05-10 14:55 ` Patchwork
2016-05-11 9:48 ` [PATCH] " Matthew Auld
2016-05-24 12:20 ` Ville Syrjälä
2 siblings, 0 replies; 5+ messages in thread
From: Patchwork @ 2016-05-10 14:55 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-gfx
== Series Details ==
Series: drm: use seqlock for vblank time/count
URL : https://patchwork.freedesktop.org/series/6981/
State : failure
== Summary ==
Series 6981v1 drm: use seqlock for vblank time/count
http://patchwork.freedesktop.org/api/1.0/series/6981/revisions/1/mbox
Test gem_exec_flush:
Subgroup basic-batch-kernel-default-cmd:
fail -> PASS (ro-byt-n2820)
Test kms_flip:
Subgroup basic-plain-flip:
pass -> INCOMPLETE (fi-byt-n2820)
pass -> INCOMPLETE (fi-skl-i7-6700k)
Test kms_sink_crc_basic:
pass -> SKIP (ro-skl-i7-6700hq)
fi-bdw-i7-5557u total:219 pass:206 dwarn:0 dfail:0 fail:0 skip:13
fi-byt-n2820 total:181 pass:144 dwarn:0 dfail:0 fail:3 skip:33
fi-hsw-i7-4770k total:219 pass:197 dwarn:0 dfail:0 fail:1 skip:21
fi-kbl-y total:219 pass:191 dwarn:1 dfail:0 fail:2 skip:25
fi-skl-i5-6260u total:219 pass:207 dwarn:0 dfail:0 fail:0 skip:12
fi-skl-i7-6700k total:181 pass:154 dwarn:0 dfail:0 fail:0 skip:26
fi-snb-i7-2600 total:37 pass:27 dwarn:0 dfail:0 fail:0 skip:9
ro-bdw-i7-5600u total:219 pass:187 dwarn:0 dfail:0 fail:0 skip:32
ro-bsw-n3050 total:219 pass:175 dwarn:0 dfail:0 fail:2 skip:42
ro-byt-n2820 total:218 pass:175 dwarn:0 dfail:0 fail:2 skip:41
ro-hsw-i3-4010u total:218 pass:193 dwarn:0 dfail:0 fail:0 skip:25
ro-hsw-i7-4770r total:219 pass:194 dwarn:0 dfail:0 fail:0 skip:25
ro-ilk-i7-620lm total:219 pass:151 dwarn:0 dfail:0 fail:1 skip:67
ro-ilk1-i5-650 total:214 pass:152 dwarn:0 dfail:0 fail:1 skip:61
ro-ivb-i7-3770 total:219 pass:183 dwarn:0 dfail:0 fail:0 skip:36
ro-ivb2-i7-3770 total:219 pass:187 dwarn:0 dfail:0 fail:0 skip:32
ro-skl-i7-6700hq total:214 pass:189 dwarn:0 dfail:0 fail:0 skip:25
ro-snb-i7-2620M total:219 pass:177 dwarn:0 dfail:0 fail:1 skip:41
ro-bdw-i7-5557U failed to connect after reboot
Results at /archive/results/CI_IGT_test/RO_Patchwork_850/
2d4abf3 drm-intel-nightly: 2016y-05m-10d-09h-36m-54s UTC integration manifest
6487344 drm: use seqlock for vblank time/count
* [PATCH] drm: use seqlock for vblank time/count
2016-05-10 14:21 [PATCH] drm: use seqlock for vblank time/count Matthew Auld
2016-05-10 14:55 ` ✗ Ro.CI.BAT: failure for " Patchwork
@ 2016-05-11 9:48 ` Matthew Auld
2016-05-24 12:20 ` Ville Syrjälä
2 siblings, 0 replies; 5+ messages in thread
From: Matthew Auld @ 2016-05-11 9:48 UTC (permalink / raw)
To: dri-devel; +Cc: Daniel Vetter, intel-gfx, Mario Kleiner
This patch aims to replace the roll-your-own seqlock implementation with
a full-blown seqlock. We also remove the timestamp ring-buffer in favour
of a single timestamp/count pair protected by a seqlock. In turn this
means we can now increment the vblank counter freely without the need
for clamping.
v2:
- reduce the scope of the seqlock, keeping vblank_time_lock
- make the seqlock per vblank_crtc, so multiple readers aren't blocked by
the writer
Cc: Mario Kleiner <mario.kleiner@tuebingen.mpg.de>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/drm_irq.c | 90 +++++++----------------------------------------
include/drm/drmP.h | 14 +++-----
2 files changed, 17 insertions(+), 87 deletions(-)
diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
index 3c1a6f1..0e95100 100644
--- a/drivers/gpu/drm/drm_irq.c
+++ b/drivers/gpu/drm/drm_irq.c
@@ -42,10 +42,6 @@
#include <linux/vgaarb.h>
#include <linux/export.h>
-/* Access macro for slots in vblank timestamp ringbuffer. */
-#define vblanktimestamp(dev, pipe, count) \
- ((dev)->vblank[pipe].time[(count) % DRM_VBLANKTIME_RBSIZE])
-
/* Retry timestamp calculation up to 3 times to satisfy
* drm_timestamp_precision before giving up.
*/
@@ -82,29 +78,15 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
struct timeval *t_vblank, u32 last)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
- u32 tslot;
assert_spin_locked(&dev->vblank_time_lock);
vblank->last = last;
- /* All writers hold the spinlock, but readers are serialized by
- * the latching of vblank->count below.
- */
- tslot = vblank->count + vblank_count_inc;
- vblanktimestamp(dev, pipe, tslot) = *t_vblank;
-
- /*
- * vblank timestamp updates are protected on the write side with
- * vblank_time_lock, but on the read side done locklessly using a
- * sequence-lock on the vblank counter. Ensure correct ordering using
- * memory barrriers. We need the barrier both before and also after the
- * counter update to synchronize with the next timestamp write.
- * The read-side barriers for this are in drm_vblank_count_and_time.
- */
- smp_wmb();
+ write_seqlock(&vblank->seqlock);
+ vblank->time = *t_vblank;
vblank->count += vblank_count_inc;
- smp_wmb();
+ write_sequnlock(&vblank->seqlock);
}
/**
@@ -205,7 +187,7 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
const struct timeval *t_old;
u64 diff_ns;
- t_old = &vblanktimestamp(dev, pipe, vblank->count);
+ t_old = &vblank->time;
diff_ns = timeval_to_ns(&t_vblank) - timeval_to_ns(t_old);
/*
@@ -239,49 +221,6 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
diff = 1;
}
- /*
- * FIMXE: Need to replace this hack with proper seqlocks.
- *
- * Restrict the bump of the software vblank counter to a safe maximum
- * value of +1 whenever there is the possibility that concurrent readers
- * of vblank timestamps could be active at the moment, as the current
- * implementation of the timestamp caching and updating is not safe
- * against concurrent readers for calls to store_vblank() with a bump
- * of anything but +1. A bump != 1 would very likely return corrupted
- * timestamps to userspace, because the same slot in the cache could
- * be concurrently written by store_vblank() and read by one of those
- * readers without the read-retry logic detecting the collision.
- *
- * Concurrent readers can exist when we are called from the
- * drm_vblank_off() or drm_vblank_on() functions and other non-vblank-
- * irq callers. However, all those calls to us are happening with the
- * vbl_lock locked to prevent drm_vblank_get(), so the vblank refcount
- * can't increase while we are executing. Therefore a zero refcount at
- * this point is safe for arbitrary counter bumps if we are called
- * outside vblank irq, a non-zero count is not 100% safe. Unfortunately
- * we must also accept a refcount of 1, as whenever we are called from
- * drm_vblank_get() -> drm_vblank_enable() the refcount will be 1 and
- * we must let that one pass through in order to not lose vblank counts
- * during vblank irq off - which would completely defeat the whole
- * point of this routine.
- *
- * Whenever we are called from vblank irq, we have to assume concurrent
- * readers exist or can show up any time during our execution, even if
- * the refcount is currently zero, as vblank irqs are usually only
- * enabled due to the presence of readers, and because when we are called
- * from vblank irq we can't hold the vbl_lock to protect us from sudden
- * bumps in vblank refcount. Therefore also restrict bumps to +1 when
- * called from vblank irq.
- */
- if ((diff > 1) && (atomic_read(&vblank->refcount) > 1 ||
- (flags & DRM_CALLED_FROM_VBLIRQ))) {
- DRM_DEBUG_VBL("clamping vblank bump to 1 on crtc %u: diffr=%u "
- "refcount %u, vblirq %u\n", pipe, diff,
- atomic_read(&vblank->refcount),
- (flags & DRM_CALLED_FROM_VBLIRQ) != 0);
- diff = 1;
- }
-
DRM_DEBUG_VBL("updating vblank count on crtc %u:"
" current=%u, diff=%u, hw=%u hw_last=%u\n",
pipe, vblank->count, diff, cur_vblank, vblank->last);
@@ -420,6 +359,7 @@ int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs)
init_waitqueue_head(&vblank->queue);
setup_timer(&vblank->disable_timer, vblank_disable_fn,
(unsigned long)vblank);
+ seqlock_init(&vblank->seqlock);
}
DRM_INFO("Supports vblank timestamp caching Rev 2 (21.10.2013).\n");
@@ -991,25 +931,19 @@ u32 drm_vblank_count_and_time(struct drm_device *dev, unsigned int pipe,
struct timeval *vblanktime)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
- int count = DRM_TIMESTAMP_MAXRETRIES;
- u32 cur_vblank;
+ u32 vblank_count;
+ unsigned int seq;
if (WARN_ON(pipe >= dev->num_crtcs))
return 0;
- /*
- * Vblank timestamps are read lockless. To ensure consistency the vblank
- * counter is rechecked and ordering is ensured using memory barriers.
- * This works like a seqlock. The write-side barriers are in store_vblank.
- */
do {
- cur_vblank = vblank->count;
- smp_rmb();
- *vblanktime = vblanktimestamp(dev, pipe, cur_vblank);
- smp_rmb();
- } while (cur_vblank != vblank->count && --count > 0);
+ seq = read_seqbegin(&vblank->seqlock);
+ vblank_count = vblank->count;
+ *vblanktime = vblank->time;
+ } while (read_seqretry(&vblank->seqlock, seq));
- return cur_vblank;
+ return vblank_count;
}
EXPORT_SYMBOL(drm_vblank_count_and_time);
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 360b2a7..9f33090 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -52,6 +52,7 @@
#include <linux/poll.h>
#include <linux/ratelimit.h>
#include <linux/sched.h>
+#include <linux/seqlock.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/vmalloc.h>
@@ -392,11 +393,6 @@ struct drm_master {
void *driver_priv;
};
-/* Size of ringbuffer for vblank timestamps. Just double-buffer
- * in initial implementation.
- */
-#define DRM_VBLANKTIME_RBSIZE 2
-
/* Flags and return codes for get_vblank_timestamp() driver function. */
#define DRM_CALLED_FROM_VBLIRQ 1
#define DRM_VBLANKTIME_SCANOUTPOS_METHOD (1 << 0)
@@ -725,10 +721,10 @@ struct drm_vblank_crtc {
wait_queue_head_t queue; /**< VBLANK wait queue */
struct timer_list disable_timer; /* delayed disable timer */
- /* vblank counter, protected by dev->vblank_time_lock for writes */
- u32 count;
- /* vblank timestamps, protected by dev->vblank_time_lock for writes */
- struct timeval time[DRM_VBLANKTIME_RBSIZE];
+ seqlock_t seqlock; /* protects vblank count and time */
+
+ u32 count; /* vblank counter */
+ struct timeval time; /* vblank timestamp */
atomic_t refcount; /* number of users of vblank interruptsper crtc */
u32 last; /* protected by dev->vbl_lock, used */
--
2.4.11
* Re: [PATCH] drm: use seqlock for vblank time/count
2016-05-10 14:21 [PATCH] drm: use seqlock for vblank time/count Matthew Auld
2016-05-10 14:55 ` ✗ Ro.CI.BAT: failure for " Patchwork
2016-05-11 9:48 ` [PATCH] " Matthew Auld
@ 2016-05-24 12:20 ` Ville Syrjälä
2016-05-24 14:54 ` Daniel Vetter
2 siblings, 1 reply; 5+ messages in thread
From: Ville Syrjälä @ 2016-05-24 12:20 UTC (permalink / raw)
To: Matthew Auld; +Cc: Daniel Vetter, intel-gfx, dri-devel
On Tue, May 10, 2016 at 03:21:28PM +0100, Matthew Auld wrote:
> This patch aims to replace the roll-your-own seqlock implementation with
> a full-blown seqlock. We also remove the timestamp ring-buffer in favour
> of a single timestamp/count pair protected by a seqlock. In turn this
> means we can now increment the vblank counter freely without the need
> for clamping.
>
> v2:
> - reduce the scope of the seqlock, keeping vblank_time_lock
> - make the seqlock per vblank_crtc, so multiple readers aren't blocked by
> the writer
>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
LGTM
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
> drivers/gpu/drm/drm_irq.c | 90 +++++++----------------------------------------
> include/drm/drmP.h | 14 +++-----
> 2 files changed, 17 insertions(+), 87 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
> index 3c1a6f1..0e95100 100644
> --- a/drivers/gpu/drm/drm_irq.c
> +++ b/drivers/gpu/drm/drm_irq.c
> @@ -42,10 +42,6 @@
> #include <linux/vgaarb.h>
> #include <linux/export.h>
>
> -/* Access macro for slots in vblank timestamp ringbuffer. */
> -#define vblanktimestamp(dev, pipe, count) \
> - ((dev)->vblank[pipe].time[(count) % DRM_VBLANKTIME_RBSIZE])
> -
> /* Retry timestamp calculation up to 3 times to satisfy
> * drm_timestamp_precision before giving up.
> */
> @@ -82,29 +78,15 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
> struct timeval *t_vblank, u32 last)
> {
> struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
> - u32 tslot;
>
> assert_spin_locked(&dev->vblank_time_lock);
>
> vblank->last = last;
>
> - /* All writers hold the spinlock, but readers are serialized by
> - * the latching of vblank->count below.
> - */
> - tslot = vblank->count + vblank_count_inc;
> - vblanktimestamp(dev, pipe, tslot) = *t_vblank;
> -
> - /*
> - * vblank timestamp updates are protected on the write side with
> - * vblank_time_lock, but on the read side done locklessly using a
> - * sequence-lock on the vblank counter. Ensure correct ordering using
> - * memory barrriers. We need the barrier both before and also after the
> - * counter update to synchronize with the next timestamp write.
> - * The read-side barriers for this are in drm_vblank_count_and_time.
> - */
> - smp_wmb();
> + write_seqlock(&vblank->seqlock);
> + vblank->time = *t_vblank;
> vblank->count += vblank_count_inc;
> - smp_wmb();
> + write_sequnlock(&vblank->seqlock);
> }
>
> /**
> @@ -205,7 +187,7 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
> const struct timeval *t_old;
> u64 diff_ns;
>
> - t_old = &vblanktimestamp(dev, pipe, vblank->count);
> + t_old = &vblank->time;
> diff_ns = timeval_to_ns(&t_vblank) - timeval_to_ns(t_old);
>
> /*
> @@ -239,49 +221,6 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
> diff = 1;
> }
>
> - /*
> - * FIMXE: Need to replace this hack with proper seqlocks.
> - *
> - * Restrict the bump of the software vblank counter to a safe maximum
> - * value of +1 whenever there is the possibility that concurrent readers
> - * of vblank timestamps could be active at the moment, as the current
> - * implementation of the timestamp caching and updating is not safe
> - * against concurrent readers for calls to store_vblank() with a bump
> - * of anything but +1. A bump != 1 would very likely return corrupted
> - * timestamps to userspace, because the same slot in the cache could
> - * be concurrently written by store_vblank() and read by one of those
> - * readers without the read-retry logic detecting the collision.
> - *
> - * Concurrent readers can exist when we are called from the
> - * drm_vblank_off() or drm_vblank_on() functions and other non-vblank-
> - * irq callers. However, all those calls to us are happening with the
> - * vbl_lock locked to prevent drm_vblank_get(), so the vblank refcount
> - * can't increase while we are executing. Therefore a zero refcount at
> - * this point is safe for arbitrary counter bumps if we are called
> - * outside vblank irq, a non-zero count is not 100% safe. Unfortunately
> - * we must also accept a refcount of 1, as whenever we are called from
> - * drm_vblank_get() -> drm_vblank_enable() the refcount will be 1 and
> - * we must let that one pass through in order to not lose vblank counts
> - * during vblank irq off - which would completely defeat the whole
> - * point of this routine.
> - *
> - * Whenever we are called from vblank irq, we have to assume concurrent
> - * readers exist or can show up any time during our execution, even if
> - * the refcount is currently zero, as vblank irqs are usually only
> - * enabled due to the presence of readers, and because when we are called
> - * from vblank irq we can't hold the vbl_lock to protect us from sudden
> - * bumps in vblank refcount. Therefore also restrict bumps to +1 when
> - * called from vblank irq.
> - */
> - if ((diff > 1) && (atomic_read(&vblank->refcount) > 1 ||
> - (flags & DRM_CALLED_FROM_VBLIRQ))) {
> - DRM_DEBUG_VBL("clamping vblank bump to 1 on crtc %u: diffr=%u "
> - "refcount %u, vblirq %u\n", pipe, diff,
> - atomic_read(&vblank->refcount),
> - (flags & DRM_CALLED_FROM_VBLIRQ) != 0);
> - diff = 1;
> - }
> -
> DRM_DEBUG_VBL("updating vblank count on crtc %u:"
> " current=%u, diff=%u, hw=%u hw_last=%u\n",
> pipe, vblank->count, diff, cur_vblank, vblank->last);
> @@ -420,6 +359,7 @@ int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs)
> init_waitqueue_head(&vblank->queue);
> setup_timer(&vblank->disable_timer, vblank_disable_fn,
> (unsigned long)vblank);
> + seqlock_init(&vblank->seqlock);
> }
>
> DRM_INFO("Supports vblank timestamp caching Rev 2 (21.10.2013).\n");
> @@ -991,25 +931,19 @@ u32 drm_vblank_count_and_time(struct drm_device *dev, unsigned int pipe,
> struct timeval *vblanktime)
> {
> struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
> - int count = DRM_TIMESTAMP_MAXRETRIES;
> - u32 cur_vblank;
> + u32 vblank_count;
> + unsigned int seq;
>
> if (WARN_ON(pipe >= dev->num_crtcs))
> return 0;
>
> - /*
> - * Vblank timestamps are read lockless. To ensure consistency the vblank
> - * counter is rechecked and ordering is ensured using memory barriers.
> - * This works like a seqlock. The write-side barriers are in store_vblank.
> - */
> do {
> - cur_vblank = vblank->count;
> - smp_rmb();
> - *vblanktime = vblanktimestamp(dev, pipe, cur_vblank);
> - smp_rmb();
> - } while (cur_vblank != vblank->count && --count > 0);
> + seq = read_seqbegin(&vblank->seqlock);
> + vblank_count = vblank->count;
> + *vblanktime = vblank->time;
> + } while (read_seqretry(&vblank->seqlock, seq));
>
> - return cur_vblank;
> + return vblank_count;
> }
> EXPORT_SYMBOL(drm_vblank_count_and_time);
>
> diff --git a/include/drm/drmP.h b/include/drm/drmP.h
> index 360b2a7..9f33090 100644
> --- a/include/drm/drmP.h
> +++ b/include/drm/drmP.h
> @@ -52,6 +52,7 @@
> #include <linux/poll.h>
> #include <linux/ratelimit.h>
> #include <linux/sched.h>
> +#include <linux/seqlock.h>
> #include <linux/slab.h>
> #include <linux/types.h>
> #include <linux/vmalloc.h>
> @@ -392,11 +393,6 @@ struct drm_master {
> void *driver_priv;
> };
>
> -/* Size of ringbuffer for vblank timestamps. Just double-buffer
> - * in initial implementation.
> - */
> -#define DRM_VBLANKTIME_RBSIZE 2
> -
> /* Flags and return codes for get_vblank_timestamp() driver function. */
> #define DRM_CALLED_FROM_VBLIRQ 1
> #define DRM_VBLANKTIME_SCANOUTPOS_METHOD (1 << 0)
> @@ -725,10 +721,10 @@ struct drm_vblank_crtc {
> wait_queue_head_t queue; /**< VBLANK wait queue */
> struct timer_list disable_timer; /* delayed disable timer */
>
> - /* vblank counter, protected by dev->vblank_time_lock for writes */
> - u32 count;
> - /* vblank timestamps, protected by dev->vblank_time_lock for writes */
> - struct timeval time[DRM_VBLANKTIME_RBSIZE];
> + seqlock_t seqlock; /* protects vblank count and time */
> +
> + u32 count; /* vblank counter */
> + struct timeval time; /* vblank timestamp */
>
> atomic_t refcount; /* number of users of vblank interruptsper crtc */
> u32 last; /* protected by dev->vbl_lock, used */
> --
> 2.4.11
--
Ville Syrjälä
Intel OTC
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
* Re: [PATCH] drm: use seqlock for vblank time/count
2016-05-24 12:20 ` Ville Syrjälä
@ 2016-05-24 14:54 ` Daniel Vetter
0 siblings, 0 replies; 5+ messages in thread
From: Daniel Vetter @ 2016-05-24 14:54 UTC (permalink / raw)
To: Ville Syrjälä; +Cc: Daniel Vetter, intel-gfx, Matthew Auld, dri-devel
On Tue, May 24, 2016 at 03:20:54PM +0300, Ville Syrjälä wrote:
> On Tue, May 10, 2016 at 03:21:28PM +0100, Matthew Auld wrote:
> > This patch aims to replace the roll-your-own seqlock implementation with
> > a full-blown seqlock. We also remove the timestamp ring-buffer in favour
> > of a single timestamp/count pair protected by a seqlock. In turn this
> > means we can now increment the vblank counter freely without the need
> > for clamping.
> >
> > v2:
> > - reduce the scope of the seqlock, keeping vblank_time_lock
> > - make the seqlock per vblank_crtc, so multiple readers aren't blocked by
> > the writer
> >
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>
> LGTM
>
> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Applied to drm-misc, thanks.
-Daniel
>
> > ---
> > drivers/gpu/drm/drm_irq.c | 90 +++++++----------------------------------------
> > include/drm/drmP.h | 14 +++-----
> > 2 files changed, 17 insertions(+), 87 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
> > index 3c1a6f1..0e95100 100644
> > --- a/drivers/gpu/drm/drm_irq.c
> > +++ b/drivers/gpu/drm/drm_irq.c
> > @@ -42,10 +42,6 @@
> > #include <linux/vgaarb.h>
> > #include <linux/export.h>
> >
> > -/* Access macro for slots in vblank timestamp ringbuffer. */
> > -#define vblanktimestamp(dev, pipe, count) \
> > - ((dev)->vblank[pipe].time[(count) % DRM_VBLANKTIME_RBSIZE])
> > -
> > /* Retry timestamp calculation up to 3 times to satisfy
> > * drm_timestamp_precision before giving up.
> > */
> > @@ -82,29 +78,15 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
> > struct timeval *t_vblank, u32 last)
> > {
> > struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
> > - u32 tslot;
> >
> > assert_spin_locked(&dev->vblank_time_lock);
> >
> > vblank->last = last;
> >
> > - /* All writers hold the spinlock, but readers are serialized by
> > - * the latching of vblank->count below.
> > - */
> > - tslot = vblank->count + vblank_count_inc;
> > - vblanktimestamp(dev, pipe, tslot) = *t_vblank;
> > -
> > - /*
> > - * vblank timestamp updates are protected on the write side with
> > - * vblank_time_lock, but on the read side done locklessly using a
> > - * sequence-lock on the vblank counter. Ensure correct ordering using
> > - * memory barrriers. We need the barrier both before and also after the
> > - * counter update to synchronize with the next timestamp write.
> > - * The read-side barriers for this are in drm_vblank_count_and_time.
> > - */
> > - smp_wmb();
> > + write_seqlock(&vblank->seqlock);
> > + vblank->time = *t_vblank;
> > vblank->count += vblank_count_inc;
> > - smp_wmb();
> > + write_sequnlock(&vblank->seqlock);
> > }
> >
> > /**
> > @@ -205,7 +187,7 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
> > const struct timeval *t_old;
> > u64 diff_ns;
> >
> > - t_old = &vblanktimestamp(dev, pipe, vblank->count);
> > + t_old = &vblank->time;
> > diff_ns = timeval_to_ns(&t_vblank) - timeval_to_ns(t_old);
> >
> > /*
> > @@ -239,49 +221,6 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
> > diff = 1;
> > }
> >
> > - /*
> > - * FIMXE: Need to replace this hack with proper seqlocks.
> > - *
> > - * Restrict the bump of the software vblank counter to a safe maximum
> > - * value of +1 whenever there is the possibility that concurrent readers
> > - * of vblank timestamps could be active at the moment, as the current
> > - * implementation of the timestamp caching and updating is not safe
> > - * against concurrent readers for calls to store_vblank() with a bump
> > - * of anything but +1. A bump != 1 would very likely return corrupted
> > - * timestamps to userspace, because the same slot in the cache could
> > - * be concurrently written by store_vblank() and read by one of those
> > - * readers without the read-retry logic detecting the collision.
> > - *
> > - * Concurrent readers can exist when we are called from the
> > - * drm_vblank_off() or drm_vblank_on() functions and other non-vblank-
> > - * irq callers. However, all those calls to us are happening with the
> > - * vbl_lock locked to prevent drm_vblank_get(), so the vblank refcount
> > - * can't increase while we are executing. Therefore a zero refcount at
> > - * this point is safe for arbitrary counter bumps if we are called
> > - * outside vblank irq, a non-zero count is not 100% safe. Unfortunately
> > - * we must also accept a refcount of 1, as whenever we are called from
> > - * drm_vblank_get() -> drm_vblank_enable() the refcount will be 1 and
> > - * we must let that one pass through in order to not lose vblank counts
> > - * during vblank irq off - which would completely defeat the whole
> > - * point of this routine.
> > - *
> > - * Whenever we are called from vblank irq, we have to assume concurrent
> > - * readers exist or can show up any time during our execution, even if
> > - * the refcount is currently zero, as vblank irqs are usually only
> > - * enabled due to the presence of readers, and because when we are called
> > - * from vblank irq we can't hold the vbl_lock to protect us from sudden
> > - * bumps in vblank refcount. Therefore also restrict bumps to +1 when
> > - * called from vblank irq.
> > - */
> > - if ((diff > 1) && (atomic_read(&vblank->refcount) > 1 ||
> > - (flags & DRM_CALLED_FROM_VBLIRQ))) {
> > - DRM_DEBUG_VBL("clamping vblank bump to 1 on crtc %u: diffr=%u "
> > - "refcount %u, vblirq %u\n", pipe, diff,
> > - atomic_read(&vblank->refcount),
> > - (flags & DRM_CALLED_FROM_VBLIRQ) != 0);
> > - diff = 1;
> > - }
> > -
> > DRM_DEBUG_VBL("updating vblank count on crtc %u:"
> > " current=%u, diff=%u, hw=%u hw_last=%u\n",
> > pipe, vblank->count, diff, cur_vblank, vblank->last);
> > @@ -420,6 +359,7 @@ int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs)
> > init_waitqueue_head(&vblank->queue);
> > setup_timer(&vblank->disable_timer, vblank_disable_fn,
> > (unsigned long)vblank);
> > + seqlock_init(&vblank->seqlock);
> > }
> >
> > DRM_INFO("Supports vblank timestamp caching Rev 2 (21.10.2013).\n");
> > @@ -991,25 +931,19 @@ u32 drm_vblank_count_and_time(struct drm_device *dev, unsigned int pipe,
> > struct timeval *vblanktime)
> > {
> > struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
> > - int count = DRM_TIMESTAMP_MAXRETRIES;
> > - u32 cur_vblank;
> > + u32 vblank_count;
> > + unsigned int seq;
> >
> > if (WARN_ON(pipe >= dev->num_crtcs))
> > return 0;
> >
> > - /*
> > - * Vblank timestamps are read lockless. To ensure consistency the vblank
> > - * counter is rechecked and ordering is ensured using memory barriers.
> > - * This works like a seqlock. The write-side barriers are in store_vblank.
> > - */
> > do {
> > - cur_vblank = vblank->count;
> > - smp_rmb();
> > - *vblanktime = vblanktimestamp(dev, pipe, cur_vblank);
> > - smp_rmb();
> > - } while (cur_vblank != vblank->count && --count > 0);
> > + seq = read_seqbegin(&vblank->seqlock);
> > + vblank_count = vblank->count;
> > + *vblanktime = vblank->time;
> > + } while (read_seqretry(&vblank->seqlock, seq));
> >
> > - return cur_vblank;
> > + return vblank_count;
> > }
> > EXPORT_SYMBOL(drm_vblank_count_and_time);
> >
> > diff --git a/include/drm/drmP.h b/include/drm/drmP.h
> > index 360b2a7..9f33090 100644
> > --- a/include/drm/drmP.h
> > +++ b/include/drm/drmP.h
> > @@ -52,6 +52,7 @@
> > #include <linux/poll.h>
> > #include <linux/ratelimit.h>
> > #include <linux/sched.h>
> > +#include <linux/seqlock.h>
> > #include <linux/slab.h>
> > #include <linux/types.h>
> > #include <linux/vmalloc.h>
> > @@ -392,11 +393,6 @@ struct drm_master {
> > void *driver_priv;
> > };
> >
> > -/* Size of ringbuffer for vblank timestamps. Just double-buffer
> > - * in initial implementation.
> > - */
> > -#define DRM_VBLANKTIME_RBSIZE 2
> > -
> > /* Flags and return codes for get_vblank_timestamp() driver function. */
> > #define DRM_CALLED_FROM_VBLIRQ 1
> > #define DRM_VBLANKTIME_SCANOUTPOS_METHOD (1 << 0)
> > @@ -725,10 +721,10 @@ struct drm_vblank_crtc {
> > wait_queue_head_t queue; /**< VBLANK wait queue */
> > struct timer_list disable_timer; /* delayed disable timer */
> >
> > - /* vblank counter, protected by dev->vblank_time_lock for writes */
> > - u32 count;
> > - /* vblank timestamps, protected by dev->vblank_time_lock for writes */
> > - struct timeval time[DRM_VBLANKTIME_RBSIZE];
> > + seqlock_t seqlock; /* protects vblank count and time */
> > +
> > + u32 count; /* vblank counter */
> > + struct timeval time; /* vblank timestamp */
> >
> > atomic_t refcount; /* number of users of vblank interruptsper crtc */
> > u32 last; /* protected by dev->vbl_lock, used */
> > --
> > 2.4.11
>
> --
> Ville Syrjälä
> Intel OTC
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
2016-05-10 14:21 [PATCH] drm: use seqlock for vblank time/count Matthew Auld
2016-05-10 14:55 ` ✗ Ro.CI.BAT: failure for " Patchwork
2016-05-11 9:48 ` [PATCH] " Matthew Auld
2016-05-24 12:20 ` Ville Syrjälä
2016-05-24 14:54 ` Daniel Vetter