* [PATCH 1/3] arch: make __mutex_fastpath_lock_retval return whether fastpath succeeded or not.
@ 2013-02-07 15:18 Maarten Lankhorst
  2013-02-07 15:18 ` Maarten Lankhorst
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Maarten Lankhorst @ 2013-02-07 15:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, a.p.zijlstra, daniel.vetter, x86, dri-devel,
	linaro-mm-sig, robclark, tglx, mingo, linux-media

This will allow me to call functions that take multiple arguments if the fastpath fails.
This is required to support ticket mutexes, because they need to be able to pass an
extra argument to the fail function.

Originally I duplicated the functions by adding __mutex_fastpath_lock_retval_arg.
That ended up being just a copy of the existing function, so having the caller test
whether the fastpath succeeded turned out to be the better approach.

This also cleans up the reservation mutex patch somewhat: it can use atomic_set
instead of atomic_xchg, and it becomes easier to detect whether the wrong unlock
function was previously used.
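
For illustration, the caller-side pattern this results in (see the
kernel/mutex.c hunk below) is roughly:

    int __sched mutex_lock_interruptible(struct mutex *lock)
    {
            int ret;

            might_sleep();
            ret = __mutex_fastpath_lock_retval(&lock->count);
            if (likely(!ret)) {
                    mutex_set_owner(lock);
                    return 0;
            }
            /* fastpath failed; the slowpath can take extra arguments */
            return __mutex_lock_interruptible_slowpath(lock);
    }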

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 arch/ia64/include/asm/mutex.h    |   10 ++++------
 arch/powerpc/include/asm/mutex.h |   10 ++++------
 arch/sh/include/asm/mutex-llsc.h |    4 ++--
 arch/x86/include/asm/mutex_32.h  |   11 ++++-------
 arch/x86/include/asm/mutex_64.h  |   11 ++++-------
 include/asm-generic/mutex-dec.h  |   10 ++++------
 include/asm-generic/mutex-null.h |    2 +-
 include/asm-generic/mutex-xchg.h |   10 ++++------
 kernel/mutex.c                   |   32 ++++++++++++++------------------
 9 files changed, 41 insertions(+), 59 deletions(-)

diff --git a/arch/ia64/include/asm/mutex.h b/arch/ia64/include/asm/mutex.h
index bed73a6..f41e66d 100644
--- a/arch/ia64/include/asm/mutex.h
+++ b/arch/ia64/include/asm/mutex.h
@@ -29,17 +29,15 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
  *  __mutex_fastpath_lock_retval - try to take the lock by moving the count
  *                                 from 1 to a 0 value
  *  @count: pointer of type atomic_t
- *  @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(ia64_fetchadd4_acq(count, -1) != 1))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }
 
diff --git a/arch/powerpc/include/asm/mutex.h b/arch/powerpc/include/asm/mutex.h
index 5399f7e..127ab23 100644
--- a/arch/powerpc/include/asm/mutex.h
+++ b/arch/powerpc/include/asm/mutex.h
@@ -82,17 +82,15 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
  *  __mutex_fastpath_lock_retval - try to take the lock by moving the count
  *                                 from 1 to a 0 value
  *  @count: pointer of type atomic_t
- *  @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(__mutex_dec_return_lock(count) < 0))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }
 
diff --git a/arch/sh/include/asm/mutex-llsc.h b/arch/sh/include/asm/mutex-llsc.h
index 090358a..dad29b6 100644
--- a/arch/sh/include/asm/mutex-llsc.h
+++ b/arch/sh/include/asm/mutex-llsc.h
@@ -37,7 +37,7 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
 }
 
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	int __done, __res;
 
@@ -51,7 +51,7 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
 		: "t");
 
 	if (unlikely(!__done || __res != 0))
-		__res = fail_fn(count);
+		__res = -1;
 
 	return __res;
 }
diff --git a/arch/x86/include/asm/mutex_32.h b/arch/x86/include/asm/mutex_32.h
index 03f90c8..b7f6b34 100644
--- a/arch/x86/include/asm/mutex_32.h
+++ b/arch/x86/include/asm/mutex_32.h
@@ -42,17 +42,14 @@ do {								\
  *  __mutex_fastpath_lock_retval - try to take the lock by moving the count
  *                                 from 1 to a 0 value
  *  @count: pointer of type atomic_t
- *  @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if it
- * wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
-static inline int __mutex_fastpath_lock_retval(atomic_t *count,
-					       int (*fail_fn)(atomic_t *))
+static inline int __mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	else
 		return 0;
 }
diff --git a/arch/x86/include/asm/mutex_64.h b/arch/x86/include/asm/mutex_64.h
index 68a87b0..2c543ff 100644
--- a/arch/x86/include/asm/mutex_64.h
+++ b/arch/x86/include/asm/mutex_64.h
@@ -37,17 +37,14 @@ do {								\
  *  __mutex_fastpath_lock_retval - try to take the lock by moving the count
  *                                 from 1 to a 0 value
  *  @count: pointer of type atomic_t
- *  @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
-static inline int __mutex_fastpath_lock_retval(atomic_t *count,
-					       int (*fail_fn)(atomic_t *))
+static inline int __mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	else
 		return 0;
 }
diff --git a/include/asm-generic/mutex-dec.h b/include/asm-generic/mutex-dec.h
index f104af7..d4f9fb4 100644
--- a/include/asm-generic/mutex-dec.h
+++ b/include/asm-generic/mutex-dec.h
@@ -28,17 +28,15 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
  *  __mutex_fastpath_lock_retval - try to take the lock by moving the count
  *                                 from 1 to a 0 value
  *  @count: pointer of type atomic_t
- *  @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }
 
diff --git a/include/asm-generic/mutex-null.h b/include/asm-generic/mutex-null.h
index e1bbbc7..efd6206 100644
--- a/include/asm-generic/mutex-null.h
+++ b/include/asm-generic/mutex-null.h
@@ -11,7 +11,7 @@
 #define _ASM_GENERIC_MUTEX_NULL_H
 
 #define __mutex_fastpath_lock(count, fail_fn)		fail_fn(count)
-#define __mutex_fastpath_lock_retval(count, fail_fn)	fail_fn(count)
+#define __mutex_fastpath_lock_retval(count)		(-1)
 #define __mutex_fastpath_unlock(count, fail_fn)		fail_fn(count)
 #define __mutex_fastpath_trylock(count, fail_fn)	fail_fn(count)
 #define __mutex_slowpath_needs_to_unlock()		1
diff --git a/include/asm-generic/mutex-xchg.h b/include/asm-generic/mutex-xchg.h
index c04e0db..f169ec0 100644
--- a/include/asm-generic/mutex-xchg.h
+++ b/include/asm-generic/mutex-xchg.h
@@ -39,18 +39,16 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
  *  __mutex_fastpath_lock_retval - try to take the lock by moving the count
  *                                 from 1 to a 0 value
  *  @count: pointer of type atomic_t
- *  @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if it
- * wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_xchg(count, 0) != 1))
 		if (likely(atomic_xchg(count, -1) != 1))
-			return fail_fn(count);
+			return -1;
 	return 0;
 }
 
diff --git a/kernel/mutex.c b/kernel/mutex.c
index a307cc9..5ac4522 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -350,10 +350,10 @@ __mutex_unlock_slowpath(atomic_t *lock_count)
  * mutex_lock_interruptible() and mutex_trylock().
  */
 static noinline int __sched
-__mutex_lock_killable_slowpath(atomic_t *lock_count);
+__mutex_lock_killable_slowpath(struct mutex *lock);
 
 static noinline int __sched
-__mutex_lock_interruptible_slowpath(atomic_t *lock_count);
+__mutex_lock_interruptible_slowpath(struct mutex *lock);
 
 /**
  * mutex_lock_interruptible - acquire the mutex, interruptible
@@ -371,12 +371,12 @@ int __sched mutex_lock_interruptible(struct mutex *lock)
 	int ret;
 
 	might_sleep();
-	ret =  __mutex_fastpath_lock_retval
-			(&lock->count, __mutex_lock_interruptible_slowpath);
-	if (!ret)
+	ret =  __mutex_fastpath_lock_retval(&lock->count);
+	if (likely(!ret)) {
 		mutex_set_owner(lock);
-
-	return ret;
+		return 0;
+	} else
+		return __mutex_lock_interruptible_slowpath(lock);
 }
 
 EXPORT_SYMBOL(mutex_lock_interruptible);
@@ -386,12 +386,12 @@ int __sched mutex_lock_killable(struct mutex *lock)
 	int ret;
 
 	might_sleep();
-	ret = __mutex_fastpath_lock_retval
-			(&lock->count, __mutex_lock_killable_slowpath);
-	if (!ret)
+	ret = __mutex_fastpath_lock_retval(&lock->count);
+	if (likely(!ret)) {
 		mutex_set_owner(lock);
-
-	return ret;
+		return 0;
+	} else
+		return __mutex_lock_killable_slowpath(lock);
 }
 EXPORT_SYMBOL(mutex_lock_killable);
 
@@ -404,18 +404,14 @@ __mutex_lock_slowpath(atomic_t *lock_count)
 }
 
 static noinline int __sched
-__mutex_lock_killable_slowpath(atomic_t *lock_count)
+__mutex_lock_killable_slowpath(struct mutex *lock)
 {
-	struct mutex *lock = container_of(lock_count, struct mutex, count);
-
 	return __mutex_lock_common(lock, TASK_KILLABLE, 0, NULL, _RET_IP_);
 }
 
 static noinline int __sched
-__mutex_lock_interruptible_slowpath(atomic_t *lock_count)
+__mutex_lock_interruptible_slowpath(struct mutex *lock)
 {
-	struct mutex *lock = container_of(lock_count, struct mutex, count);
-
 	return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_IP_);
 }
 #endif

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 2/3] mutex: add support for reservation style locks
  2013-02-07 15:18 [PATCH 1/3] arch: make __mutex_fastpath_lock_retval return whether fastpath succeeded or not Maarten Lankhorst
  2013-02-07 15:18 ` Maarten Lankhorst
@ 2013-02-07 15:18 ` Maarten Lankhorst
  2013-02-14 10:22   ` [Linaro-mm-sig] " Arnd Bergmann
  2013-02-07 15:18 ` [PATCH 3/3] reservation: Add tests to lib/locking-selftest.c Maarten Lankhorst
  2 siblings, 1 reply; 10+ messages in thread
From: Maarten Lankhorst @ 2013-02-07 15:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, a.p.zijlstra, daniel.vetter, x86, dri-devel,
	linaro-mm-sig, robclark, tglx, mingo, linux-media

GPUs do operations that commonly involve many buffers.  Those buffers
can be shared across contexts/processes, exist in different memory
domains (for example VRAM vs system memory), and so on.  And with
PRIME / dmabuf, they can even be shared across devices.  So there are
a handful of situations where the driver needs to wait for buffers to
become ready.  If you think about this in terms of waiting on a buffer
mutex for it to become available, this presents a problem because
there is no way to guarantee that buffers appear in an execbuf/batch in
the same order in all contexts.  That is directly under control of
userspace, and a result of the sequence of GL calls that an application
makes, which results in the potential for deadlock.  The problem gets
more complex when you consider that the kernel may need to migrate the
buffer(s) into VRAM before the GPU operates on the buffer(s), which
may in turn require evicting some other buffers (and you don't want to
evict other buffers which are already queued up to the GPU), but for a
simplified understanding of the problem you can ignore this.

The algorithm that TTM came up with for dealing with this problem is
quite simple.  For each group of buffers (execbuf) that need to be
locked, the caller would be assigned a unique reservation_id, from a
global counter.  In case of deadlock in the process of locking all the
buffers associated with an execbuf, the one with the lowest
reservation_id wins, and the one with the higher reservation_id
unlocks all of the buffers that it has already locked, and then tries
again.

How it is used:
---------------

A very simplified version:

    int lock_execbuf(execbuf)
    {
        struct buf *res_buf = NULL;

        /* acquiring locks, before queuing up to GPU: */
        seqno = assign_global_seqno();

    retry:
        for (buf in execbuf->buffers) {
            if (buf == res_buf) {
                res_buf = NULL;
                continue;
            }
            ret = mutex_reserve_lock(&buf->lock, seqno);
            if (ret < 0)
                goto err;
        }

        /* now everything is good to go, submit job to GPU: */
        ...

        return 0;

    err:
        for (all buf2 before buf in execbuf->buffers)
            mutex_unreserve_unlock(&buf2->lock);
        if (res_buf)
            mutex_unreserve_unlock(&res_buf->lock);

        if (ret == -EAGAIN) {
            /* we lost out in a seqno race, lock and retry.. */
            mutex_reserve_lock_slow(&buf->lock, seqno);
            res_buf = buf;
            goto retry;
        }

        return ret;
    }

    int unlock_execbuf(execbuf)
    {
        /* when GPU is finished; */
        for (buf in execbuf->buffers)
            mutex_unreserve_unlock(&buf->lock);
    }

Functions:
----------

mutex_reserve_lock, and mutex_reserve_lock_interruptible:
  Lock a buffer with a reservation_id set. reservation_id must not be
  set to 0, since this is a special value that means no reservation_id.

  If no reservation_id is currently set on the mutex, or the caller's
  reservation_id is older than the one that is set, the behavior is to
  wait for the lock as usual.  However, if the caller's reservation_id
  is newer than the current one, -EAGAIN will be returned.

  These functions will return -EDEADLK instead of -EAGAIN if the
  caller's reservation_id is the same as the reservation_id the mutex
  is already locked with, since in that case you presumably attempted
  to lock the same lock twice.
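
  Condensed from __mutex_lock_check_reserve in the patch below, the check
  boils down to something like this (check_reservation is just a name
  used for this sketch):

      static int check_reservation(struct ticket_mutex *lock,
                                   unsigned long reservation_id)
      {
              unsigned long cur_id = atomic_long_read(&lock->reservation_id);

              if (!cur_id)
                      return 0;           /* no reservation set, just wait */
              if (reservation_id == cur_id)
                      return -EDEADLK;    /* same object locked twice */
              if (reservation_id - cur_id <= LONG_MAX)
                      return -EAGAIN;     /* caller is newer: back off */
              return 0;                   /* caller is older: keep waiting */
      }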

mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow:
  Similar to mutex_reserve_lock, except it won't back off with -EAGAIN.
  This is useful when mutex_reserve_lock failed with -EAGAIN, and you
  unreserved all buffers so no deadlock can occur.

mutex_unreserve_unlock:
   Unlock a buffer reserved with one of the mutex_reserve_*lock* calls.

Missing at the moment, maybe TODO?
  * Check if lockdep warns if you unlock a lock that other locks were nested
    to.
    - spin_lock(m);
      spin_lock_nest_lock(a, m);
      spin_unlock(m);
      spin_unlock(a);
      It would be nice if this would give a splat on spin_unlock(m),
      I'm not 100% sure if it does right now, though.
  * In the *_slow calls, maybe add a check to ensure no other locks of the same
    lockdep class are nested in the lock we have to nest to?
    - This is making sure that mutex_unreserve_unlock has been called on all other locks.

Design:
  I chose to have ticket_mutex encapsulate struct mutex, so the extra memory usage and
  the atomic set on init only happen when you deliberately create a ticket lock.

  Since the mutexes are mostly meant to protect buffer object serialization in ttm, not
  much contention is expected. I could be slightly smarter with wakeups, but this would
  be at the expense of adding a field to struct mutex_waiter. This would add
  overhead to all cases where normal mutexes are used, and ticket_mutexes are less
  performance sensitive anyway since they only protect buffer objects. As a result
  I chose not to do this.

  I needed this in kernel/mutex.c because of the extensions to __mutex_lock_common, which are
  hopefully optimized away for all normal paths.

  It is not illegal to use mutex_lock and mutex_unlock on ticket mutexes. This will work,
  as long as you don't mix lock/unlock calls. This is useful if you want to lock only a single
  buffer.
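
  For example (a sketch mirroring the selftests in patch 3; "obj" is a
  hypothetical object embedding a ticket_mutex named "lock"):

      mutex_lock(&obj->lock.base);    /* plain lock on the embedded mutex */
      /* ... touch the single buffer ... */
      mutex_unlock(&obj->lock.base);  /* not mutex_unreserve_unlock */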

  All the mutex_reserve calls are nested into another lock. The idea is that the
  seqno ticket you use acts as a lockdep class that you have locked as well. This
  prevents lockdep false positives when locking 2 objects of the same class, because
  they are nested inside the seqno ticket. There are extensive tests for this in the
  patch that introduces locking tests for reservation objects and reservation tickets.
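
  Concretely, locking two objects under the same ticket looks roughly like
  this (sketch only; the reservation_object/reservation_ticket types and
  their init/fini helpers are taken from the selftests in patch 3):

      struct reservation_ticket t;
      int ret;

      reservation_ticket_init(&t);

      ret = mutex_reserve_lock(&o1->lock, &t, t.seqno);
      /* handle -EAGAIN / -EDEADLK ... */
      ret = mutex_reserve_lock(&o2->lock, &t, t.seqno);
      /* handle -EAGAIN / -EDEADLK ... */

      mutex_unreserve_unlock(&o2->lock);
      mutex_unreserve_unlock(&o1->lock);
      reservation_ticket_fini(&t);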

Changes since RFC patch v1:
 - Updated to use atomic_long instead of atomic, since the reservation_id was a long.
 - added mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow
 - removed mutex_locked_set_reservation_id (or w/e it was called)
Changes since RFC patch v2:
 - remove use of __mutex_lock_retval_arg, add warnings when using wrong combination of
   mutex_(,reserve_)lock/unlock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 include/linux/mutex.h |   86 +++++++++++++
 kernel/mutex.c        |  324 +++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 394 insertions(+), 16 deletions(-)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 9121595..602c247 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -62,6 +62,11 @@ struct mutex {
 #endif
 };
 
+struct ticket_mutex {
+	struct mutex base;
+	atomic_long_t reservation_id;
+};
+
 /*
  * This is the control structure for tasks blocked on mutex,
  * which resides on the blocked task's kernel stack:
@@ -109,12 +114,24 @@ static inline void mutex_destroy(struct mutex *lock) {}
 		__DEBUG_MUTEX_INITIALIZER(lockname) \
 		__DEP_MAP_MUTEX_INITIALIZER(lockname) }
 
+#define __TICKET_MUTEX_INITIALIZER(lockname) \
+		{ .base = __MUTEX_INITIALIZER(lockname.base) \
+		, .reservation_id = ATOMIC_LONG_INIT(0) }
+
 #define DEFINE_MUTEX(mutexname) \
 	struct mutex mutexname = __MUTEX_INITIALIZER(mutexname)
 
 extern void __mutex_init(struct mutex *lock, const char *name,
 			 struct lock_class_key *key);
 
+static inline void __ticket_mutex_init(struct ticket_mutex *lock,
+				       const char *name,
+				       struct lock_class_key *key)
+{
+	__mutex_init(&lock->base, name, key);
+	atomic_long_set(&lock->reservation_id, 0);
+}
+
 /**
  * mutex_is_locked - is the mutex locked
  * @lock: the mutex to be queried
@@ -133,26 +150,91 @@ static inline int mutex_is_locked(struct mutex *lock)
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
 extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
+
 extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock,
 					unsigned int subclass);
 extern int __must_check mutex_lock_killable_nested(struct mutex *lock,
 					unsigned int subclass);
 
+extern int __must_check _mutex_reserve_lock(struct ticket_mutex *lock,
+					struct lockdep_map *nest_lock,
+					unsigned long reservation_id);
+
+extern int __must_check _mutex_reserve_lock_interruptible(struct ticket_mutex *,
+					struct lockdep_map *nest_lock,
+					unsigned long reservation_id);
+
+extern void _mutex_reserve_lock_slow(struct ticket_mutex *lock,
+				     struct lockdep_map *nest_lock,
+				     unsigned long reservation_id);
+
+extern int __must_check _mutex_reserve_lock_intr_slow(struct ticket_mutex *,
+					struct lockdep_map *nest_lock,
+					unsigned long reservation_id);
+
 #define mutex_lock(lock) mutex_lock_nested(lock, 0)
 #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lock, 0)
 #define mutex_lock_killable(lock) mutex_lock_killable_nested(lock, 0)
 
 #define mutex_lock_nest_lock(lock, nest_lock)				\
 do {									\
-	typecheck(struct lockdep_map *, &(nest_lock)->dep_map);		\
+	typecheck(struct lockdep_map *, &(nest_lock)->dep_map);	\
 	_mutex_lock_nest_lock(lock, &(nest_lock)->dep_map);		\
 } while (0)
 
+#define mutex_reserve_lock(lock, nest_lock, reservation_id)		\
+({									\
+	typecheck(struct lockdep_map *, &(nest_lock)->dep_map);	\
+	_mutex_reserve_lock(lock, &(nest_lock)->dep_map, reservation_id);	\
+})
+
+#define mutex_reserve_lock_interruptible(lock, nest_lock, reservation_id)	\
+({									\
+	typecheck(struct lockdep_map *, &(nest_lock)->dep_map);	\
+	_mutex_reserve_lock_interruptible(lock, &(nest_lock)->dep_map,	\
+					   reservation_id);		\
+})
+
+#define mutex_reserve_lock_slow(lock, nest_lock, reservation_id)	\
+do {									\
+	typecheck(struct lockdep_map *, &(nest_lock)->dep_map);	\
+	_mutex_reserve_lock_slow(lock, &(nest_lock)->dep_map, reservation_id);	\
+} while (0)
+
+#define mutex_reserve_lock_intr_slow(lock, nest_lock, reservation_id)	\
+({									\
+	typecheck(struct lockdep_map *, &(nest_lock)->dep_map);	\
+	_mutex_reserve_lock_intr_slow(lock, &(nest_lock)->dep_map,	\
+				      reservation_id);			\
+})
+
 #else
 extern void mutex_lock(struct mutex *lock);
 extern int __must_check mutex_lock_interruptible(struct mutex *lock);
 extern int __must_check mutex_lock_killable(struct mutex *lock);
 
+extern int __must_check _mutex_reserve_lock(struct ticket_mutex *lock,
+					    unsigned long reservation_id);
+extern int __must_check _mutex_reserve_lock_interruptible(struct ticket_mutex *,
+						unsigned long reservation_id);
+
+extern void _mutex_reserve_lock_slow(struct ticket_mutex *lock,
+				     unsigned long reservation_id);
+extern int __must_check _mutex_reserve_lock_intr_slow(struct ticket_mutex *,
+						unsigned long reservation_id);
+
+#define mutex_reserve_lock(lock, nest_lock, reservation_id)		\
+	_mutex_reserve_lock(lock, reservation_id)
+
+#define mutex_reserve_lock_interruptible(lock, nest_lock, reservation_id)	\
+	_mutex_reserve_lock_interruptible(lock, reservation_id)
+
+#define mutex_reserve_lock_slow(lock, nest_lock, reservation_id)	\
+	_mutex_reserve_lock_slow(lock, reservation_id)
+
+#define mutex_reserve_lock_intr_slow(lock, nest_lock, reservation_id)	\
+	_mutex_reserve_lock_intr_slow(lock, reservation_id)
+
 # define mutex_lock_nested(lock, subclass) mutex_lock(lock)
 # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock)
 # define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(lock)
@@ -167,6 +249,8 @@ extern int __must_check mutex_lock_killable(struct mutex *lock);
  */
 extern int mutex_trylock(struct mutex *lock);
 extern void mutex_unlock(struct mutex *lock);
+extern void mutex_unreserve_unlock(struct ticket_mutex *lock);
+
 extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
 
 #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
diff --git a/kernel/mutex.c b/kernel/mutex.c
index 5ac4522..432948c 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -126,16 +126,116 @@ void __sched mutex_unlock(struct mutex *lock)
 
 EXPORT_SYMBOL(mutex_unlock);
 
+/**
+ * mutex_unreserve_unlock - release the mutex
+ * @lock: the mutex to be released
+ *
+ * Unlock a mutex that has been locked by this task previously
+ * with _mutex_reserve_lock*.
+ *
+ * This function must not be used in interrupt context. Unlocking
+ * of an unlocked mutex is not allowed.
+ */
+void __sched mutex_unreserve_unlock(struct ticket_mutex *lock)
+{
+	/*
+	 * The unlocking fastpath is the 0->1 transition from 'locked'
+	 * into 'unlocked' state:
+	 */
+
+	/*
+	 * mark mutex as no longer part of a reservation, next
+	 * locker can set this again
+	 */
+#ifdef CONFIG_DEBUG_MUTEXES
+	unsigned long rid;
+
+	rid = atomic_long_xchg(&lock->reservation_id, 0);
+
+	/*
+	 * If this WARN_ON triggers, you used mutex_lock to acquire,
+	 * but released with mutex_unreserve_unlock in this call.
+	 */
+	DEBUG_LOCKS_WARN_ON(!rid);
+#else
+	atomic_long_set(&lock->reservation_id, 0);
+
+	/*
+	 * When debugging is enabled we must not clear the owner before time,
+	 * the slow path will always be taken, and that clears the owner field
+	 * after verifying that it was indeed current.
+	 */
+	mutex_clear_owner(&lock->base);
+#endif
+	__mutex_fastpath_unlock(&lock->base.count, __mutex_unlock_slowpath);
+}
+EXPORT_SYMBOL(mutex_unreserve_unlock);
+
+static inline int __sched
+__mutex_lock_check_reserve(struct mutex *lock, unsigned long reservation_id)
+{
+	struct ticket_mutex *m = container_of(lock, struct ticket_mutex, base);
+	unsigned long cur_id;
+
+	cur_id = atomic_long_read(&m->reservation_id);
+	if (!cur_id)
+		return 0;
+
+	if (unlikely(reservation_id == cur_id))
+		return -EDEADLK;
+
+	if (unlikely(reservation_id - cur_id <= LONG_MAX))
+		return -EAGAIN;
+
+	return 0;
+}
+
+/*
+ * after acquiring lock with fastpath or when we lost out in contested
+ * slowpath, set reservation_id and wake up any waiters so they can recheck.
+ *
+ * This function is never called when CONFIG_DEBUG_LOCK_ALLOC is set,
+ * as the fastpath and opportunistic spinning are disabled in that case.
+ */
+static __always_inline void
+mutex_set_reservation_fastpath(struct ticket_mutex *lock,
+			       unsigned long reservation_id)
+{
+	unsigned long flags;
+	struct mutex_waiter *cur;
+
+	atomic_long_set(&lock->reservation_id, reservation_id);
+
+	/*
+	 * Check if lock is contended, if not there is nobody to wake up
+	 */
+	if (likely(atomic_read(&lock->base.count) == 0))
+		return;
+
+	/*
+	 * Uh oh, we raced in fastpath, wake up everyone in this case,
+	 * so they can see the new reservation_id
+	 */
+	spin_lock_mutex(&lock->base.wait_lock, flags);
+	list_for_each_entry(cur, &lock->base.wait_list, list) {
+		debug_mutex_wake_waiter(&lock->base, cur);
+		wake_up_process(cur->task);
+	}
+	spin_unlock_mutex(&lock->base.wait_lock, flags);
+}
+
 /*
  * Lock a mutex (possibly interruptible), slowpath:
  */
 static inline int __sched
 __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
-		    struct lockdep_map *nest_lock, unsigned long ip)
+		    struct lockdep_map *nest_lock, unsigned long ip,
+		    unsigned long reservation_id, bool res_slow)
 {
 	struct task_struct *task = current;
 	struct mutex_waiter waiter;
 	unsigned long flags;
+	int ret;
 
 	preempt_disable();
 	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
@@ -162,6 +262,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	for (;;) {
 		struct task_struct *owner;
 
+		if (!__builtin_constant_p(reservation_id) && !res_slow) {
+			ret = __mutex_lock_check_reserve(lock, reservation_id);
+			if (ret)
+				goto err_nowait;
+		}
+
 		/*
 		 * If there's an owner, wait for it to either
 		 * release the lock or go to sleep.
@@ -172,6 +278,13 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 
 		if (atomic_cmpxchg(&lock->count, 1, 0) == 1) {
 			lock_acquired(&lock->dep_map, ip);
+			if (res_slow) {
+				struct ticket_mutex *m;
+				m = container_of(lock, struct ticket_mutex, base);
+
+				mutex_set_reservation_fastpath(m, reservation_id);
+			}
+
 			mutex_set_owner(lock);
 			preempt_enable();
 			return 0;
@@ -227,15 +340,16 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		 * TASK_UNINTERRUPTIBLE case.)
 		 */
 		if (unlikely(signal_pending_state(state, task))) {
-			mutex_remove_waiter(lock, &waiter,
-					    task_thread_info(task));
-			mutex_release(&lock->dep_map, 1, ip);
-			spin_unlock_mutex(&lock->wait_lock, flags);
+			ret = -EINTR;
+			goto err;
+		}
 
-			debug_mutex_free_waiter(&waiter);
-			preempt_enable();
-			return -EINTR;
+		if (!__builtin_constant_p(reservation_id) && !res_slow) {
+			ret = __mutex_lock_check_reserve(lock, reservation_id);
+			if (ret)
+				goto err;
 		}
+
 		__set_task_state(task, state);
 
 		/* didn't get the lock, go to sleep: */
@@ -250,6 +364,41 @@ done:
 	mutex_remove_waiter(lock, &waiter, current_thread_info());
 	mutex_set_owner(lock);
 
+	if (!__builtin_constant_p(reservation_id)) {
+		struct ticket_mutex *m = container_of(lock,
+						      struct ticket_mutex,
+						      base);
+		struct mutex_waiter *cur;
+
+		/*
+		 * this should get optimized out for the common case,
+		 * and is only important for _mutex_reserve_lock
+		 */
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+		unsigned long old_id;
+		old_id = atomic_long_xchg(&m->reservation_id, reservation_id);
+
+		/*
+		 * If this WARN_ON triggers, the previous holder acquired this
+		 * lock with mutex_reserve_lock but released it with a plain
+		 * mutex_unlock, leaving its reservation_id set.
+		 */
+		DEBUG_LOCKS_WARN_ON(old_id);
+#else
+		atomic_long_set(&m->reservation_id, reservation_id);
+#endif
+
+		/*
+		 * give any possible sleeping processes the chance to wake up,
+		 * so they can recheck if they have to back off from
+		 * reservations
+		 */
+		list_for_each_entry(cur, &lock->wait_list, list) {
+			debug_mutex_wake_waiter(lock, cur);
+			wake_up_process(cur->task);
+		}
+	}
+
 	/* set it to 0 if there are no waiters left: */
 	if (likely(list_empty(&lock->wait_list)))
 		atomic_set(&lock->count, 0);
@@ -260,6 +409,19 @@ done:
 	preempt_enable();
 
 	return 0;
+
+err:
+	mutex_remove_waiter(lock, &waiter, task_thread_info(task));
+	spin_unlock_mutex(&lock->wait_lock, flags);
+	debug_mutex_free_waiter(&waiter);
+
+#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
+err_nowait:
+#endif
+	mutex_release(&lock->dep_map, 1, ip);
+
+	preempt_enable();
+	return ret;
 }
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
@@ -267,7 +429,8 @@ void __sched
 mutex_lock_nested(struct mutex *lock, unsigned int subclass)
 {
 	might_sleep();
-	__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, subclass, NULL, _RET_IP_);
+	__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
+			    subclass, NULL, _RET_IP_, 0, 0);
 }
 
 EXPORT_SYMBOL_GPL(mutex_lock_nested);
@@ -276,7 +439,8 @@ void __sched
 _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
 {
 	might_sleep();
-	__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_);
+	__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
+			    0, nest, _RET_IP_, 0, 0);
 }
 
 EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock);
@@ -285,7 +449,8 @@ int __sched
 mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass)
 {
 	might_sleep();
-	return __mutex_lock_common(lock, TASK_KILLABLE, subclass, NULL, _RET_IP_);
+	return __mutex_lock_common(lock, TASK_KILLABLE,
+				   subclass, NULL, _RET_IP_, 0, 0);
 }
 EXPORT_SYMBOL_GPL(mutex_lock_killable_nested);
 
@@ -294,10 +459,63 @@ mutex_lock_interruptible_nested(struct mutex *lock, unsigned int subclass)
 {
 	might_sleep();
 	return __mutex_lock_common(lock, TASK_INTERRUPTIBLE,
-				   subclass, NULL, _RET_IP_);
+				   subclass, NULL, _RET_IP_, 0, 0);
 }
 
 EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested);
+
+int __sched
+_mutex_reserve_lock(struct ticket_mutex *lock, struct lockdep_map *nest,
+		    unsigned long reservation_id)
+{
+	DEBUG_LOCKS_WARN_ON(!reservation_id);
+
+	might_sleep();
+	return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE,
+				   0, nest, _RET_IP_, reservation_id, 0);
+}
+EXPORT_SYMBOL_GPL(_mutex_reserve_lock);
+
+
+int __sched
+_mutex_reserve_lock_interruptible(struct ticket_mutex *lock,
+				  struct lockdep_map *nest,
+				  unsigned long reservation_id)
+{
+	DEBUG_LOCKS_WARN_ON(!reservation_id);
+
+	might_sleep();
+	return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE,
+				   0, nest, _RET_IP_, reservation_id, 0);
+}
+EXPORT_SYMBOL_GPL(_mutex_reserve_lock_interruptible);
+
+void __sched
+_mutex_reserve_lock_slow(struct ticket_mutex *lock, struct lockdep_map *nest,
+			 unsigned long reservation_id)
+{
+	DEBUG_LOCKS_WARN_ON(!reservation_id);
+
+	might_sleep();
+	__mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0,
+			    nest, _RET_IP_, reservation_id, 1);
+}
+EXPORT_SYMBOL_GPL(_mutex_reserve_lock_slow);
+
+int __sched
+_mutex_reserve_lock_intr_slow(struct ticket_mutex *lock,
+			      struct lockdep_map *nest,
+			      unsigned long reservation_id)
+{
+	DEBUG_LOCKS_WARN_ON(!reservation_id);
+
+	might_sleep();
+	return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0,
+				   nest, _RET_IP_, reservation_id, 1);
+}
+EXPORT_SYMBOL_GPL(_mutex_reserve_lock_intr_slow);
+
+
 #endif
 
 /*
@@ -400,20 +618,39 @@ __mutex_lock_slowpath(atomic_t *lock_count)
 {
 	struct mutex *lock = container_of(lock_count, struct mutex, count);
 
-	__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, NULL, _RET_IP_);
+	__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0,
+			    NULL, _RET_IP_, 0, 0);
 }
 
 static noinline int __sched
 __mutex_lock_killable_slowpath(struct mutex *lock)
 {
-	return __mutex_lock_common(lock, TASK_KILLABLE, 0, NULL, _RET_IP_);
+	return __mutex_lock_common(lock, TASK_KILLABLE, 0,
+				   NULL, _RET_IP_, 0, 0);
 }
 
 static noinline int __sched
 __mutex_lock_interruptible_slowpath(struct mutex *lock)
 {
-	return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_IP_);
+	return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0,
+				   NULL, _RET_IP_, 0, 0);
+}
+
+static noinline int __sched
+__mutex_lock_reserve_slowpath(struct ticket_mutex *lock, unsigned long rid)
+{
+	return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0,
+				   NULL, _RET_IP_, rid, 0);
+}
+
+static noinline int __sched
+__mutex_lock_interruptible_reserve_slowpath(struct ticket_mutex *lock,
+					    unsigned long rid)
+{
+	return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0,
+				   NULL, _RET_IP_, rid, 0);
 }
+
 #endif
 
 /*
@@ -469,6 +706,63 @@ int __sched mutex_trylock(struct mutex *lock)
 }
 EXPORT_SYMBOL(mutex_trylock);
 
+#ifndef CONFIG_DEBUG_LOCK_ALLOC
+int __sched
+_mutex_reserve_lock(struct ticket_mutex *lock, unsigned long rid)
+{
+	int ret;
+
+	might_sleep();
+
+	ret = __mutex_fastpath_lock_retval(&lock->base.count);
+
+	if (likely(!ret)) {
+		mutex_set_reservation_fastpath(lock, rid);
+		mutex_set_owner(&lock->base);
+	} else
+		ret = __mutex_lock_reserve_slowpath(lock, rid);
+	return ret;
+}
+EXPORT_SYMBOL(_mutex_reserve_lock);
+
+int __sched
+_mutex_reserve_lock_interruptible(struct ticket_mutex *lock, unsigned long rid)
+{
+	int ret;
+
+	might_sleep();
+
+	ret = __mutex_fastpath_lock_retval(&lock->base.count);
+
+	if (likely(!ret)) {
+		mutex_set_reservation_fastpath(lock, rid);
+		mutex_set_owner(&lock->base);
+	} else
+		ret = __mutex_lock_interruptible_reserve_slowpath(lock, rid);
+	return ret;
+}
+EXPORT_SYMBOL(_mutex_reserve_lock_interruptible);
+
+void __sched
+_mutex_reserve_lock_slow(struct ticket_mutex *lock, unsigned long rid)
+{
+	might_sleep();
+	__mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE,
+			    0, NULL, _RET_IP_, rid, 1);
+}
+EXPORT_SYMBOL(_mutex_reserve_lock_slow);
+
+int __sched
+_mutex_reserve_lock_intr_slow(struct ticket_mutex *lock, unsigned long rid)
+{
+	might_sleep();
+	return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE,
+				   0, NULL, _RET_IP_, rid, 1);
+}
+EXPORT_SYMBOL(_mutex_reserve_lock_intr_slow);
+
+#endif
+
 /**
  * atomic_dec_and_mutex_lock - return holding mutex if we dec to 0
  * @cnt: the atomic which we are to dec

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 3/3] reservation: Add tests to lib/locking-selftest.c.
  2013-02-07 15:18 [PATCH 1/3] arch: make __mutex_fastpath_lock_retval return whether fastpath succeeded or not Maarten Lankhorst
  2013-02-07 15:18 ` Maarten Lankhorst
  2013-02-07 15:18 ` [PATCH 2/3] mutex: add support for reservation style locks Maarten Lankhorst
@ 2013-02-07 15:18 ` Maarten Lankhorst
  2013-02-07 15:18   ` Maarten Lankhorst
  2 siblings, 1 reply; 10+ messages in thread
From: Maarten Lankhorst @ 2013-02-07 15:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, a.p.zijlstra, daniel.vetter, x86, dri-devel,
	linaro-mm-sig, robclark, tglx, mingo, linux-media

This stresses the lockdep code in some ways specifically useful to
reservations. It adds checks for most of the common locking errors.

Since the lockdep tests were originally written to stress the
reservation code, I duplicated some of the definitions into
lib/locking-selftest.c for now.

This will be cleaned up later when the api for reservations is
accepted. I don't expect the tests to change, since the discussion
is mostly about the fence aspect of reservations.
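
To actually exercise the new tests, the locking API self-tests have to be
built in; verbose output can optionally be requested on the kernel command
line (a sketch, relying on the existing locking-selftest machinery):

    CONFIG_DEBUG_LOCKING_API_SELFTESTS=y
    # optionally: debug_locks_verbose=16   (0x10 == LOCKTYPE_RESERVATION below)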

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 lib/locking-selftest.c |  522 ++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 503 insertions(+), 19 deletions(-)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 7aae0f2..a3959af 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -26,6 +26,67 @@
  */
 static unsigned int debug_locks_verbose;
 
+/*
+ * These definitions are from the reservation objects patch series.
+ * For now we have to define them ourselves here. These definitions will
+ * be removed upon acceptance of that patch series.
+ */
+static const char reservation_object_name[] = "reservation_object";
+static struct lock_class_key reservation_object_class;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+static const char reservation_ticket_name[] = "reservation_ticket";
+static struct lock_class_key reservation_ticket_class;
+#endif
+
+struct reservation_object {
+	struct ticket_mutex lock;
+};
+
+struct reservation_ticket {
+	unsigned long seqno;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map dep_map;
+#endif
+};
+
+static inline void
+reservation_object_init(struct reservation_object *obj)
+{
+	__ticket_mutex_init(&obj->lock, reservation_object_name,
+			    &reservation_object_class);
+}
+
+static inline void
+reservation_object_fini(struct reservation_object *obj)
+{
+	mutex_destroy(&obj->lock.base);
+}
+
+static inline void
+reservation_ticket_init(struct reservation_ticket *t)
+{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	/*
+	 * Make sure we are not reinitializing a held ticket:
+	 */
+
+	debug_check_no_locks_freed((void *)t, sizeof(*t));
+	lockdep_init_map(&t->dep_map, reservation_ticket_name,
+			 &reservation_ticket_class, 0);
+#endif
+	mutex_acquire(&t->dep_map, 0, 0, _THIS_IP_);
+	t->seqno = 5;
+}
+
+static inline void
+reservation_ticket_fini(struct reservation_ticket *t)
+{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	mutex_release(&t->dep_map, 0, _THIS_IP_);
+	t->seqno = 0;
+#endif
+}
+
 static int __init setup_debug_locks_verbose(char *str)
 {
 	get_option(&str, &debug_locks_verbose);
@@ -42,6 +103,7 @@ __setup("debug_locks_verbose=", setup_debug_locks_verbose);
 #define LOCKTYPE_RWLOCK	0x2
 #define LOCKTYPE_MUTEX	0x4
 #define LOCKTYPE_RWSEM	0x8
+#define LOCKTYPE_RESERVATION	0x10
 
 /*
  * Normal standalone locks, for the circular and irq-context
@@ -920,11 +982,17 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft)
 static void reset_locks(void)
 {
 	local_irq_disable();
+	lockdep_free_key_range(&reservation_object_class, 1);
+	lockdep_free_key_range(&reservation_ticket_class, 1);
+
 	I1(A); I1(B); I1(C); I1(D);
 	I1(X1); I1(X2); I1(Y1); I1(Y2); I1(Z1); I1(Z2);
 	lockdep_reset();
 	I2(A); I2(B); I2(C); I2(D);
 	init_shared_classes();
+
+	memset(&reservation_object_class, 0, sizeof(reservation_object_class));
+	memset(&reservation_ticket_class, 0, sizeof(reservation_ticket_class));
 	local_irq_enable();
 }
 
@@ -938,7 +1006,6 @@ static int unexpected_testcase_failures;
 static void dotest(void (*testcase_fn)(void), int expected, int lockclass_mask)
 {
 	unsigned long saved_preempt_count = preempt_count();
-	int expected_failure = 0;
 
 	WARN_ON(irqs_disabled());
 
@@ -946,26 +1013,16 @@ static void dotest(void (*testcase_fn)(void), int expected, int lockclass_mask)
 	/*
 	 * Filter out expected failures:
 	 */
+	if (debug_locks != expected) {
 #ifndef CONFIG_PROVE_LOCKING
-	if ((lockclass_mask & LOCKTYPE_SPIN) && debug_locks != expected)
-		expected_failure = 1;
-	if ((lockclass_mask & LOCKTYPE_RWLOCK) && debug_locks != expected)
-		expected_failure = 1;
-	if ((lockclass_mask & LOCKTYPE_MUTEX) && debug_locks != expected)
-		expected_failure = 1;
-	if ((lockclass_mask & LOCKTYPE_RWSEM) && debug_locks != expected)
-		expected_failure = 1;
+		expected_testcase_failures++;
+		printk("failed|");
+#else
+		unexpected_testcase_failures++;
+		printk("FAILED|");
+
+		dump_stack();
 #endif
-	if (debug_locks != expected) {
-		if (expected_failure) {
-			expected_testcase_failures++;
-			printk("failed|");
-		} else {
-			unexpected_testcase_failures++;
-
-			printk("FAILED|");
-			dump_stack();
-		}
 	} else {
 		testcase_successes++;
 		printk("  ok  |");
@@ -1108,6 +1165,431 @@ static inline void print_testname(const char *testname)
 	DO_TESTCASE_6IRW(desc, name, 312);			\
 	DO_TESTCASE_6IRW(desc, name, 321);
 
+static void reservation_test_fail_reserve(void)
+{
+	struct reservation_ticket t;
+	struct reservation_object o;
+	int ret;
+
+	reservation_object_init(&o);
+	reservation_ticket_init(&t);
+	t.seqno++;
+
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+
+	BUG_ON(!atomic_long_read(&o.lock.reservation_id));
+
+	/* No lockdep test, pure API */
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+	WARN_ON(ret != -EDEADLK);
+
+	t.seqno++;
+	ret = mutex_trylock(&o.lock.base);
+	WARN_ON(ret);
+
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+	WARN_ON(ret != -EAGAIN);
+	mutex_unlock(&o.lock.base);
+
+	if (mutex_trylock(&o.lock.base))
+		mutex_unlock(&o.lock.base);
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	else
+		DEBUG_LOCKS_WARN_ON(1);
+#endif
+
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_test_two_tickets(void)
+{
+	struct reservation_ticket t, t2;
+
+	reservation_ticket_init(&t);
+	reservation_ticket_init(&t2);
+
+	reservation_ticket_fini(&t2);
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_test_ticket_unreserve_twice(void)
+{
+	struct reservation_ticket t;
+
+	reservation_ticket_init(&t);
+	reservation_ticket_fini(&t);
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_test_object_unreserve_twice(void)
+{
+	struct reservation_object o;
+
+	reservation_object_init(&o);
+	mutex_lock(&o.lock.base);
+	mutex_unlock(&o.lock.base);
+	mutex_unlock(&o.lock.base);
+}
+
+static void reservation_test_fence_nest_unreserved(void)
+{
+	struct reservation_object o;
+
+	reservation_object_init(&o);
+
+	spin_lock_nest_lock(&lock_A, &o.lock.base);
+	spin_unlock(&lock_A);
+}
+
+static void reservation_test_mismatch_normal_reserve(void)
+{
+	struct reservation_object o;
+
+	reservation_object_init(&o);
+
+	mutex_lock(&o.lock.base);
+	mutex_unreserve_unlock(&o.lock);
+}
+
+static void reservation_test_mismatch_reserve_normal(void)
+{
+	struct reservation_ticket t;
+	struct reservation_object o;
+	int ret;
+
+	reservation_ticket_init(&t);
+	reservation_object_init(&o);
+
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+	WARN_ON(ret);
+	mutex_unlock(&o.lock.base);
+
+	/*
+	 * the second mutex_reserve_lock will detect the
+	 * mismatch of the first one
+	 */
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+	WARN_ON(ret);
+	mutex_unreserve_unlock(&o.lock);
+
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_test_mismatch_reserve_normal_slow(void)
+{
+	struct reservation_ticket t;
+	struct reservation_object o;
+	int ret;
+
+	reservation_ticket_init(&t);
+	reservation_object_init(&o);
+
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+	WARN_ON(ret);
+	mutex_unlock(&o.lock.base);
+
+	/*
+	 * the second mutex_reserve_lock will detect the
+	 * mismatch of the first one
+	 */
+	mutex_reserve_lock_slow(&o.lock, &t, t.seqno);
+	mutex_unreserve_unlock(&o.lock);
+
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_test_ticket_block(void)
+{
+	struct reservation_ticket t;
+	struct reservation_object o, o2;
+	int ret;
+
+	reservation_object_init(&o);
+	reservation_object_init(&o2);
+	reservation_ticket_init(&t);
+
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+	WARN_ON(ret);
+	mutex_lock(&o2.lock.base);
+	mutex_unlock(&o2.lock.base);
+	mutex_unreserve_unlock(&o.lock);
+
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_test_ticket_try(void)
+{
+	struct reservation_ticket t;
+	struct reservation_object o, o2;
+	int ret;
+
+	reservation_object_init(&o);
+	reservation_object_init(&o2);
+	reservation_ticket_init(&t);
+
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+	WARN_ON(ret);
+
+	ret = mutex_trylock(&o2.lock.base);
+	WARN_ON(!ret);
+	mutex_unlock(&o2.lock.base);
+	mutex_unreserve_unlock(&o.lock);
+
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_test_ticket_ticket(void)
+{
+	struct reservation_ticket t;
+	struct reservation_object o, o2;
+	int ret;
+
+	reservation_object_init(&o);
+	reservation_object_init(&o2);
+	reservation_ticket_init(&t);
+
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+	WARN_ON(ret);
+
+	ret = mutex_reserve_lock(&o2.lock, &t, t.seqno);
+	WARN_ON(ret);
+
+	mutex_unreserve_unlock(&o2.lock);
+	mutex_unreserve_unlock(&o.lock);
+
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_test_try_block(void)
+{
+	struct reservation_object o, o2;
+	bool ret;
+
+	reservation_object_init(&o);
+	reservation_object_init(&o2);
+
+	ret = mutex_trylock(&o.lock.base);
+	WARN_ON(!ret);
+
+	mutex_lock(&o2.lock.base);
+	mutex_unlock(&o2.lock.base);
+	mutex_unlock(&o.lock.base);
+}
+
+static void reservation_test_try_try(void)
+{
+	struct reservation_object o, o2;
+	bool ret;
+
+	reservation_object_init(&o);
+	reservation_object_init(&o2);
+
+	ret = mutex_trylock(&o.lock.base);
+	WARN_ON(!ret);
+	ret = mutex_trylock(&o2.lock.base);
+	WARN_ON(!ret);
+	mutex_unlock(&o2.lock.base);
+	mutex_unlock(&o.lock.base);
+}
+
+static void reservation_test_try_ticket(void)
+{
+	struct reservation_ticket t;
+	struct reservation_object o, o2;
+	int ret;
+
+	reservation_object_init(&o);
+	reservation_object_init(&o2);
+
+	ret = mutex_trylock(&o.lock.base);
+	WARN_ON(!ret);
+	reservation_ticket_init(&t);
+
+	ret = mutex_reserve_lock(&o2.lock, &t, t.seqno);
+	WARN_ON(ret);
+
+	mutex_unreserve_unlock(&o2.lock);
+	mutex_unlock(&o.lock.base);
+
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_test_block_block(void)
+{
+	struct reservation_object o, o2;
+
+	reservation_object_init(&o);
+	reservation_object_init(&o2);
+
+	mutex_lock(&o.lock.base);
+	mutex_lock(&o2.lock.base);
+	mutex_unlock(&o2.lock.base);
+	mutex_unlock(&o.lock.base);
+}
+
+static void reservation_test_block_try(void)
+{
+	struct reservation_object o, o2;
+	bool ret;
+
+	reservation_object_init(&o);
+	reservation_object_init(&o2);
+
+	mutex_lock(&o.lock.base);
+	ret = mutex_trylock(&o2.lock.base);
+	WARN_ON(!ret);
+	mutex_unlock(&o2.lock.base);
+	mutex_unlock(&o.lock.base);
+}
+
+static void reservation_test_block_ticket(void)
+{
+	struct reservation_ticket t;
+	struct reservation_object o, o2;
+	int ret;
+
+	reservation_object_init(&o);
+	reservation_object_init(&o2);
+
+	mutex_lock(&o.lock.base);
+	reservation_ticket_init(&t);
+
+	ret = mutex_reserve_lock(&o2.lock, &t, t.seqno);
+	WARN_ON(ret);
+	mutex_unreserve_unlock(&o2.lock);
+	mutex_unlock(&o.lock.base);
+
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_test_fence_block(void)
+{
+	struct reservation_object o;
+
+	reservation_object_init(&o);
+	spin_lock(&lock_A);
+	spin_unlock(&lock_A);
+
+	mutex_lock(&o.lock.base);
+	spin_lock(&lock_A);
+	spin_unlock(&lock_A);
+	mutex_unlock(&o.lock.base);
+
+	spin_lock(&lock_A);
+	mutex_lock(&o.lock.base);
+	mutex_unlock(&o.lock.base);
+	spin_unlock(&lock_A);
+}
+
+static void reservation_test_fence_try(void)
+{
+	struct reservation_object o;
+	bool ret;
+
+	reservation_object_init(&o);
+	spin_lock(&lock_A);
+	spin_unlock(&lock_A);
+
+	ret = mutex_trylock(&o.lock.base);
+	WARN_ON(!ret);
+	spin_lock(&lock_A);
+	spin_unlock(&lock_A);
+	mutex_unlock(&o.lock.base);
+
+	spin_lock(&lock_A);
+	ret = mutex_trylock(&o.lock.base);
+	WARN_ON(!ret);
+	mutex_unlock(&o.lock.base);
+	spin_unlock(&lock_A);
+}
+
+static void reservation_test_fence_ticket(void)
+{
+	struct reservation_ticket t;
+	struct reservation_object o;
+	int ret;
+
+	reservation_object_init(&o);
+	spin_lock(&lock_A);
+	spin_unlock(&lock_A);
+
+	reservation_ticket_init(&t);
+
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+	WARN_ON(ret);
+	spin_lock(&lock_A);
+	spin_unlock(&lock_A);
+	mutex_unreserve_unlock(&o.lock);
+
+	spin_lock(&lock_A);
+	ret = mutex_reserve_lock(&o.lock, &t, t.seqno);
+	WARN_ON(ret);
+	mutex_unreserve_unlock(&o.lock);
+	spin_unlock(&lock_A);
+
+	reservation_ticket_fini(&t);
+}
+
+static void reservation_tests(void)
+{
+	printk("  --------------------------------------------------------------------------\n");
+	printk("  | Reservation tests |\n");
+	printk("  ---------------------\n");
+
+	print_testname("reservation api failures");
+	dotest(reservation_test_fail_reserve, SUCCESS, LOCKTYPE_RESERVATION);
+	printk("\n");
+
+	print_testname("reserving two tickets");
+	dotest(reservation_test_two_tickets, FAILURE, LOCKTYPE_RESERVATION);
+	printk("\n");
+
+	print_testname("unreserve ticket twice");
+	dotest(reservation_test_ticket_unreserve_twice, FAILURE, LOCKTYPE_RESERVATION);
+	printk("\n");
+
+	print_testname("unreserve object twice");
+	dotest(reservation_test_object_unreserve_twice, FAILURE, LOCKTYPE_RESERVATION);
+	printk("\n");
+
+	print_testname("spinlock nest unreserved");
+	dotest(reservation_test_fence_nest_unreserved, FAILURE, LOCKTYPE_RESERVATION);
+	printk("\n");
+
+	print_testname("mutex reserve (un)lock mismatch");
+	dotest(reservation_test_mismatch_normal_reserve, FAILURE, LOCKTYPE_RESERVATION);
+	dotest(reservation_test_mismatch_reserve_normal, FAILURE, LOCKTYPE_RESERVATION);
+	dotest(reservation_test_mismatch_reserve_normal_slow, FAILURE, LOCKTYPE_RESERVATION);
+	printk("\n");
+
+	printk("  -----------------------------------------------------\n");
+	printk("                                 |block | try  |ticket|\n");
+	printk("  -----------------------------------------------------\n");
+
+	print_testname("ticket");
+	dotest(reservation_test_ticket_block, FAILURE, LOCKTYPE_RESERVATION);
+	dotest(reservation_test_ticket_try, SUCCESS, LOCKTYPE_RESERVATION);
+	dotest(reservation_test_ticket_ticket, SUCCESS, LOCKTYPE_RESERVATION);
+	printk("\n");
+
+	print_testname("try");
+	dotest(reservation_test_try_block, FAILURE, LOCKTYPE_RESERVATION);
+	dotest(reservation_test_try_try, SUCCESS, LOCKTYPE_RESERVATION);
+	dotest(reservation_test_try_ticket, FAILURE, LOCKTYPE_RESERVATION);
+	printk("\n");
+
+	print_testname("block");
+	dotest(reservation_test_block_block, FAILURE, LOCKTYPE_RESERVATION);
+	dotest(reservation_test_block_try, SUCCESS, LOCKTYPE_RESERVATION);
+	dotest(reservation_test_block_ticket, FAILURE, LOCKTYPE_RESERVATION);
+	printk("\n");
+
+	print_testname("spinlock");
+	dotest(reservation_test_fence_block, FAILURE, LOCKTYPE_RESERVATION);
+	dotest(reservation_test_fence_try, SUCCESS, LOCKTYPE_RESERVATION);
+	dotest(reservation_test_fence_ticket, FAILURE, LOCKTYPE_RESERVATION);
+	printk("\n");
+}
 
 void locking_selftest(void)
 {
@@ -1188,6 +1670,8 @@ void locking_selftest(void)
 	DO_TESTCASE_6x2("irq read-recursion", irq_read_recursion);
 //	DO_TESTCASE_6x2B("irq read-recursion #2", irq_read_recursion2);
 
+	reservation_tests();
+
 	if (unexpected_testcase_failures) {
 		printk("-----------------------------------------------------------------\n");
 		debug_locks = 0;

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [Linaro-mm-sig] [PATCH 2/3] mutex: add support for reservation style locks
  2013-02-07 15:18 ` [PATCH 2/3] mutex: add support for reservation style locks Maarten Lankhorst
@ 2013-02-14 10:22   ` Arnd Bergmann
  2013-02-14 10:22     ` Arnd Bergmann
                       ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Arnd Bergmann @ 2013-02-14 10:22 UTC (permalink / raw)
  To: linaro-mm-sig
  Cc: Maarten Lankhorst, linux-kernel, linux-arch, a.p.zijlstra, x86,
	dri-devel, robclark, tglx, mingo, linux-media

On Thursday 07 February 2013, Maarten Lankhorst wrote:

Hi Maarten,

I cannot help a lot on this patch set, but there are a few things that
I noticed when reading it.

> Functions:
> ----------
> 
> mutex_reserve_lock, and mutex_reserve_lock_interruptible:
>   Lock a buffer with a reservation_id set. reservation_id must not be
>   set to 0, since this is a special value that means no reservation_id.

I think the entire description should go into a file in the Documentation
directory, to make it easier to find without looking up the git history.

For the purpose of documenting this, it feels a little strange to
talk about "buffers" here. Obviously this is what you are using the
locks for, but it sounds like that is not the only possible use
case.

>   These functions will return -EDEADLK instead of -EAGAIN if
>   reservation_id is the same as the reservation_id that's attempted to
>   lock the mutex with, since in that case you presumably attempted to
>   lock the same lock twice.

Since the user always has to check the return value, would it be
possible to provide only the interruptible kind of this function
but not the non-interruptible one? In general, interruptible locks
are obviously harder to use, but they are much more user-friendly when
something goes wrong.

> mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow:
>   Similar to mutex_reserve_lock, except it won't backoff with -EAGAIN.
>   This is useful when mutex_reserve_lock failed with -EAGAIN, and you
>   unreserved all buffers so no deadlock can occur.

Are these meant to be used a lot? If not, maybe prefix them with __mutex_
instead of mutex_.

> diff --git a/include/linux/mutex.h b/include/linux/mutex.h
> index 9121595..602c247 100644
> --- a/include/linux/mutex.h
> +++ b/include/linux/mutex.h
> @@ -62,6 +62,11 @@ struct mutex {
>  #endif
>  };
>  
> +struct ticket_mutex {
> +	struct mutex base;
> +	atomic_long_t reservation_id;
> +};

Have you considered changing the meaning of the "count" member
of the mutex in the case where a ticket mutex is used? That would
let you use an unmodified structure.

	Arnd

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Linaro-mm-sig] [PATCH 2/3] mutex: add support for reservation style locks
  2013-02-14 10:22   ` [Linaro-mm-sig] " Arnd Bergmann
  2013-02-14 10:22     ` Arnd Bergmann
@ 2013-02-14 11:18     ` Daniel Vetter
  2013-02-14 11:18       ` Daniel Vetter
  2013-02-14 14:19     ` Maarten Lankhorst
  2 siblings, 1 reply; 10+ messages in thread
From: Daniel Vetter @ 2013-02-14 11:18 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linaro-mm-sig, linux-arch, a.p.zijlstra, x86, linux-kernel,
	dri-devel, robclark, tglx, mingo, linux-media

On Thu, Feb 14, 2013 at 11:22 AM, Arnd Bergmann <arnd@arndb.de> wrote:
>
>>   These functions will return -EDEADLK instead of -EAGAIN if
>>   reservation_id is the same as the reservation_id that's attempted to
>>   lock the mutex with, since in that case you presumably attempted to
>>   lock the same lock twice.
>
> Since the user always has to check the return value, would it be
> possible to provide only the interruptible kind of this function
> but not the non-interruptible one? In general, interruptible locks
> are obviously harder to use, but they are much user friendlier when
> something goes wrong.

At least in drm/i915 we only use _interruptible locking on the
command-submission ioctls for all locks which could be held while
waiting for the gpu. We need unwind paths and ioctl restarting anyway
to bail out on catastrophic events like gpu hangs, so signal interrupt
handling comes for free.

Otoh in the modeset code we generally don't bother with that, since
unwinding a modeset sequence mid-way is something you really don't want
to do if your sanity is dear to you. But we also never need
multi-object reservations in the modeset code, nor can I imagine a
future need for it.

So from my side we could drop the non-interruptible interface. But I
have not checked whether dropping this would complicate the ttm
conversion.

>> mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow:
>>   Similar to mutex_reserve_lock, except it won't backoff with -EAGAIN.
>>   This is useful when mutex_reserve_lock failed with -EAGAIN, and you
>>   unreserved all buffers so no deadlock can occur.
>
> Are these meant to be used a lot? If not, maybe prefix them with __mutex_
> instead of mutex_.

If you detect an inversion in a multi-buffer reservation you have to
drop all locks and call these functions on the buffer which failed
(that's the contention point, hence it's the right lock to sleep on).
So every place using ticket locks will also call the above slowpath
functions by necessity.
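
A minimal sketch of that pattern (hypothetical reserve_all() helper and
objs[] array; the call form follows the selftests in patch 3/3, the real
signatures may differ, and error handling beyond -EAGAIN/-EDEADLK is
omitted):

static int reserve_all(struct reservation_object **objs, int count,
		       struct reservation_ticket *t)
{
	int i, j, ret, contended = -1;

retry:
	for (i = 0; i < count; i++) {
		if (i == contended)
			continue;	/* already held from the _slow call below */

		ret = mutex_reserve_lock(&objs[i]->lock, t, t->seqno);
		if (!ret)
			continue;

		/* back off: drop every lock we hold before sleeping */
		for (j = 0; j < i; j++)
			mutex_unreserve_unlock(&objs[j]->lock);
		if (contended > i)
			mutex_unreserve_unlock(&objs[contended]->lock);

		if (ret != -EAGAIN)
			return ret;	/* e.g. -EDEADLK: same object reserved twice */

		/* sleep on the contended lock, then start over holding it */
		mutex_reserve_lock_slow(&objs[i]->lock, t, t->seqno);
		contended = i;
		goto retry;
	}
	return 0;
}
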
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Linaro-mm-sig] [PATCH 2/3] mutex: add support for reservation style locks
  2013-02-14 10:22   ` [Linaro-mm-sig] " Arnd Bergmann
  2013-02-14 10:22     ` Arnd Bergmann
  2013-02-14 11:18     ` Daniel Vetter
@ 2013-02-14 14:19     ` Maarten Lankhorst
  2 siblings, 0 replies; 10+ messages in thread
From: Maarten Lankhorst @ 2013-02-14 14:19 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linaro-mm-sig, linux-kernel, linux-arch, a.p.zijlstra, x86,
	dri-devel, robclark, tglx, mingo, linux-media

Hey,

On 14-02-13 11:22, Arnd Bergmann wrote:
> On Thursday 07 February 2013, Maarten Lankhorst wrote:
>
> Hi Maarten,
>
> I cannot help a lot on this patch set, but there are a few things that
> I noticed when reading it.
>
>> Functions:
>> ----------
>>
>> mutex_reserve_lock, and mutex_reserve_lock_interruptible:
>>   Lock a buffer with a reservation_id set. reservation_id must not be
>>   set to 0, since this is a special value that means no reservation_id.
> I think the entire description should go into a file in the Documentation
> directory, to make it easier to find without looking up the git history.
>
> For the purpose of documenting this, it feels a little strange to
> talk about "buffers" here. Obviously this is what you are using the
> locks for, but it sounds like that is not the only possible use
> case.
The idea is that it will end up in Documentation; however, I had a hard time even getting people
to review the code, so I found it easier to keep code and documentation in sync by keeping it in
the commit log while I was amending things.

But yes, buffers are the use case I use it for. The generic use case would be if you have a number of
equal locks that you want to take in an arbitrary order without deadlocking, and without a single lock
protecting all of them*.

*) In the eyes of lockdep you still take one of those locks, and nest all the locks you take under that lock.
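
As a rough sketch with the calls from this series (a, b and c being hypothetical
reservation objects, same call form as the selftests in patch 3/3;
-EAGAIN/-EDEADLK handling left out, see the slowpath discussion):

	struct reservation_ticket t;

	reservation_ticket_init(&t);	/* lockdep-wise, this is the one lock */

	/* the object locks can be taken in any order, nested under the ticket */
	mutex_reserve_lock(&b->lock, &t, t.seqno);
	mutex_reserve_lock(&a->lock, &t, t.seqno);
	mutex_reserve_lock(&c->lock, &t, t.seqno);

	mutex_unreserve_unlock(&c->lock);
	mutex_unreserve_unlock(&a->lock);
	mutex_unreserve_unlock(&b->lock);

	reservation_ticket_fini(&t);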

>>   These functions will return -EDEADLK instead of -EAGAIN if
>>   reservation_id is the same as the reservation_id that's attempted to
>>   lock the mutex with, since in that case you presumably attempted to
>>   lock the same lock twice.
> Since the user always has to check the return value, would it be
> possible to provide only the interruptible kind of this function
> but not the non-interruptible one? In general, interruptible locks
> are obviously harder to use, but they are much user friendlier when
> something goes wrong.
I agree that in general you want to use the interruptible versions as much as possible,
but there are some cases in ttm where it is desirable to lock non-interruptibly.

>> mutex_reserve_lock_slow and mutex_reserve_lock_intr_slow:
>>   Similar to mutex_reserve_lock, except it won't backoff with -EAGAIN.
>>   This is useful when mutex_reserve_lock failed with -EAGAIN, and you
>>   unreserved all buffers so no deadlock can occur.
> Are these meant to be used a lot? If not, maybe prefix them with __mutex_
> instead of mutex_.
It is a common path in case of contention. The example lock_execbuf from the commit log used
it. When you use the mutex_reserve_lock calls, you'll have to add calls to the *_slow variants
too when those return -EAGAIN.

>> diff --git a/include/linux/mutex.h b/include/linux/mutex.h
>> index 9121595..602c247 100644
>> --- a/include/linux/mutex.h
>> +++ b/include/linux/mutex.h
>> @@ -62,6 +62,11 @@ struct mutex {
>>  #endif
>>  };
>>  
>> +struct ticket_mutex {
>> +	struct mutex base;
>> +	atomic_long_t reservation_id;
>> +};
> Have you considered changing the meaning of the "count" member
> of the mutex in the case where a ticket mutex is used? That would
> let you use an unmodified structure.
I have considered it, but I never found a good way to make that happen.
mutex_lock and mutex_unlock are currently still used when only a single
buffer needs to be locked.
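
I.e. for the single-buffer case the sketch is simply (obj being a
reservation_object as in the selftests):

	mutex_lock(&obj->lock.base);
	/* ... touch the one buffer ... */
	mutex_unlock(&obj->lock.base);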

Thanks for the review!

~Maarten

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2013-02-14 14:19 UTC | newest]

Thread overview: 10+ messages
2013-02-07 15:18 [PATCH 1/3] arch: make __mutex_fastpath_lock_retval return whether fastpath succeeded or not Maarten Lankhorst
2013-02-07 15:18 ` Maarten Lankhorst
2013-02-07 15:18 ` [PATCH 2/3] mutex: add support for reservation style locks Maarten Lankhorst
2013-02-14 10:22   ` [Linaro-mm-sig] " Arnd Bergmann
2013-02-14 10:22     ` Arnd Bergmann
2013-02-14 11:18     ` Daniel Vetter
2013-02-14 11:18       ` Daniel Vetter
2013-02-14 14:19     ` Maarten Lankhorst
2013-02-07 15:18 ` [PATCH 3/3] reservation: Add tests to lib/locking-selftest.c Maarten Lankhorst
2013-02-07 15:18   ` Maarten Lankhorst
