* [PATCH 2/3] read_lock migrate_disable pushdown to rt_read_lock
From: Nicholas Mc Guire @ 2013-12-05 23:44 UTC
To: linux-rt-users
Cc: Sebastian Andrzej Siewior, Steven Rostedt, Andreas Platschek,
Peter Zijlstra
pushdown of migrate_disable/enable from read_*lock* to the rt_read_*lock*
api level
general mapping to mutexes:
read_*lock*
  `-> rt_read_*lock*
        `-> __spin_lock (the sleeping spin locks)
              `-> rt_mutex
The real read_lock* mapping:

read_lock_irqsave -.
read_lock_irq      `-> rt_read_lock_irqsave()
  `-> read_lock ---------.                  \
read_lock_bh ------------+                   \
                          `--> rt_read_lock()
                                 if (rt_mutex_owner(lock) != current) {
                                   `-> __rt_spin_lock()
                                         rt_spin_lock_fastlock()
                                           `-> rt_mutex_cmpxchg()
                                   migrate_disable()
                                 }
                                 rwlock->read_depth++;
read_trylock mapping:

read_trylock
  `-> rt_read_trylock
        if (rt_mutex_owner(lock) != current) {
          `-> rt_mutex_trylock()
                rt_mutex_fasttrylock()
                  rt_mutex_cmpxchg()
          migrate_disable()
        }
        rwlock->read_depth++;
read_unlock* mapping:

read_unlock_bh --------+
read_unlock_irq -------+
read_unlock_irqrestore +
read_unlock -----------+
                        `-> rt_read_unlock()
                              if (--rwlock->read_depth == 0) {
                                `-> __rt_spin_unlock()
                                      rt_spin_lock_fastunlock()
                                        `-> rt_mutex_cmpxchg()
                                migrate_enable()
                              }
So calls to migrate_disable/enable() are better placed at the rt_read_*
level of lock/trylock/unlock, as all of the read_*lock* API shares this
common path. In the rt_read_* API of lock/trylock/unlock the nesting level
is already recorded in rwlock->read_depth, so we can push the
migrate_disable/enable down to that level and condition it on read_depth
going from 0 to 1 (migrate_disable) and from 1 to 0 (migrate_enable). This
eliminates the recursive calls that were needed when migrate_disable/enable
was done at the read_*lock* level.
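
A minimal sketch of the recursion point (hypothetical caller sequence,
not taken from the tree):

	/* nested reader, for illustration only */
	read_lock(&rw);		/* read_depth 0 -> 1: migrate_disable() */
	    read_lock(&rw);	/* read_depth 1 -> 2: counter untouched  */
	    read_unlock(&rw);	/* read_depth 2 -> 1: counter untouched  */
	read_unlock(&rw);	/* read_depth 1 -> 0: migrate_enable()   */

Before the pushdown every read_lock/read_unlock pair toggled the per-task
migration counter; with this patch only the outermost pair does.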
The approach to read_*_bh also eliminates the concerns raised with regard
to API imbalances (read_lock_bh -> read_unlock + local_bh_enable).
this is on top of 3.12.1-rt4 with the first batch of migration_cleanup
patches applied.
No change of functional behavior
Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
---
include/linux/rwlock_rt.h | 6 ------
kernel/rt.c | 9 +++++----
2 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
index 853ee36..6d86c33 100644
--- a/include/linux/rwlock_rt.h
+++ b/include/linux/rwlock_rt.h
@@ -33,7 +33,6 @@ extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key
#define read_lock_irqsave(lock, flags) \
do { \
typecheck(unsigned long, flags); \
- migrate_disable(); \
flags = rt_read_lock_irqsave(lock); \
} while (0)
@@ -46,14 +45,12 @@ extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key
#define read_lock(lock) \
do { \
- migrate_disable(); \
rt_read_lock(lock); \
} while (0)
#define read_lock_bh(lock) \
do { \
local_bh_disable(); \
- migrate_disable(); \
rt_read_lock(lock); \
} while (0)
@@ -77,13 +74,11 @@ extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key
#define read_unlock(lock) \
do { \
rt_read_unlock(lock); \
- migrate_enable(); \
} while (0)
#define read_unlock_bh(lock) \
do { \
rt_read_unlock(lock); \
- migrate_enable(); \
local_bh_enable(); \
} while (0)
@@ -109,7 +104,6 @@ extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key
typecheck(unsigned long, flags); \
(void) flags; \
rt_read_unlock(lock); \
- migrate_enable(); \
} while (0)
#define write_unlock_irqrestore(lock, flags) \
diff --git a/kernel/rt.c b/kernel/rt.c
index 29771e2..3403000 100644
--- a/kernel/rt.c
+++ b/kernel/rt.c
@@ -213,19 +213,18 @@ int __lockfunc rt_read_trylock(rwlock_t *rwlock)
* but not when read_depth == 0 which means that the lock is
* write locked.
*/
- migrate_disable();
if (rt_mutex_owner(lock) != current) {
ret = rt_mutex_trylock(lock);
- if (ret)
+ if (ret) {
+ migrate_disable();
rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
+ }
} else if (!rwlock->read_depth) {
ret = 0;
}
if (ret)
rwlock->read_depth++;
- else
- migrate_enable();
return ret;
}
@@ -248,6 +247,7 @@ void __lockfunc rt_read_lock(rwlock_t *rwlock)
if (rt_mutex_owner(lock) != current) {
rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_);
__rt_spin_lock(lock);
+ migrate_disable();
}
rwlock->read_depth++;
}
@@ -268,6 +268,7 @@ void __lockfunc rt_read_unlock(rwlock_t *rwlock)
if (--rwlock->read_depth == 0) {
rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
__rt_spin_unlock(&rwlock->lock);
+ migrate_enable();
}
}
EXPORT_SYMBOL(rt_read_unlock);
--
1.7.2.5
* Re: [PATCH 2/3] read_lock migrate_disable pushdown to rt_read_lock
From: Steven Rostedt @ 2013-12-06 2:19 UTC
To: Nicholas Mc Guire
Cc: linux-rt-users, Sebastian Andrzej Siewior, Andreas Platschek,
Peter Zijlstra, LKML
On Fri, 6 Dec 2013 00:44:49 +0100
Nicholas Mc Guire <der.herr@hofr.at> wrote:
<snip>
> @@ -213,19 +213,18 @@ int __lockfunc rt_read_trylock(rwlock_t *rwlock)
> * but not when read_depth == 0 which means that the lock is
> * write locked.
> */
> - migrate_disable();
> if (rt_mutex_owner(lock) != current) {
> ret = rt_mutex_trylock(lock);
> - if (ret)
> + if (ret) {
> + migrate_disable();
> rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
> + }
I like this patch in general, but I'm nervous about this part.
What was the original reason to disable migration before checking the
mutex owner? Seems like we should have only disabled it on success. I'm
assuming there was some subtle reason not to.
If there was some strange reason, I'm not convinced that your change
makes that reason go away.
-- Steve
* Re: [PATCH 2/3] read_lock migrate_disable pushdown to rt_read_lock
From: Nicholas Mc Guire @ 2013-12-06 2:33 UTC
To: Steven Rostedt
Cc: linux-rt-users, Sebastian Andrzej Siewior, Andreas Platschek,
Peter Zijlstra, LKML
On Thu, 05 Dec 2013, Steven Rostedt wrote:
> On Fri, 6 Dec 2013 00:44:49 +0100
> Nicholas Mc Guire <der.herr@hofr.at> wrote:
>
> >
> >
> > pushdown of migrate_disable/enable from read_*lock* to the rt_read_*lock*
> > api level
> >
> > general mapping to mutexes:
> >
> > read_*lock*
> > `-> rt_read_*lock*
> > `-> __spin_lock (the sleeping spin locks)
> > `-> rt_mutex
> >
> > The real read_lock* mapping:
> >
> > read_lock_irqsave -.
> > read_lock_irq `-> rt_read_lock_irqsave()
> > `->read_lock ---------. \
> > read_lock_bh ------+ \
> > `--> rt_read_lock()
> > if (rt_mutex_owner(lock) != current){
> > `-> __rt_spin_lock()
> > rt_spin_lock_fastlock()
> > `->rt_mutex_cmpxchg()
> > migrate_disable()
> > }
> > rwlock->read_depth++;
> > read_trylock mapping:
> >
> > read_trylock
> > `-> rt_read_trylock
> > if (rt_mutex_owner(lock) != current){
> > `-> rt_mutex_trylock()
> > rt_mutex_fasttrylock()
> > rt_mutex_cmpxchg()
> > migrate_disable()
> > }
> > rwlock->read_depth++;
> >
> > read_unlock* mapping:
> >
> > read_unlock_bh --------+
> > read_unlock_irq -------+
> > read_unlock_irqrestore +
> > read_unlock -----------+
> > `-> rt_read_unlock()
> > if(--rwlock->read_depth==0){
> > `-> __rt_spin_unlock()
> > rt_spin_lock_fastunlock()
> > `-> rt_mutex_cmpxchg()
> > migrate_disable()
> > }
> >
> > So calls to migrate_disable/enable() are better placed at the rt_read_*
> > level of lock/trylock/unlock as all of the read_*lock* API has this as a
> > common path. In the rt_read* API of lock/trylock/unlock the nesting level
> > is already being recorded in rwlock->read_depth, so we can push down the
> > migrate disable/enable to that level and condition it on the read_depth
> > going from 0 to 1 -> migrate_disable and 1 to 0 -> migrate_enable. This
> > eliminates the recursive calls that were needed when migrate_disable/enable
> > was done at the read_*lock* level.
> >
> > The approach to read_*_bh also eliminates the concerns raised with the
> > regards to api inbalances (read_lock_bh -> read_unlock+local_bh_enable)
> >
> > this is on top of 3.12.1-rt4 with the first batch of migration_cleanup
> > patches applied.
> >
> > No change of functional behavior
> >
> > Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
> > ---
> > include/linux/rwlock_rt.h | 6 ------
> > kernel/rt.c | 9 +++++----
> > 2 files changed, 5 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
> > index 853ee36..6d86c33 100644
> > --- a/include/linux/rwlock_rt.h
> > +++ b/include/linux/rwlock_rt.h
> > @@ -33,7 +33,6 @@ extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key
> > #define read_lock_irqsave(lock, flags) \
> > do { \
> > typecheck(unsigned long, flags); \
> > - migrate_disable(); \
> > flags = rt_read_lock_irqsave(lock); \
> > } while (0)
> >
> > @@ -46,14 +45,12 @@ extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key
> >
> > #define read_lock(lock) \
> > do { \
> > - migrate_disable(); \
> > rt_read_lock(lock); \
> > } while (0)
> >
> > #define read_lock_bh(lock) \
> > do { \
> > local_bh_disable(); \
> > - migrate_disable(); \
> > rt_read_lock(lock); \
> > } while (0)
> >
> > @@ -77,13 +74,11 @@ extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key
> > #define read_unlock(lock) \
> > do { \
> > rt_read_unlock(lock); \
> > - migrate_enable(); \
> > } while (0)
> >
> > #define read_unlock_bh(lock) \
> > do { \
> > rt_read_unlock(lock); \
> > - migrate_enable(); \
> > local_bh_enable(); \
> > } while (0)
> >
> > @@ -109,7 +104,6 @@ extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key
> > typecheck(unsigned long, flags); \
> > (void) flags; \
> > rt_read_unlock(lock); \
> > - migrate_enable(); \
> > } while (0)
> >
> > #define write_unlock_irqrestore(lock, flags) \
> > diff --git a/kernel/rt.c b/kernel/rt.c
> > index 29771e2..3403000 100644
> > --- a/kernel/rt.c
> > +++ b/kernel/rt.c
> > @@ -213,19 +213,18 @@ int __lockfunc rt_read_trylock(rwlock_t *rwlock)
> > * but not when read_depth == 0 which means that the lock is
> > * write locked.
> > */
> > - migrate_disable();
> > if (rt_mutex_owner(lock) != current) {
> > ret = rt_mutex_trylock(lock);
> > - if (ret)
> > + if (ret) {
> > + migrate_disable();
> > rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
> > + }
>
> I like this patch in general, but I'm nervous about this part.
>
> What was the original reason to disable migration before checking the
> mutex owner? Seems like we should have only disabled it on success. I'm
> assuming there was some subtle reason not to.
>
I think that it was simply being treated semantically as a lock - but it
is actually not.

                       hold a per_cpu reference
 migrate_disable/lock  .....................  unlock/migrate_enable
 lock/migrate_disable  .....................  migrate_enable/unlock
 lock/migrate_disable  .....................  unlock/migrate_enable
 migrate_disable/lock  .....................  migrate_enable/unlock

are all equivalent - the only thing you need to ensure is that the per-cpu
object will not be accessed before both the lock and migrate_disable have
succeeded. So from my understanding this is safe.
If we get migrated after a successful trylock, what would go wrong?
The protected object will not be accessed until after the spin_trylock
has returned, and by that point migration is already disabled.
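
A minimal sketch of that equivalence (hypothetical helper functions around
a per-CPU object; illustrative only, not code from the tree):

	/* Both variants are safe: the per-CPU data is only touched once
	 * the lock is held AND migration is disabled.
	 */
	static void variant_lock_first(rwlock_t *rw)
	{
		read_lock(rw);		/* may still migrate while blocking */
		migrate_disable();	/* pinned to this CPU from here on  */
		/* ... access per-CPU data ... */
		migrate_enable();
		read_unlock(rw);
	}

	static void variant_disable_first(rwlock_t *rw)
	{
		migrate_disable();	/* pinned earlier than necessary */
		read_lock(rw);
		/* ... access per-CPU data ... */
		read_unlock(rw);
		migrate_enable();
	}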
> If there was some strange reason, I'm not convinced that your change
> makes that reason go away.
>
IF there is a reason then this patch is broken - my conclusion up to now
is that there is no such reason.
thx!
hofrat
* Re: [PATCH 2/3] read_lock migrate_disable pushdown to rt_read_lock
From: Steven Rostedt @ 2013-12-06 15:25 UTC
To: Nicholas Mc Guire
Cc: linux-rt-users, Sebastian Andrzej Siewior, Andreas Platschek,
Peter Zijlstra, LKML
On Fri, 6 Dec 2013 03:33:34 +0100
Nicholas Mc Guire <der.herr@hofr.at> wrote:
> > > - migrate_disable();
> > > if (rt_mutex_owner(lock) != current) {
> > > ret = rt_mutex_trylock(lock);
> > > - if (ret)
> > > + if (ret) {
> > > + migrate_disable();
> > > rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
> > > + }
> >
> > I like this patch in general, but I'm nervous about this part.
> >
> > What was the original reason to disable migration before checking the
> > mutex owner? Seems like we should have only disabled it on success. I'm
> > assuming there was some subtle reason not to.
> >
>
> I think that it was simply being treated semantically as a lock - but it
> is actually not.
>
>                        hold a per_cpu reference
>  migrate_disable/lock  .....................  unlock/migrate_enable
>  lock/migrate_disable  .....................  migrate_enable/unlock
>  lock/migrate_disable  .....................  unlock/migrate_enable
>  migrate_disable/lock  .....................  migrate_enable/unlock
>
> are all equivalent - the only thing you need to ensure is that the per-cpu
> object will not be accessed before both the lock and migrate_disable have
> succeeded. So from my understanding this is safe.
I think you may be right, but I'm still a little nervous about this
code. But that's good. We all should be nervous about any locking
code ;-)
>
> If we get migrated after a successful trylock, what would go wrong?
> The protected object will not be accessed until after the spin_trylock
> has returned, and by that point migration is already disabled.
I agree.
>
> > If there was some strange reason, I'm not convinced that your change
> > makes that reason go away.
> >
> IF there is a reason then this patch is broken - my conclusion up to now
> is that there is no such reason.
>
Let me analyze the original code first. I'll poke peterz and tglx too
to make sure this modification is OK.
Thanks,
-- Steve
* Re: [PATCH 2/3] read_lock migrate_disable pushdown to rt_read_lock
From: Sebastian Andrzej Siewior @ 2013-12-15 15:15 UTC
To: Steven Rostedt
Cc: Nicholas Mc Guire, linux-rt-users, Andreas Platschek,
Peter Zijlstra, LKML
* Steven Rostedt | 2013-12-06 10:25:32 [-0500]:
>
>Let me analyze the original code first. I'll poke peterz and tglx too
>to make sure this modification is OK.
I took 1/3. I'm postponing the remaining two until I hear something from
you.
>Thanks,
>
>-- Steve
Sebastian
* Re: [PATCH 2/3] read_lock migrate_disable pushdown to rt_read_lock
From: Sebastian Andrzej Siewior @ 2014-01-24 12:42 UTC
To: Steven Rostedt
Cc: Nicholas Mc Guire, linux-rt-users, Andreas Platschek,
Peter Zijlstra, LKML
* Sebastian Andrzej Siewior | 2013-12-15 16:15:11 [+0100]:
>* Steven Rostedt | 2013-12-06 10:25:32 [-0500]:
>
>>
>>Let me analyze the original code first. I'll poke peterz and tglx too
>>to make sure this modification is OK.
>
>I took 1/3. I postpone the remaining two until I hear something from
>you.
Any news?
Sebastian
* Re: [PATCH 2/3] read_lock migrate_disable pushdown to rt_read_lock
From: Nicholas Mc Guire @ 2014-01-25 8:29 UTC
To: Sebastian Andrzej Siewior
Cc: Steven Rostedt, linux-rt-users, Andreas Platschek, Peter Zijlstra,
LKML, Carsten Emde
On Fri, 24 Jan 2014, Sebastian Andrzej Siewior wrote:
> * Sebastian Andrzej Siewior | 2013-12-15 16:15:11 [+0100]:
>
> >* Steven Rostedt | 2013-12-06 10:25:32 [-0500]:
> >
> >>
> >>Let me analyze the original code first. I'll poke peterz and tglx too
> >>to make sure this modification is OK.
> >
> >I took 1/3. I postpone the remaining two until I hear something from
> >you.
>
> Any news?
>
Carsten Emde added the two patches
0001-write_lock-migrate_disable-pushdown-to-rt_write_lock.patch
0002-read_lock-migrate_disable-pushdown-to-rt_read_lock.patch
to the -rt9 test on one of his ARM boards (i.MX6 Sabre automation board)
in the OSADL QA-Farm (rack #9/slot #8)
<snip Carsten Emde>
No complaints so far after adding your patches.
If you would like to follow up:
- Profile: https://www.osadl.org/?id=1822
- Patches: https://www.osadl.org/?id=1822#patches
- Latency plot: https://www.osadl.org/?id=1823
- Munin:
https://www.osadl.org/munin/osadl.org/rack9slot8.osadl.org/index.html
<snip>
so it at least seems to be passing basic testing.

I have an AMD Phenom X4 and an Intel i3 running -rt9 with those two patches
applied, stable now for 2 days under alternating load/idle conditions as
well. Unfortunately no further feedback up to now.

Given that it's a low-level locking issue, I guess more testing is of
little help and code review would be essential - notably whether any cases
could exist where the order of preempt_disable/lock critical-section
unlock/preempt_enable actually does matter; a sketch of what such a case
would look like follows below.
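
One hypothetical pattern of that kind (purely illustrative - no such
caller is known in the tree; struct stats and the function are made up):

	/* hypothetical per-CPU counter type, for illustration */
	struct stats { unsigned long count; };

	/* The per-CPU pointer is taken BEFORE the lock, so the task must
	 * not migrate across the lock acquisition. Here migrate_disable()
	 * has to precede the lock; an ordering where migration is only
	 * disabled after the lock is taken would break the assumption.
	 */
	static void needs_disable_first(rwlock_t *rw,
					struct stats __percpu *stats)
	{
		struct stats *s;

		migrate_disable();
		s = this_cpu_ptr(stats);   /* per-CPU pointer taken early */
		read_lock(rw);             /* must stay on the same CPU  */
		s->count++;
		read_unlock(rw);
		migrate_enable();
	}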
thx!
hofrat