* [Qemu-devel] [PATCH] trace: add qemu mutex lock and unlock trace events
From: Jose Ricardo Ziviani @ 2017-04-24 14:28 UTC
To: qemu-devel; +Cc: stefanha, pbonzini
These trace events were very useful in helping me understand and track down a
reordering issue in vfio. For example:
qemu_mutex_lock locked mutex 0x10905ad8
vfio_region_write (0001:03:00.0:region1+0xc0, 0x2020c, 4)
qemu_mutex_unlock unlocked mutex 0x10905ad8
qemu_mutex_lock locked mutex 0x10905ad8
vfio_region_write (0001:03:00.0:region1+0xc4, 0xa0000, 4)
qemu_mutex_unlock unlocked mutex 0x10905ad8
They also helped me confirm the desired result after the fix:
qemu_mutex_lock locked mutex 0x10905ad8
vfio_region_write (0001:03:00.0:region1+0xc0, 0x2000c, 4)
vfio_region_write (0001:03:00.0:region1+0xc4, 0xb0000, 4)
qemu_mutex_unlock unlocked mutex 0x10905ad8
So it seems a good idea to have these trace events available. It's worth
mentioning that they should be enabled surgically during debugging,
otherwise they would flood the trace logs with lock/unlock messages.
How to use it:
trace-event qemu_mutex_lock on|off
trace-event qemu_mutex_unlock on|off
or
trace-event qemu_mutex* on|off
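For example, around a suspect code path (a sketch; it assumes QEMU was built
with a trace backend such as "log", and the device access in between is
illustrative):
(qemu) trace-event qemu_mutex* on
... exercise the device ...
(qemu) trace-event qemu_mutex* off
The same pattern can also be enabled from startup with
-trace enable='qemu_mutex*'.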
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
---
util/qemu-thread-posix.c | 5 +++++
util/trace-events | 4 ++++
2 files changed, 9 insertions(+)
diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
index 73e3a0e..909c2ac 100644
--- a/util/qemu-thread-posix.c
+++ b/util/qemu-thread-posix.c
@@ -14,6 +14,7 @@
 #include "qemu/thread.h"
 #include "qemu/atomic.h"
 #include "qemu/notify.h"
+#include "trace.h"
 
 static bool name_threads;
 
@@ -60,6 +61,8 @@ void qemu_mutex_lock(QemuMutex *mutex)
     err = pthread_mutex_lock(&mutex->lock);
     if (err)
         error_exit(err, __func__);
+
+    trace_qemu_mutex_lock((void *)&mutex->lock);
 }
 
 int qemu_mutex_trylock(QemuMutex *mutex)
@@ -74,6 +77,8 @@ void qemu_mutex_unlock(QemuMutex *mutex)
     err = pthread_mutex_unlock(&mutex->lock);
     if (err)
         error_exit(err, __func__);
+
+    trace_qemu_mutex_unlock((void *)&mutex->lock);
 }
 
 void qemu_rec_mutex_init(QemuRecMutex *mutex)
diff --git a/util/trace-events b/util/trace-events
index b44ef4f..65c33fe 100644
--- a/util/trace-events
+++ b/util/trace-events
@@ -55,3 +55,7 @@ lockcnt_futex_wait_prepare(const void *lockcnt, int expected, int new) "lockcnt
 lockcnt_futex_wait(const void *lockcnt, int val) "lockcnt %p waiting on %d"
 lockcnt_futex_wait_resume(const void *lockcnt, int new) "lockcnt %p after wait: %d"
 lockcnt_futex_wake(const void *lockcnt) "lockcnt %p waking up one waiter"
+
+# util/qemu-thread-posix.c
+qemu_mutex_lock(void *qemu_global_mutex) "locked mutex %p"
+qemu_mutex_unlock(void *qemu_global_mutex) "unlocked mutex %p"
--
2.7.4
* Re: [Qemu-devel] [PATCH] trace: add qemu mutex lock and unlock trace events
From: Fam Zheng @ 2017-04-24 14:45 UTC
To: Jose Ricardo Ziviani; +Cc: qemu-devel, pbonzini, stefanha
On Mon, 04/24 11:28, Jose Ricardo Ziviani wrote:
> These trace events were very useful in helping me understand and track down a
> reordering issue in vfio. For example:
>
> qemu_mutex_lock locked mutex 0x10905ad8
> vfio_region_write (0001:03:00.0:region1+0xc0, 0x2020c, 4)
> qemu_mutex_unlock unlocked mutex 0x10905ad8
> qemu_mutex_lock locked mutex 0x10905ad8
> vfio_region_write (0001:03:00.0:region1+0xc4, 0xa0000, 4)
> qemu_mutex_unlock unlocked mutex 0x10905ad8
>
> They also helped me confirm the desired result after the fix:
>
> qemu_mutex_lock locked mutex 0x10905ad8
> vfio_region_write (0001:03:00.0:region1+0xc0, 0x2000c, 4)
> vfio_region_write (0001:03:00.0:region1+0xc4, 0xb0000, 4)
> qemu_mutex_unlock unlocked mutex 0x10905ad8
>
> So it seems a good idea to have these trace events available. It's worth
> mentioning that they should be enabled surgically during debugging,
> otherwise they would flood the trace logs with lock/unlock messages.
>
> How to use it:
> trace-event qemu_mutex_lock on|off
> trace-event qemu_mutex_unlock on|off
> or
> trace-event qemu_mutex* on|off
>
> Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
> ---
> util/qemu-thread-posix.c | 5 +++++
> util/trace-events | 4 ++++
> 2 files changed, 9 insertions(+)
>
> diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
> index 73e3a0e..909c2ac 100644
> --- a/util/qemu-thread-posix.c
> +++ b/util/qemu-thread-posix.c
> @@ -14,6 +14,7 @@
> #include "qemu/thread.h"
> #include "qemu/atomic.h"
> #include "qemu/notify.h"
> +#include "trace.h"
>
> static bool name_threads;
>
> @@ -60,6 +61,8 @@ void qemu_mutex_lock(QemuMutex *mutex)
> err = pthread_mutex_lock(&mutex->lock);
> if (err)
> error_exit(err, __func__);
> +
> + trace_qemu_mutex_lock((void *)&mutex->lock);
You don't need these casts: the parameter type is void *, which accepts any
pointer type.
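I.e., the calls could simply be (illustrative):
trace_qemu_mutex_lock(&mutex->lock);
trace_qemu_mutex_unlock(&mutex->lock);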
> }
>
> int qemu_mutex_trylock(QemuMutex *mutex)
> @@ -74,6 +77,8 @@ void qemu_mutex_unlock(QemuMutex *mutex)
> err = pthread_mutex_unlock(&mutex->lock);
> if (err)
> error_exit(err, __func__);
> +
> + trace_qemu_mutex_unlock((void *)&mutex->lock);
> }
>
> void qemu_rec_mutex_init(QemuRecMutex *mutex)
> diff --git a/util/trace-events b/util/trace-events
> index b44ef4f..65c33fe 100644
> --- a/util/trace-events
> +++ b/util/trace-events
> @@ -55,3 +55,7 @@ lockcnt_futex_wait_prepare(const void *lockcnt, int expected, int new) "lockcnt
> lockcnt_futex_wait(const void *lockcnt, int val) "lockcnt %p waiting on %d"
> lockcnt_futex_wait_resume(const void *lockcnt, int new) "lockcnt %p after wait: %d"
> lockcnt_futex_wake(const void *lockcnt) "lockcnt %p waking up one waiter"
> +
> +# util/qemu-thread-posix.c
> +qemu_mutex_lock(void *qemu_global_mutex) "locked mutex %p"
> +qemu_mutex_unlock(void *qemu_global_mutex) "unlocked mutex %p"
The parameter name is slightly misleading; maybe s/qemu_global_mutex/lock/ for
both lines?
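I.e., something like:
qemu_mutex_lock(void *lock) "locked mutex %p"
qemu_mutex_unlock(void *lock) "unlocked mutex %p"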
Fam
* Re: [Qemu-devel] [PATCH] trace: add qemu mutex lock and unlock trace events
From: joserz @ 2017-04-24 16:59 UTC
To: Fam Zheng; +Cc: qemu-devel, pbonzini, stefanha
On Mon, Apr 24, 2017 at 10:45:52PM +0800, Fam Zheng wrote:
> On Mon, 04/24 11:28, Jose Ricardo Ziviani wrote:
> > These trace events were very useful in helping me understand and track down a
> > reordering issue in vfio. For example:
> >
> > qemu_mutex_lock locked mutex 0x10905ad8
> > vfio_region_write (0001:03:00.0:region1+0xc0, 0x2020c, 4)
> > qemu_mutex_unlock unlocked mutex 0x10905ad8
> > qemu_mutex_lock locked mutex 0x10905ad8
> > vfio_region_write (0001:03:00.0:region1+0xc4, 0xa0000, 4)
> > qemu_mutex_unlock unlocked mutex 0x10905ad8
> >
> > They also helped me confirm the desired result after the fix:
> >
> > qemu_mutex_lock locked mutex 0x10905ad8
> > vfio_region_write (0001:03:00.0:region1+0xc0, 0x2000c, 4)
> > vfio_region_write (0001:03:00.0:region1+0xc4, 0xb0000, 4)
> > qemu_mutex_unlock unlocked mutex 0x10905ad8
> >
> > So it seems a good idea to have these trace events available. It's worth
> > mentioning that they should be enabled surgically during debugging,
> > otherwise they would flood the trace logs with lock/unlock messages.
> >
> > How to use it:
> > trace-event qemu_mutex_lock on|off
> > trace-event qemu_mutex_unlock on|off
> > or
> > trace-event qemu_mutex* on|off
> >
> > Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
> > ---
> > util/qemu-thread-posix.c | 5 +++++
> > util/trace-events | 4 ++++
> > 2 files changed, 9 insertions(+)
> >
> > diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
> > index 73e3a0e..909c2ac 100644
> > --- a/util/qemu-thread-posix.c
> > +++ b/util/qemu-thread-posix.c
> > @@ -14,6 +14,7 @@
> > #include "qemu/thread.h"
> > #include "qemu/atomic.h"
> > #include "qemu/notify.h"
> > +#include "trace.h"
> >
> > static bool name_threads;
> >
> > @@ -60,6 +61,8 @@ void qemu_mutex_lock(QemuMutex *mutex)
> > err = pthread_mutex_lock(&mutex->lock);
> > if (err)
> > error_exit(err, __func__);
> > +
> > + trace_qemu_mutex_lock((void *)&mutex->lock);
>
> You don't need these casts: the parameter type is void *, which accepts any
> pointer type.
OK
>
> > }
> >
> > int qemu_mutex_trylock(QemuMutex *mutex)
> > @@ -74,6 +77,8 @@ void qemu_mutex_unlock(QemuMutex *mutex)
> > err = pthread_mutex_unlock(&mutex->lock);
> > if (err)
> > error_exit(err, __func__);
> > +
> > + trace_qemu_mutex_unlock((void *)&mutex->lock);
> > }
> >
> > void qemu_rec_mutex_init(QemuRecMutex *mutex)
> > diff --git a/util/trace-events b/util/trace-events
> > index b44ef4f..65c33fe 100644
> > --- a/util/trace-events
> > +++ b/util/trace-events
> > @@ -55,3 +55,7 @@ lockcnt_futex_wait_prepare(const void *lockcnt, int expected, int new) "lockcnt
> > lockcnt_futex_wait(const void *lockcnt, int val) "lockcnt %p waiting on %d"
> > lockcnt_futex_wait_resume(const void *lockcnt, int new) "lockcnt %p after wait: %d"
> > lockcnt_futex_wake(const void *lockcnt) "lockcnt %p waking up one waiter"
> > +
> > +# util/qemu-thread-posix.c
> > +qemu_mutex_lock(void *qemu_global_mutex) "locked mutex %p"
> > +qemu_mutex_unlock(void *qemu_global_mutex) "unlocked mutex %p"
>
> The parameter name is slightly misleading; maybe s/qemu_global_mutex/lock/ for
> both lines?
Great! I'll change it and send a v2.
Thanks for your review!
>
> Fam
>