* [Qemu-devel] [PATCH 1/4] seqlock: introduce read-write seqlock
2013-08-05 7:33 [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff Liu Ping Fan
@ 2013-08-05 7:33 ` Liu Ping Fan
2013-08-05 7:33 ` [Qemu-devel] [PATCH 2/4] timer: protect timers_state's clock with seqlock Liu Ping Fan
` (3 subsequent siblings)
4 siblings, 0 replies; 14+ messages in thread
From: Liu Ping Fan @ 2013-08-05 7:33 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, Alex Bligh,
Paolo Bonzini, MORITA Kazutaka
From: Paolo Bonzini <pbonzini@redhat.com>
This lets the read-side access run outside the BQL.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
include/qemu/seqlock.h | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 72 insertions(+)
create mode 100644 include/qemu/seqlock.h
diff --git a/include/qemu/seqlock.h b/include/qemu/seqlock.h
new file mode 100644
index 0000000..8f1c89f
--- /dev/null
+++ b/include/qemu/seqlock.h
@@ -0,0 +1,72 @@
+/*
+ * Seqlock implementation for QEMU
+ *
+ * Copyright Red Hat, Inc. 2013
+ *
+ * Author:
+ * Paolo Bonzini <pbonzini@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#ifndef QEMU_SEQLOCK_H
+#define QEMU_SEQLOCK_H 1
+
+#include <qemu/atomic.h>
+#include <qemu/thread.h>
+
+typedef struct QemuSeqLock QemuSeqLock;
+
+struct QemuSeqLock {
+ QemuMutex *mutex;
+ unsigned sequence;
+};
+
+static inline void seqlock_init(QemuSeqLock *sl, QemuMutex *mutex)
+{
+ sl->mutex = mutex;
+ sl->sequence = 0;
+}
+
+/* Lock out other writers and update the count. */
+static inline void seqlock_write_lock(QemuSeqLock *sl)
+{
+ if (sl->mutex) {
+ qemu_mutex_lock(sl->mutex);
+ }
+ ++sl->sequence;
+
+ /* Write sequence before updating other fields. */
+ smp_wmb();
+}
+
+static inline void seqlock_write_unlock(QemuSeqLock *sl)
+{
+ /* Write other fields before finalizing sequence. */
+ smp_wmb();
+
+ ++sl->sequence;
+ if (sl->mutex) {
+ qemu_mutex_unlock(sl->mutex);
+ }
+}
+
+static inline unsigned seqlock_read_begin(QemuSeqLock *sl)
+{
+ /* Always fail if a write is in progress. */
+ unsigned ret = sl->sequence & ~1;
+
+ /* Read sequence before reading other fields. */
+ smp_rmb();
+ return ret;
+}
+
+static inline int seqlock_read_check(const QemuSeqLock *sl, unsigned start)
+{
+ /* Read other fields before reading final sequence. */
+ smp_rmb();
+ return unlikely(sl->sequence != start);
+}
+
+#endif
--
1.8.1.4
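For illustration, a minimal sketch of how a reader and a writer would use
this API; the names state_mutex, state_lock and shared_ns are made up for
this example and are not part of the patch:

    #include "qemu/seqlock.h"

    static QemuMutex state_mutex;          /* serializes writers */
    static QemuSeqLock state_lock;
    static int64_t shared_ns;              /* field protected by the seqlock */

    static void example_init(void)
    {
        qemu_mutex_init(&state_mutex);
        seqlock_init(&state_lock, &state_mutex);
    }

    static void writer_update(int64_t val)
    {
        seqlock_write_lock(&state_lock);   /* sequence becomes odd */
        shared_ns = val;
        seqlock_write_unlock(&state_lock); /* sequence becomes even again */
    }

    static int64_t reader_get(void)
    {
        int64_t val;
        unsigned start;

        do {
            start = seqlock_read_begin(&state_lock);
            val = shared_ns;               /* may race with a writer... */
        } while (seqlock_read_check(&state_lock, start)); /* ...retry if so */

        return val;
    }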
* [Qemu-devel] [PATCH 2/4] timer: protect timers_state's clock with seqlock
2013-08-05 7:33 [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff Liu Ping Fan
2013-08-05 7:33 ` [Qemu-devel] [PATCH 1/4] seqlock: introduce read-write seqlock Liu Ping Fan
@ 2013-08-05 7:33 ` Liu Ping Fan
2013-08-05 13:29 ` Paolo Bonzini
2013-08-06 9:30 ` Stefan Hajnoczi
2013-08-05 7:33 ` [Qemu-devel] [PATCH 3/4] qemu-thread: add QemuEvent Liu Ping Fan
` (2 subsequent siblings)
4 siblings, 2 replies; 14+ messages in thread
From: Liu Ping Fan @ 2013-08-05 7:33 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, Alex Bligh,
Paolo Bonzini, MORITA Kazutaka
In KVM mode, vm_clock may be read outside the BQL. This exposes
timers_state -- the foundation of vm_clock -- to a race condition,
so use a private lock to protect it.
Note that in TCG mode, vm_clock is still read inside the BQL, so
icount is left unchanged. As for cpu_ticks in timers_state, it is
still protected by the BQL.
Lock rule: the private lock is innermost, i.e. BQL -> "this lock"
Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
cpus.c | 36 +++++++++++++++++++++++++++++-------
1 file changed, 29 insertions(+), 7 deletions(-)
diff --git a/cpus.c b/cpus.c
index 85e743d..ab92db9 100644
--- a/cpus.c
+++ b/cpus.c
@@ -107,12 +107,17 @@ static int64_t qemu_icount;
typedef struct TimersState {
int64_t cpu_ticks_prev;
int64_t cpu_ticks_offset;
+ /* QemuClock will be read outside the BQL, so protect it with a private lock.
+ * As for cpu_ticks, there is no requirement to read it outside the BQL.
+ * Lock rule: innermost
+ */
+ QemuSeqLock clock_seqlock;
int64_t cpu_clock_offset;
int32_t cpu_ticks_enabled;
int64_t dummy;
} TimersState;
-TimersState timers_state;
+static TimersState timers_state;
/* Return the virtual CPU time, based on the instruction counter. */
int64_t cpu_get_icount(void)
@@ -132,6 +137,7 @@ int64_t cpu_get_icount(void)
}
/* return the host CPU cycle counter and handle stop/restart */
+/* cpu_ticks is safe to read while holding the BQL */
int64_t cpu_get_ticks(void)
{
if (use_icount) {
@@ -156,33 +162,46 @@ int64_t cpu_get_ticks(void)
int64_t cpu_get_clock(void)
{
int64_t ti;
- if (!timers_state.cpu_ticks_enabled) {
- return timers_state.cpu_clock_offset;
- } else {
- ti = get_clock();
- return ti + timers_state.cpu_clock_offset;
- }
+ unsigned start;
+
+ do {
+ start = seqlock_read_begin(&timers_state.clock_seqlock);
+ if (!timers_state.cpu_ticks_enabled) {
+ ti = timers_state.cpu_clock_offset;
+ } else {
+ ti = get_clock();
+ ti += timers_state.cpu_clock_offset;
+ }
+ } while (seqlock_read_check(&timers_state.clock_seqlock, start));
+
+ return ti;
}
/* enable cpu_get_ticks() */
void cpu_enable_ticks(void)
{
+ /* Here, the thing really protected by the seqlock is cpu_clock. */
+ seqlock_write_lock(&timers_state.clock_seqlock);
if (!timers_state.cpu_ticks_enabled) {
timers_state.cpu_ticks_offset -= cpu_get_real_ticks();
timers_state.cpu_clock_offset -= get_clock();
timers_state.cpu_ticks_enabled = 1;
}
+ seqlock_write_unlock(&timers_state.clock_seqlock);
}
/* disable cpu_get_ticks() : the clock is stopped. You must not call
cpu_get_ticks() after that. */
void cpu_disable_ticks(void)
{
+ /* Here, the thing really protected by the seqlock is cpu_clock. */
+ seqlock_write_lock(&timers_state.clock_seqlock);
if (timers_state.cpu_ticks_enabled) {
timers_state.cpu_ticks_offset = cpu_get_ticks();
timers_state.cpu_clock_offset = cpu_get_clock();
timers_state.cpu_ticks_enabled = 0;
}
+ seqlock_write_unlock(&timers_state.clock_seqlock);
}
/* Correlation between real and virtual time is always going to be
@@ -364,6 +383,9 @@ static const VMStateDescription vmstate_timers = {
void configure_icount(const char *option)
{
+ QemuMutex *mutex = g_malloc0(sizeof(QemuMutex));
+ qemu_mutex_init(mutex);
+ seqlock_init(&timers_state.clock_seqlock, mutex);
vmstate_register(NULL, 0, &vmstate_timers, &timers_state);
if (!option) {
return;
--
1.8.1.4
* Re: [Qemu-devel] [PATCH 2/4] timer: protect timers_state's clock with seqlock
2013-08-05 7:33 ` [Qemu-devel] [PATCH 2/4] timer: protect timers_state's clock with seqlock Liu Ping Fan
@ 2013-08-05 13:29 ` Paolo Bonzini
2013-08-06 5:58 ` liu ping fan
2013-08-06 9:30 ` Stefan Hajnoczi
1 sibling, 1 reply; 14+ messages in thread
From: Paolo Bonzini @ 2013-08-05 13:29 UTC (permalink / raw)
To: Liu Ping Fan
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, qemu-devel, Alex Bligh,
MORITA Kazutaka
> In KVM mode, vm_clock may be read outside the BQL.
Not just in KVM mode (we will be able to use dataplane with TCG sooner
or later), actually.
Otherwise looks good!
Paolo
> This exposes
> timers_state -- the foundation of vm_clock -- to a race condition,
> so use a private lock to protect it.
>
> Note that in TCG mode, vm_clock is still read inside the BQL, so
> icount is left unchanged. As for cpu_ticks in timers_state, it is
> still protected by the BQL.
>
> Lock rule: the private lock is innermost, i.e. BQL -> "this lock"
>
> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> ---
> cpus.c | 36 +++++++++++++++++++++++++++++-------
> 1 file changed, 29 insertions(+), 7 deletions(-)
>
> diff --git a/cpus.c b/cpus.c
> index 85e743d..ab92db9 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -107,12 +107,17 @@ static int64_t qemu_icount;
> typedef struct TimersState {
> int64_t cpu_ticks_prev;
> int64_t cpu_ticks_offset;
> + /* QemuClock will be read outside the BQL, so protect it with a private lock.
> + * As for cpu_ticks, there is no requirement to read it outside the BQL.
> + * Lock rule: innermost
> + */
> + QemuSeqLock clock_seqlock;
> int64_t cpu_clock_offset;
> int32_t cpu_ticks_enabled;
> int64_t dummy;
> } TimersState;
>
> -TimersState timers_state;
> +static TimersState timers_state;
>
> /* Return the virtual CPU time, based on the instruction counter. */
> int64_t cpu_get_icount(void)
> @@ -132,6 +137,7 @@ int64_t cpu_get_icount(void)
> }
>
> /* return the host CPU cycle counter and handle stop/restart */
> +/* cpu_ticks is safe to read while holding the BQL */
> int64_t cpu_get_ticks(void)
> {
> if (use_icount) {
> @@ -156,33 +162,46 @@ int64_t cpu_get_ticks(void)
> int64_t cpu_get_clock(void)
> {
> int64_t ti;
> - if (!timers_state.cpu_ticks_enabled) {
> - return timers_state.cpu_clock_offset;
> - } else {
> - ti = get_clock();
> - return ti + timers_state.cpu_clock_offset;
> - }
> + unsigned start;
> +
> + do {
> + start = seqlock_read_begin(&timers_state.clock_seqlock);
> + if (!timers_state.cpu_ticks_enabled) {
> + ti = timers_state.cpu_clock_offset;
> + } else {
> + ti = get_clock();
> + ti += timers_state.cpu_clock_offset;
> + }
> + } while (seqlock_read_check(&timers_state.clock_seqlock, start));
> +
> + return ti;
> }
>
> /* enable cpu_get_ticks() */
> void cpu_enable_ticks(void)
> {
> + /* Here, the thing really protected by the seqlock is cpu_clock. */
> + seqlock_write_lock(&timers_state.clock_seqlock);
> if (!timers_state.cpu_ticks_enabled) {
> timers_state.cpu_ticks_offset -= cpu_get_real_ticks();
> timers_state.cpu_clock_offset -= get_clock();
> timers_state.cpu_ticks_enabled = 1;
> }
> + seqlock_write_unlock(&timers_state.clock_seqlock);
> }
>
> /* disable cpu_get_ticks() : the clock is stopped. You must not call
> cpu_get_ticks() after that. */
> void cpu_disable_ticks(void)
> {
> + /* Here, the thing really protected by the seqlock is cpu_clock. */
> + seqlock_write_lock(&timers_state.clock_seqlock);
> if (timers_state.cpu_ticks_enabled) {
> timers_state.cpu_ticks_offset = cpu_get_ticks();
> timers_state.cpu_clock_offset = cpu_get_clock();
> timers_state.cpu_ticks_enabled = 0;
> }
> + seqlock_write_unlock(&timers_state.clock_seqlock);
> }
>
> /* Correlation between real and virtual time is always going to be
> @@ -364,6 +383,9 @@ static const VMStateDescription vmstate_timers = {
>
> void configure_icount(const char *option)
> {
> + QemuMutex *mutex = g_malloc0(sizeof(QemuMutex));
> + qemu_mutex_init(mutex);
> + seqlock_init(&timers_state.clock_seqlock, mutex);
> vmstate_register(NULL, 0, &vmstate_timers, &timers_state);
> if (!option) {
> return;
> --
> 1.8.1.4
>
>
* Re: [Qemu-devel] [PATCH 2/4] timer: protect timers_state's clock with seqlock
2013-08-05 13:29 ` Paolo Bonzini
@ 2013-08-06 5:58 ` liu ping fan
2013-08-06 7:31 ` Paolo Bonzini
0 siblings, 1 reply; 14+ messages in thread
From: liu ping fan @ 2013-08-06 5:58 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, qemu-devel, Alex Bligh,
MORITA Kazutaka
On Mon, Aug 5, 2013 at 9:29 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>> In KVM mode, vm_clock may be read outside the BQL.
>
> Not just in KVM mode (we will be able to use dataplane with TCG sooner
> or later), actually.
>
Oh. But this patch does not fix cpu_get_icount()'s thread-safety issue.
So for now, could I just change the commit log instead of fixing it?
Regards,
Pingfan
> Otherwise looks good!
>
> Paolo
>
>> This exposes
>> timers_state -- the foundation of vm_clock -- to a race condition,
>> so use a private lock to protect it.
>>
>> Note that in TCG mode, vm_clock is still read inside the BQL, so
>> icount is left unchanged. As for cpu_ticks in timers_state, it is
>> still protected by the BQL.
>>
>> Lock rule: the private lock is innermost, i.e. BQL -> "this lock"
>>
>> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
>> ---
>> cpus.c | 36 +++++++++++++++++++++++++++++-------
>> 1 file changed, 29 insertions(+), 7 deletions(-)
>>
>> diff --git a/cpus.c b/cpus.c
>> index 85e743d..ab92db9 100644
>> --- a/cpus.c
>> +++ b/cpus.c
>> @@ -107,12 +107,17 @@ static int64_t qemu_icount;
>> typedef struct TimersState {
>> int64_t cpu_ticks_prev;
>> int64_t cpu_ticks_offset;
>> + /* QemuClock will be read outside the BQL, so protect it with a private lock.
>> + * As for cpu_ticks, there is no requirement to read it outside the BQL.
>> + * Lock rule: innermost
>> + */
>> + QemuSeqLock clock_seqlock;
>> int64_t cpu_clock_offset;
>> int32_t cpu_ticks_enabled;
>> int64_t dummy;
>> } TimersState;
>>
>> -TimersState timers_state;
>> +static TimersState timers_state;
>>
>> /* Return the virtual CPU time, based on the instruction counter. */
>> int64_t cpu_get_icount(void)
>> @@ -132,6 +137,7 @@ int64_t cpu_get_icount(void)
>> }
>>
>> /* return the host CPU cycle counter and handle stop/restart */
>> +/* cpu_ticks is safe to read while holding the BQL */
>> int64_t cpu_get_ticks(void)
>> {
>> if (use_icount) {
>> @@ -156,33 +162,46 @@ int64_t cpu_get_ticks(void)
>> int64_t cpu_get_clock(void)
>> {
>> int64_t ti;
>> - if (!timers_state.cpu_ticks_enabled) {
>> - return timers_state.cpu_clock_offset;
>> - } else {
>> - ti = get_clock();
>> - return ti + timers_state.cpu_clock_offset;
>> - }
>> + unsigned start;
>> +
>> + do {
>> + start = seqlock_read_begin(&timers_state.clock_seqlock);
>> + if (!timers_state.cpu_ticks_enabled) {
>> + ti = timers_state.cpu_clock_offset;
>> + } else {
>> + ti = get_clock();
>> + ti += timers_state.cpu_clock_offset;
>> + }
>> + } while (seqlock_read_check(&timers_state.clock_seqlock, start));
>> +
>> + return ti;
>> }
>>
>> /* enable cpu_get_ticks() */
>> void cpu_enable_ticks(void)
>> {
>> + /* Here, the thing really protected by the seqlock is cpu_clock. */
>> + seqlock_write_lock(&timers_state.clock_seqlock);
>> if (!timers_state.cpu_ticks_enabled) {
>> timers_state.cpu_ticks_offset -= cpu_get_real_ticks();
>> timers_state.cpu_clock_offset -= get_clock();
>> timers_state.cpu_ticks_enabled = 1;
>> }
>> + seqlock_write_unlock(&timers_state.clock_seqlock);
>> }
>>
>> /* disable cpu_get_ticks() : the clock is stopped. You must not call
>> cpu_get_ticks() after that. */
>> void cpu_disable_ticks(void)
>> {
>> + /* Here, the thing really protected by the seqlock is cpu_clock. */
>> + seqlock_write_lock(&timers_state.clock_seqlock);
>> if (timers_state.cpu_ticks_enabled) {
>> timers_state.cpu_ticks_offset = cpu_get_ticks();
>> timers_state.cpu_clock_offset = cpu_get_clock();
>> timers_state.cpu_ticks_enabled = 0;
>> }
>> + seqlock_write_unlock(&timers_state.clock_seqlock);
>> }
>>
>> /* Correlation between real and virtual time is always going to be
>> @@ -364,6 +383,9 @@ static const VMStateDescription vmstate_timers = {
>>
>> void configure_icount(const char *option)
>> {
>> + QemuMutex *mutex = g_malloc0(sizeof(QemuMutex));
>> + qemu_mutex_init(mutex);
>> + seqlock_init(&timers_state.clock_seqlock, mutex);
>> vmstate_register(NULL, 0, &vmstate_timers, &timers_state);
>> if (!option) {
>> return;
>> --
>> 1.8.1.4
>>
>>
>
* Re: [Qemu-devel] [PATCH 2/4] timer: protect timers_state's clock with seqlock
2013-08-06 5:58 ` liu ping fan
@ 2013-08-06 7:31 ` Paolo Bonzini
0 siblings, 0 replies; 14+ messages in thread
From: Paolo Bonzini @ 2013-08-06 7:31 UTC (permalink / raw)
To: liu ping fan
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, qemu-devel, Alex Bligh,
MORITA Kazutaka
On 08/06/2013 07:58 AM, liu ping fan wrote:
> On Mon, Aug 5, 2013 at 9:29 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>
>>> In KVM mode, vm_clock may be read outside the BQL.
>>
>> Not just in KVM mode (we will be able to use dataplane with TCG sooner
>> or later), actually.
>>
> Oh. But this patch does not fix cpu_get_icount()'s thread-safety issue.
> So for now, could I just change the commit log instead of fixing it?
Yeah, icount is a bit more complicated. Just change the commit log.
Paolo
* Re: [Qemu-devel] [PATCH 2/4] timer: protect timers_state's clock with seqlock
2013-08-05 7:33 ` [Qemu-devel] [PATCH 2/4] timer: protect timers_state's clock with seqlock Liu Ping Fan
2013-08-05 13:29 ` Paolo Bonzini
@ 2013-08-06 9:30 ` Stefan Hajnoczi
2013-08-07 5:46 ` liu ping fan
1 sibling, 1 reply; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 9:30 UTC (permalink / raw)
To: Liu Ping Fan
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, qemu-devel, Alex Bligh,
Paolo Bonzini, MORITA Kazutaka
On Mon, Aug 05, 2013 at 03:33:24PM +0800, Liu Ping Fan wrote:
> diff --git a/cpus.c b/cpus.c
> index 85e743d..ab92db9 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -107,12 +107,17 @@ static int64_t qemu_icount;
> typedef struct TimersState {
> int64_t cpu_ticks_prev;
> int64_t cpu_ticks_offset;
> + /* QemuClock will be read outside the BQL, so protect it with a private lock.
> + * As for cpu_ticks, there is no requirement to read it outside the BQL.
> + * Lock rule: innermost
> + */
Please document exactly which fields the lock protects.
> /* enable cpu_get_ticks() */
> void cpu_enable_ticks(void)
> {
> + /* Here, the thing really protected by the seqlock is cpu_clock. */
What is cpu_clock?
> @@ -364,6 +383,9 @@ static const VMStateDescription vmstate_timers = {
>
> void configure_icount(const char *option)
> {
> + QemuMutex *mutex = g_malloc0(sizeof(QemuMutex));
> + qemu_mutex_init(mutex);
> + seqlock_init(&timers_state.clock_seqlock, mutex);
We always set up this mutex, so it could be a field in timers_state.
That avoids the g_malloc() without g_free().
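For illustration, the suggested layout might look like this sketch (an
assumption about the next version, not the committed code):

    typedef struct TimersState {
        int64_t cpu_ticks_prev;
        int64_t cpu_ticks_offset;
        /* Embedded mutex: no g_malloc() that is never g_free()'d. */
        QemuMutex clock_mutex;
        QemuSeqLock clock_seqlock;
        int64_t cpu_clock_offset;
        int32_t cpu_ticks_enabled;
        int64_t dummy;
    } TimersState;

    /* ... and in configure_icount(): */
    qemu_mutex_init(&timers_state.clock_mutex);
    seqlock_init(&timers_state.clock_seqlock, &timers_state.clock_mutex);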
* Re: [Qemu-devel] [PATCH 2/4] timer: protect timers_state's clock with seqlock
2013-08-06 9:30 ` Stefan Hajnoczi
@ 2013-08-07 5:46 ` liu ping fan
0 siblings, 0 replies; 14+ messages in thread
From: liu ping fan @ 2013-08-07 5:46 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, qemu-devel, Alex Bligh,
Paolo Bonzini, MORITA Kazutaka
On Tue, Aug 6, 2013 at 5:30 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Mon, Aug 05, 2013 at 03:33:24PM +0800, Liu Ping Fan wrote:
>> diff --git a/cpus.c b/cpus.c
>> index 85e743d..ab92db9 100644
>> --- a/cpus.c
>> +++ b/cpus.c
>> @@ -107,12 +107,17 @@ static int64_t qemu_icount;
>> typedef struct TimersState {
>> int64_t cpu_ticks_prev;
>> int64_t cpu_ticks_offset;
>> + /* QemuClock will be read outside the BQL, so protect it with a private lock.
>> + * As for cpu_ticks, there is no requirement to read it outside the BQL.
>> + * Lock rule: innermost
>> + */
>
> Please document exactly which fields the lock protects.
>
The lock protects cpu_clock_offset. Will fix it.
>> /* enable cpu_get_ticks() */
>> void cpu_enable_ticks(void)
>> {
>> + /* Here, the thing really protected by the seqlock is cpu_clock. */
>
> What is cpu_clock?
>
cpu_clock_offset
>> @@ -364,6 +383,9 @@ static const VMStateDescription vmstate_timers = {
>>
>> void configure_icount(const char *option)
>> {
>> + QemuMutex *mutex = g_malloc0(sizeof(QemuMutex));
>> + qemu_mutex_init(mutex);
>> + seqlock_init(&timers_state.clock_seqlock, mutex);
>
> We always set up this mutex, so it could be a field in timers_state.
> That avoids the g_malloc() without g_free().
>
Will fix in the next version.
Thanks & regards,
Pingfan
* [Qemu-devel] [PATCH 3/4] qemu-thread: add QemuEvent
2013-08-05 7:33 [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff Liu Ping Fan
2013-08-05 7:33 ` [Qemu-devel] [PATCH 1/4] seqlock: introduce read-write seqlock Liu Ping Fan
2013-08-05 7:33 ` [Qemu-devel] [PATCH 2/4] timer: protect timers_state's clock with seqlock Liu Ping Fan
@ 2013-08-05 7:33 ` Liu Ping Fan
2013-08-05 7:33 ` [Qemu-devel] [PATCH 4/4] timer: make qemu_clock_enable sync between disable and timer's cb Liu Ping Fan
2013-08-05 10:00 ` [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff Alex Bligh
4 siblings, 0 replies; 14+ messages in thread
From: Liu Ping Fan @ 2013-08-05 7:33 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, Alex Bligh,
Paolo Bonzini, MORITA Kazutaka
From: Paolo Bonzini <pbonzini@redhat.com>
This emulates Win32 manual-reset events using futexes or condition
variables. Typical ways to use them are with multi-producer,
single-consumer data structures, to test for a complex condition whose
elements come from different threads:
    for (;;) {
        qemu_event_reset(ev);
        ... test complex condition ...
        if (condition is true) {
            break;
        }
        qemu_event_wait(ev);
    }
Or more efficiently (but with some duplication):
    ... evaluate condition ...
    while (!condition) {
        qemu_event_reset(ev);
        ... evaluate condition ...
        if (!condition) {
            qemu_event_wait(ev);
            ... evaluate condition ...
        }
    }
QemuEvent provides a very fast userspace path in the common case when
no other thread is waiting, or the event is not changing state. It
is used to report RCU quiescent states to the thread calling
synchronize_rcu (the latter being the single consumer), and to report
call_rcu invocations to the thread that receives them.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
include/qemu/thread-posix.h | 8 +++
include/qemu/thread-win32.h | 4 ++
include/qemu/thread.h | 7 +++
util/qemu-thread-posix.c | 116 ++++++++++++++++++++++++++++++++++++++++++++
util/qemu-thread-win32.c | 26 ++++++++++
5 files changed, 161 insertions(+)
diff --git a/include/qemu/thread-posix.h b/include/qemu/thread-posix.h
index 0f30dcc..916b2a7 100644
--- a/include/qemu/thread-posix.h
+++ b/include/qemu/thread-posix.h
@@ -21,6 +21,14 @@ struct QemuSemaphore {
#endif
};
+struct QemuEvent {
+#ifndef __linux__
+ pthread_mutex_t lock;
+ pthread_cond_t cond;
+#endif
+ unsigned value;
+};
+
struct QemuThread {
pthread_t thread;
};
diff --git a/include/qemu/thread-win32.h b/include/qemu/thread-win32.h
index 13adb95..3d58081 100644
--- a/include/qemu/thread-win32.h
+++ b/include/qemu/thread-win32.h
@@ -17,6 +17,10 @@ struct QemuSemaphore {
HANDLE sema;
};
+struct QemuEvent {
+ HANDLE event;
+};
+
typedef struct QemuThreadData QemuThreadData;
struct QemuThread {
QemuThreadData *data;
diff --git a/include/qemu/thread.h b/include/qemu/thread.h
index c02404b..3e32c65 100644
--- a/include/qemu/thread.h
+++ b/include/qemu/thread.h
@@ -7,6 +7,7 @@
typedef struct QemuMutex QemuMutex;
typedef struct QemuCond QemuCond;
typedef struct QemuSemaphore QemuSemaphore;
+typedef struct QemuEvent QemuEvent;
typedef struct QemuThread QemuThread;
#ifdef _WIN32
@@ -45,6 +46,12 @@ void qemu_sem_wait(QemuSemaphore *sem);
int qemu_sem_timedwait(QemuSemaphore *sem, int ms);
void qemu_sem_destroy(QemuSemaphore *sem);
+void qemu_event_init(QemuEvent *ev, bool init);
+void qemu_event_set(QemuEvent *ev);
+void qemu_event_reset(QemuEvent *ev);
+void qemu_event_wait(QemuEvent *ev);
+void qemu_event_destroy(QemuEvent *ev);
+
void qemu_thread_create(QemuThread *thread,
void *(*start_routine)(void *),
void *arg, int mode);
diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
index 4489abf..8178f9b 100644
--- a/util/qemu-thread-posix.c
+++ b/util/qemu-thread-posix.c
@@ -20,7 +20,12 @@
#include <limits.h>
#include <unistd.h>
#include <sys/time.h>
+#ifdef __linux__
+#include <sys/syscall.h>
+#include <linux/futex.h>
+#endif
#include "qemu/thread.h"
+#include "qemu/atomic.h"
static void error_exit(int err, const char *msg)
{
@@ -268,6 +273,117 @@ void qemu_sem_wait(QemuSemaphore *sem)
#endif
}
+#ifdef __linux__
+#define futex(...) syscall(__NR_futex, __VA_ARGS__)
+
+static inline void futex_wake(QemuEvent *ev, int n)
+{
+ futex(ev, FUTEX_WAKE, n, NULL, NULL, 0);
+}
+
+static inline void futex_wait(QemuEvent *ev, unsigned val)
+{
+ futex(ev, FUTEX_WAIT, (int) val, NULL, NULL, 0);
+}
+#else
+static inline void futex_wake(QemuEvent *ev, int n)
+{
+ if (n == 1) {
+ pthread_cond_signal(&ev->cond);
+ } else {
+ pthread_cond_broadcast(&ev->cond);
+ }
+}
+
+static inline void futex_wait(QemuEvent *ev, unsigned val)
+{
+ pthread_mutex_lock(&ev->lock);
+ if (ev->value == val) {
+ pthread_cond_wait(&ev->cond, &ev->lock);
+ }
+ pthread_mutex_unlock(&ev->lock);
+}
+#endif
+
+/* Valid transitions:
+ * - free->set, when setting the event
+ * - busy->set, when setting the event, followed by futex_wake
+ * - set->free, when resetting the event
+ * - free->busy, when waiting
+ *
+ * set->busy does not happen (it can be observed from the outside but
+ * it really is set->free->busy).
+ *
+ * busy->free provably cannot happen; to enforce it, the set->free transition
+ * is done with an OR, which becomes a no-op if the event has concurrently
+ * transitioned to free or busy.
+ */
+
+#define EV_SET 0
+#define EV_FREE 1
+#define EV_BUSY -1
+
+void qemu_event_init(QemuEvent *ev, bool init)
+{
+#ifndef __linux__
+ pthread_mutex_init(&ev->lock, NULL);
+ pthread_cond_init(&ev->cond, NULL);
+#endif
+
+ ev->value = (init ? EV_SET : EV_FREE);
+}
+
+void qemu_event_destroy(QemuEvent *ev)
+{
+#ifndef __linux__
+ pthread_mutex_destroy(&ev->lock);
+ pthread_cond_destroy(&ev->cond);
+#endif
+}
+
+void qemu_event_set(QemuEvent *ev)
+{
+ if (atomic_mb_read(&ev->value) != EV_SET) {
+ if (atomic_xchg(&ev->value, EV_SET) == EV_BUSY) {
+ /* There were waiters, wake them up. */
+ futex_wake(ev, INT_MAX);
+ }
+ }
+}
+
+void qemu_event_reset(QemuEvent *ev)
+{
+ if (atomic_mb_read(&ev->value) == EV_SET) {
+ /*
+ * If there was a concurrent reset (or even reset+wait),
+ * do nothing. Otherwise change EV_SET->EV_FREE.
+ */
+ atomic_or(&ev->value, EV_FREE);
+ }
+}
+
+void qemu_event_wait(QemuEvent *ev)
+{
+ unsigned value;
+
+ value = atomic_mb_read(&ev->value);
+ if (value != EV_SET) {
+ if (value == EV_FREE) {
+ /*
+ * Leave the event reset and tell qemu_event_set that there
+ * are waiters. No need to retry, because there cannot be
+ * a concurrent busy->free transition. After the CAS, the
+ * event will be either set or busy.
+ */
+ if (atomic_cmpxchg(&ev->value, EV_FREE, EV_BUSY) == EV_SET) {
+ return;
+ }
+ }
+ futex_wait(ev, EV_BUSY);
+ }
+}
+
+
void qemu_thread_create(QemuThread *thread,
void *(*start_routine)(void*),
void *arg, int mode)
diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c
index 517878d..27a5217 100644
--- a/util/qemu-thread-win32.c
+++ b/util/qemu-thread-win32.c
@@ -227,6 +227,32 @@ void qemu_sem_wait(QemuSemaphore *sem)
}
}
+void qemu_event_init(QemuEvent *ev, bool init)
+{
+ /* Manual reset. */
+ ev->event = CreateEvent(NULL, TRUE, init, NULL);
+}
+
+void qemu_event_destroy(QemuEvent *ev)
+{
+ CloseHandle(ev->event);
+}
+
+void qemu_event_set(QemuEvent *ev)
+{
+ SetEvent(ev->event);
+}
+
+void qemu_event_reset(QemuEvent *ev)
+{
+ ResetEvent(ev->event);
+}
+
+void qemu_event_wait(QemuEvent *ev)
+{
+ WaitForSingleObject(ev->event, INFINITE);
+}
+
struct QemuThreadData {
/* Passed to win32_start_routine. */
void *(*start_routine)(void *);
--
1.8.1.4
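As a concrete illustration of the first pattern from the commit message,
a minimal single-consumer sketch; work_ev and work_pending are hypothetical
names, and a real user would test its own condition:

    #include "qemu/thread.h"
    #include "qemu/atomic.h"

    static QemuEvent work_ev;   /* init once: qemu_event_init(&work_ev, false) */
    static int work_pending;

    /* Any number of producers may run this concurrently. */
    static void producer(void)
    {
        atomic_mb_set(&work_pending, 1);
        qemu_event_set(&work_ev);      /* wake the consumer if it is waiting */
    }

    /* Single consumer: reset before testing, wait only if the test fails. */
    static void consumer(void)
    {
        for (;;) {
            qemu_event_reset(&work_ev);
            if (atomic_mb_read(&work_pending)) {
                break;
            }
            qemu_event_wait(&work_ev);
        }
        /* ... process the work ... */
    }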
* [Qemu-devel] [PATCH 4/4] timer: make qemu_clock_enable sync between disable and timer's cb
2013-08-05 7:33 [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff Liu Ping Fan
` (2 preceding siblings ...)
2013-08-05 7:33 ` [Qemu-devel] [PATCH 3/4] qemu-thread: add QemuEvent Liu Ping Fan
@ 2013-08-05 7:33 ` Liu Ping Fan
2013-08-05 10:53 ` Paolo Bonzini
2013-08-05 10:00 ` [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff Alex Bligh
4 siblings, 1 reply; 14+ messages in thread
From: Liu Ping Fan @ 2013-08-05 7:33 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, Alex Bligh,
Paolo Bonzini, MORITA Kazutaka
After disabling the QEMUClock, we should make sure that no QEMUTimers
are still in flight. To implement that with low overhead, we resort
to QemuEvent. The caller that disables the clock will wait on each
timerlist's QemuEvent.
Note that qemu_clock_enable(foo, false) can _not_ be called from a
timer's callback. Also, callers of qemu_clock_enable() must
synchronize among themselves; this patch does not protect them.
Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
include/qemu/timer.h | 1 +
qemu-timer.c | 11 +++++++++++
2 files changed, 12 insertions(+)
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 1363316..ca09ba2 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -85,6 +85,7 @@ int64_t timerlistgroup_deadline_ns(QEMUTimerListGroup tlg);
int qemu_timeout_ns_to_ms(int64_t ns);
int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
+/* Disabling the clock can not be done from a timer's callback */
void qemu_clock_enable(QEMUClock *clock, bool enabled);
void qemu_clock_warp(QEMUClock *clock);
diff --git a/qemu-timer.c b/qemu-timer.c
index ebe7597..5828107 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -71,6 +71,8 @@ struct QEMUTimerList {
QLIST_ENTRY(QEMUTimerList) list;
QEMUTimerListNotifyCB *notify_cb;
void *notify_opaque;
+ /* lightweight way to mark the end of the timerlist's run */
+ QemuEvent ev;
};
struct QEMUTimer {
@@ -92,6 +94,7 @@ static QEMUTimerList *timerlist_new_from_clock(QEMUClock *clock)
QEMUTimerList *tl;
tl = g_malloc0(sizeof(QEMUTimerList));
+ qemu_event_init(&tl->ev, false);
tl->clock = clock;
QLIST_INSERT_HEAD(&clock->timerlists, tl, list);
return tl;
@@ -145,12 +148,18 @@ void qemu_clock_notify(QEMUClock *clock)
}
}
+/* Disabling the clock can _not_ be done from a timer's callback */
void qemu_clock_enable(QEMUClock *clock, bool enabled)
{
+ QEMUTimerList *tl;
bool old = clock->enabled;
clock->enabled = enabled;
if (enabled && !old) {
qemu_clock_notify(clock);
+ } else if (!enabled && old) {
+ QLIST_FOREACH(tl, &clock->timerlists, list) {
+ qemu_event_wait(&tl->ev);
+ }
}
}
@@ -419,6 +428,7 @@ bool timerlist_run_timers(QEMUTimerList *tl)
}
current_time = qemu_get_clock_ns(tl->clock);
+ qemu_event_reset(&tl->ev);
for(;;) {
ts = tl->active_timers;
if (!qemu_timer_expired_ns(ts, current_time)) {
@@ -432,6 +442,7 @@ bool timerlist_run_timers(QEMUTimerList *tl)
ts->cb(ts->opaque);
progress = true;
}
+ qemu_event_set(&tl->ev);
return progress;
}
--
1.8.1.4
* Re: [Qemu-devel] [PATCH 4/4] timer: make qemu_clock_enable sync between disable and timer's cb
2013-08-05 7:33 ` [Qemu-devel] [PATCH 4/4] timer: make qemu_clock_enable sync between disable and timer's cb Liu Ping Fan
@ 2013-08-05 10:53 ` Paolo Bonzini
0 siblings, 0 replies; 14+ messages in thread
From: Paolo Bonzini @ 2013-08-05 10:53 UTC (permalink / raw)
To: Liu Ping Fan
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, qemu-devel, Alex Bligh,
MORITA Kazutaka
On Aug 05 2013, Liu Ping Fan wrote:
> After disabling the QEMUClock, we should make sure that no QEMUTimers
> are still in flight. To implement that with low overhead, we resort
> to QemuEvent. The caller that disables the clock will wait on each
> timerlist's QemuEvent.
>
> Note that qemu_clock_enable(foo, false) can _not_ be called from a
> timer's callback. Also, callers of qemu_clock_enable() must
> synchronize among themselves; this patch does not protect them.
>
> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> ---
> include/qemu/timer.h | 1 +
> qemu-timer.c | 11 +++++++++++
> 2 files changed, 12 insertions(+)
>
> diff --git a/include/qemu/timer.h b/include/qemu/timer.h
> index 1363316..ca09ba2 100644
> --- a/include/qemu/timer.h
> +++ b/include/qemu/timer.h
> @@ -85,6 +85,7 @@ int64_t timerlistgroup_deadline_ns(QEMUTimerListGroup tlg);
>
> int qemu_timeout_ns_to_ms(int64_t ns);
> int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
> +/* Disabling the clock can not be done from a timer's callback */
See below for a more verbose version of the comment. For now,
leave it only in the .c file; we should add comments to all of
timer.h.
> void qemu_clock_enable(QEMUClock *clock, bool enabled);
> void qemu_clock_warp(QEMUClock *clock);
>
> diff --git a/qemu-timer.c b/qemu-timer.c
> index ebe7597..5828107 100644
> --- a/qemu-timer.c
> +++ b/qemu-timer.c
> @@ -71,6 +71,8 @@ struct QEMUTimerList {
> QLIST_ENTRY(QEMUTimerList) list;
> QEMUTimerListNotifyCB *notify_cb;
> void *notify_opaque;
> + /* lightweight way to mark the end of the timerlist's run */
> + QemuEvent ev;
> };
>
> struct QEMUTimer {
> @@ -92,6 +94,7 @@ static QEMUTimerList *timerlist_new_from_clock(QEMUClock *clock)
> QEMUTimerList *tl;
>
> tl = g_malloc0(sizeof(QEMUTimerList));
> + qemu_event_init(&tl->ev, false);
The event should start as "set", since "set" means "not inside
qemu_run_timers".
> tl->clock = clock;
> QLIST_INSERT_HEAD(&clock->timerlists, tl, list);
> return tl;
> @@ -145,12 +148,18 @@ void qemu_clock_notify(QEMUClock *clock)
> }
> }
>
> +/* Disabling the clock can _not_ be done from a timer's callback */
/* Disabling the clock will wait for related timerlists to stop
* executing qemu_run_timers. Thus, this functions should not
* be used from the callback of a timer that is based on @clock.
* Doing so would cause a deadlock.
*/
> void qemu_clock_enable(QEMUClock *clock, bool enabled)
> {
> + QEMUTimerList *tl;
> bool old = clock->enabled;
> clock->enabled = enabled;
> if (enabled && !old) {
> qemu_clock_notify(clock);
> + } else if (!enabled && old) {
> + QLIST_FOREACH(tl, &clock->timerlists, list) {
> + qemu_event_wait(&tl->ev);
> + }
> }
> }
>
> @@ -419,6 +428,7 @@ bool timerlist_run_timers(QEMUTimerList *tl)
> }
>
> current_time = qemu_get_clock_ns(tl->clock);
> + qemu_event_reset(&tl->ev);
Race condition here. You need to test clock->enabled while the
event is reset. Otherwise you get:
-------------------------------------------------------------------------
thread 1 is running                     thread 2 is running
qemu_clock_enable(foo, false)           qemu_run_timers(tl);
-------------------------------------------------------------------------
** event is initially set **
                                        if (!clock->enabled) return;
clock->enabled = false;
qemu_event_wait(&tl->ev);
return;
                                        qemu_event_reset(&tl->ev);
                                        invokes callback
                                        qemu_event_set(&tl->ev);
-------------------------------------------------------------------------
violating the invariant that no callbacks are invoked after the return from
qemu_clock_enable(foo, false).
Paolo
> for(;;) {
> ts = tl->active_timers;
> if (!qemu_timer_expired_ns(ts, current_time)) {
> @@ -432,6 +442,7 @@ bool timerlist_run_timers(QEMUTimerList *tl)
> ts->cb(ts->opaque);
> progress = true;
> }
> + qemu_event_set(&tl->ev);
> return progress;
> }
>
> --
> 1.8.1.4
>
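To make the suggested ordering concrete, a minimal sketch of
timerlist_run_timers with the enabled test moved after the reset (an
assumption about the shape of the fix, not the actual follow-up patch;
the loop body is compressed from the patch context):

    bool timerlist_run_timers(QEMUTimerList *tl)
    {
        QEMUTimer *ts;
        int64_t current_time;
        bool progress = false;

        qemu_event_reset(&tl->ev);
        if (!tl->clock->enabled) {
            /* Raced with qemu_clock_enable(clock, false): run nothing. */
            goto out;
        }

        current_time = qemu_get_clock_ns(tl->clock);
        for (;;) {
            ts = tl->active_timers;
            if (!qemu_timer_expired_ns(ts, current_time)) {
                break;
            }
            /* ... unlink ts from the list, then run its callback ... */
            ts->cb(ts->opaque);
            progress = true;
        }

    out:
        qemu_event_set(&tl->ev);
        return progress;
    }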
* Re: [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff
2013-08-05 7:33 [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff Liu Ping Fan
` (3 preceding siblings ...)
2013-08-05 7:33 ` [Qemu-devel] [PATCH 4/4] timer: make qemu_clock_enable sync between disable and timer's cb Liu Ping Fan
@ 2013-08-05 10:00 ` Alex Bligh
2013-08-06 5:37 ` liu ping fan
4 siblings, 1 reply; 14+ messages in thread
From: Alex Bligh @ 2013-08-05 10:00 UTC (permalink / raw)
To: Liu Ping Fan, qemu-devel
Cc: Kevin Wolf, Stefan Hajnoczi, Jan Kiszka, Alex Bligh,
Paolo Bonzini, MORITA Kazutaka
Pingfan,
--On 5 August 2013 15:33:22 +0800 Liu Ping Fan
<pingfank@linux.vnet.ibm.com> wrote:
> The patches have been rebased onto Alex's [RFC] [PATCHv5 00/16] aio /
> timers: Add AioContext timers and use ppoll
> permalink.gmane.org/gmane.comp.emulators.qemu/226333
>
> Due to some other compile errors, I can not finish compiling; I will fix
> it later.
>
> Changes since last version:
> 1. drop the overlapping part and leave only the thread-safe stuff
> 2. for timers_state, only vm_clock is currently read outside the BQL;
>    there is no protection for ticks (protection would cost more in the
>    read_tsc path)
> 3. use the lightweight QemuEvent to re-implement
>    qemu_clock_enable(foo, false)
I think you may need to protect a little more.
For instance, do we need to take a lock whilst traversing a QEMUTimerList?
I hope the answer to this is no. The design idea of my stuff was that only
one thread would be manipulating or traversing this list. As the notify
function is a property of the QEMUTimerList itself, no traversal is
necessary for that. However, the icount stuff (per my patch 15) does
look at the deadlines for the vm_clock QEMUTimerLists (which is the
first entry). Is that always going to be thread safe? Before the icount
stuff, I was pretty certain this did not need a lock, but perhaps
it does now. If the soonest deadline on any QEMUTimerList was stored
in a 64 bit atomic variable, this might remove the need for a lock;
it's possible that putting some memory barrier operations in might
be enough.
Also, we maintain a per-clock list of QEMUTimerLists. This list is
traversed by the icount stuff, by things adding and removing timers
(e.g. creation/deletion of AioContext) and whenever a QEMUClock is
reenabled. I think this DOES need a lock. Aside from the icount
stuff, usage is very infrequent. It's far more frequent with the
icount stuff, so it should probably be optimised for that case.
--
Alex Bligh
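To make the atomic-deadline idea concrete, a rough sketch; the deadline_ns
field and both helpers are invented for illustration, and this assumes the
host can load and store 64-bit values atomically:

    /* Sketch: each QEMUTimerList publishes its soonest deadline in a
     * 64-bit variable (hypothetical field "deadline_ns") that the icount
     * code can read without taking a lock.
     */
    static void timerlist_publish_deadline(QEMUTimerList *tl)
    {
        int64_t deadline = tl->active_timers ?
                           tl->active_timers->expire_time : INT64_MAX;
        /* Publish; a real version would pair this with barriers or use
         * stronger primitives to order against the list update itself. */
        atomic_set(&tl->deadline_ns, deadline);
    }

    static int64_t timerlist_read_deadline(QEMUTimerList *tl)
    {
        return atomic_read(&tl->deadline_ns);
    }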
* Re: [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff
2013-08-05 10:00 ` [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff Alex Bligh
@ 2013-08-06 5:37 ` liu ping fan
2013-08-06 6:14 ` Alex Bligh
0 siblings, 1 reply; 14+ messages in thread
From: liu ping fan @ 2013-08-06 5:37 UTC (permalink / raw)
To: Alex Bligh
Cc: Kevin Wolf, Jan Kiszka, qemu-devel, Stefan Hajnoczi,
Paolo Bonzini, MORITA Kazutaka
On Mon, Aug 5, 2013 at 6:00 PM, Alex Bligh <alex@alex.org.uk> wrote:
> Pingfan,
>
>
> --On 5 August 2013 15:33:22 +0800 Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> wrote:
>
>> The patches have been rebased onto Alex's [RFC] [PATCHv5 00/16] aio /
>> timers: Add AioContext timers and use ppoll
>> permalink.gmane.org/gmane.comp.emulators.qemu/226333
>>
>> Due to some other compile errors, I can not finish compiling; I will fix
>> it later.
>>
>> Changes since last version:
>> 1. drop the overlapping part and leave only the thread-safe stuff
>> 2. for timers_state, only vm_clock is currently read outside the BQL;
>>    there is no protection for ticks (protection would cost more in the
>>    read_tsc path)
>> 3. use the lightweight QemuEvent to re-implement
>>    qemu_clock_enable(foo, false)
>
>
> I think you may need to protect a little more.
>
Yes, there are still race issues left. If Stefanha and you will not
do it, I would be glad to do it.
> For instance, do we need to take a lock whilst traversing a QEMUTimerList?
> I hope the answer to this is no. The design idea of my stuff was that only
> one thread would be manipulating or traversing this list. As the notify
> function is a property of the QEMUTimerList itself, no traversal is
> necessary for that. However, the icount stuff (per my patch 15) does
> look at the deadlines for the vm_clock QEMUTimerLists (which is the
> first entry). Is that always going to be thread safe? Before the icount
> stuff, I was pretty certain this did not need a lock, but perhaps
> it does now. If the soonest deadline on any QEMUTimerList was stored
> in a 64 bit atomic variable, this might remove the need for a lock;
> it's possible that putting some memory barrier operations in might
> be enough.
>
> Also, we maintain a per-clock list of QEMUTimerLists. This list is
> traversed by the icount stuff, by things adding and removing timers
> (e.g. creation/deletion of AioContext) and whenever a QEMUClock is
> reenabled. I think this DOES need a lock. Aside from the icount
> stuff, usage is very infrequent. It's far more frequent with the
> icount stuff, so it should probably be optimised for that case.
>
> --
> Alex Bligh
>
* Re: [Qemu-devel] [PATCH 0/4]: timers thread-safe stuff
2013-08-06 5:37 ` liu ping fan
@ 2013-08-06 6:14 ` Alex Bligh
0 siblings, 0 replies; 14+ messages in thread
From: Alex Bligh @ 2013-08-06 6:14 UTC (permalink / raw)
To: liu ping fan
Cc: Kevin Wolf, Alex Bligh, Jan Kiszka, qemu-devel, Stefan Hajnoczi,
Paolo Bonzini, MORITA Kazutaka
Pingfan,
--On 6 August 2013 13:37:02 +0800 liu ping fan <qemulist@gmail.com> wrote:
>> I think you may need to protect a little more.
>>
> Yes. There is still race issue left. If Stefanha and you will not do
> it, I am pleased to do that.
I think you've probably got a better view of what to put in than
I have. I'd just stick a couple of dumb mutexes in and wait until
someone complains about performance. Quite happy to do that but
your approach appeared a bit more nuanced.
--
Alex Bligh