qemu-devel.nongnu.org archive mirror
* Re: [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
  2013-06-25 17:26 Liu Ping Fan
@ 2013-06-25  8:45 ` Stefan Hajnoczi
  2013-06-25  9:40 ` Kevin Wolf
  1 sibling, 0 replies; 11+ messages in thread
From: Stefan Hajnoczi @ 2013-06-25  8:45 UTC (permalink / raw)
  To: Liu Ping Fan; +Cc: Kevin Wolf, Paolo Bonzini, qemu-devel, Anthony Liguori

On Wed, Jun 26, 2013 at 01:26:25AM +0800, Liu Ping Fan wrote:
> BH will be used outside the big lock, so introduce a lock to protect
> the writers, i.e., bh's adders and deleters. The lock only affects
> the writers; bh's callback does not take this extra lock. Note that
> for the same AioContext, aio_bh_poll() cannot run in parallel yet.
> 
> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> 
> ---------
> v4->v5
>   fix some grammar issues
> v3->v4
>   resolve memory order of bh->idle and ->scheduled
>   add comments for qemu_bh_delete/cancel
> ---
>  async.c             | 32 ++++++++++++++++++++++++++++++--
>  include/block/aio.h |  7 +++++++
>  2 files changed, 37 insertions(+), 2 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
  2013-06-25 17:26 Liu Ping Fan
  2013-06-25  8:45 ` Stefan Hajnoczi
@ 2013-06-25  9:40 ` Kevin Wolf
  2013-06-25 10:09   ` Paolo Bonzini
  2013-06-26  9:46   ` liu ping fan
  1 sibling, 2 replies; 11+ messages in thread
From: Kevin Wolf @ 2013-06-25  9:40 UTC (permalink / raw)
  To: Liu Ping Fan; +Cc: Paolo Bonzini, Stefan Hajnoczi, qemu-devel, Anthony Liguori

On 25.06.2013 at 19:26, Liu Ping Fan wrote:
> BH will be used outside the big lock, so introduce a lock to protect
> the writers, i.e., bh's adders and deleters. The lock only affects
> the writers; bh's callback does not take this extra lock. Note that
> for the same AioContext, aio_bh_poll() cannot run in parallel yet.
> 
> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> 
> ---------

Please use exactly three dashes so that 'git am' recognises it as the
end of the commit message.
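For context, 'git am' (via git-mailinfo) treats a line consisting of exactly
three dashes as the end of the commit message; anything between that line and
the diff (changelogs, reminders) is dropped from the recorded commit. A sketch
of the intended mail layout (contents illustrative):

```
Subject: [PATCH v5] QEMUBH: make AioContext's bh re-entrant

Commit message body, which 'git am' records.

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
---
v4->v5 changelog notes here are discarded by 'git am'

 async.c             | 32 ++++++++++++++++++++++++++++++--
diff --git a/async.c b/async.c
```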

This doesn't compile yet because smp_read_barrier_depends() isn't merged
yet, so maybe there's still time for some nitpicking: Wouldn't using
atomic_set/get better document things and make them easier to read?
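As a rough sketch of what atomic accessors would look like here, transposed to
standard C11 atomics (DemoBH, demo_bh_schedule and demo_bh_poll are
illustrative names, not QEMU code, and QEMU's own atomic helpers may differ):
a release store publishes bh->idle before bh->scheduled, and the paired
acquire load replaces the raw smp_wmb()/smp_rmb() pair.

```c
#include <stdatomic.h>

/* Illustrative stand-in for QEMUBH: only the two flags that matter here. */
typedef struct DemoBH {
    atomic_int scheduled;
    int idle;
} DemoBH;

/* Writer side: idle must be visible before scheduled (the release store
 * replaces the explicit smp_wmb()). */
static void demo_bh_schedule(DemoBH *bh)
{
    if (atomic_load_explicit(&bh->scheduled, memory_order_relaxed)) {
        return;
    }
    bh->idle = 0;
    atomic_store_explicit(&bh->scheduled, 1, memory_order_release);
}

/* Reader side: once scheduled is observed set, idle is guaranteed to be
 * up to date (the acquire load replaces the explicit smp_rmb()). */
static int demo_bh_poll(DemoBH *bh)
{
    if (!atomic_load_explicit(&bh->scheduled, memory_order_acquire)) {
        return 0;
    }
    atomic_store_explicit(&bh->scheduled, 0, memory_order_relaxed);
    return !bh->idle;  /* ordered after the acquire load */
}
```

The memory_order_* arguments document the pairing directly at each access,
which is the readability gain being suggested.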

It should be correct anyway, so:

Reviewed-by: Kevin Wolf <kwolf@redhat.com>


* Re: [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
  2013-06-25  9:40 ` Kevin Wolf
@ 2013-06-25 10:09   ` Paolo Bonzini
  2013-06-26  9:46   ` liu ping fan
  1 sibling, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2013-06-25 10:09 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: Stefan Hajnoczi, Liu Ping Fan, Anthony Liguori, qemu-devel

On 25/06/2013 11:40, Kevin Wolf wrote:
> On 25.06.2013 at 19:26, Liu Ping Fan wrote:
>> BH will be used outside the big lock, so introduce a lock to protect
>> the writers, i.e., bh's adders and deleters. The lock only affects
>> the writers; bh's callback does not take this extra lock. Note that
>> for the same AioContext, aio_bh_poll() cannot run in parallel yet.
>>
>> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
>>
>> ---------
> 
> Please use exactly three dashes so that 'git am' recognises it as the
> end of the commit message.
> 
> This doesn't compile yet because smp_read_barrier_depends() isn't merged
> yet, so maybe there's still time for some nitpicking: Wouldn't using
> atomic_set/get better document things and make them easier to read?

Good idea.

But it'll be a while before I merge the atomics patch, so it's perhaps
better to get it in via the block branch (same as TLS).

Paolo

> It should be correct anyway, so:
> 
> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
> 


* [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
@ 2013-06-25 17:26 Liu Ping Fan
  2013-06-25  8:45 ` Stefan Hajnoczi
  2013-06-25  9:40 ` Kevin Wolf
  0 siblings, 2 replies; 11+ messages in thread
From: Liu Ping Fan @ 2013-06-25 17:26 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi, Anthony Liguori

BH will be used outside the big lock, so introduce a lock to protect
the writers, i.e., bh's adders and deleters. The lock only affects
the writers; bh's callback does not take this extra lock. Note that
for the same AioContext, aio_bh_poll() cannot run in parallel yet.

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>

---------
v4->v5
  fix some grammar issues
v3->v4
  resolve memory order of bh->idle and ->scheduled
  add comments for qemu_bh_delete/cancel
---
 async.c             | 32 ++++++++++++++++++++++++++++++--
 include/block/aio.h |  7 +++++++
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/async.c b/async.c
index 90fe906..e73b93c 100644
--- a/async.c
+++ b/async.c
@@ -47,11 +47,16 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
     bh->ctx = ctx;
     bh->cb = cb;
     bh->opaque = opaque;
+    qemu_mutex_lock(&ctx->bh_lock);
     bh->next = ctx->first_bh;
+    /* Make sure that the members are ready before putting bh into list */
+    smp_wmb();
     ctx->first_bh = bh;
+    qemu_mutex_unlock(&ctx->bh_lock);
     return bh;
 }
 
+/* aio_bh_poll() cannot be called concurrently for the same AioContext */
 int aio_bh_poll(AioContext *ctx)
 {
     QEMUBH *bh, **bhp, *next;
@@ -61,9 +66,15 @@ int aio_bh_poll(AioContext *ctx)
 
     ret = 0;
     for (bh = ctx->first_bh; bh; bh = next) {
+        /* Make sure that fetching bh happens before accessing its members */
+        smp_read_barrier_depends();
         next = bh->next;
         if (!bh->deleted && bh->scheduled) {
             bh->scheduled = 0;
+            /* Paired with the write barrier in bh_schedule to ensure that
+             * idle and the callback's data are read after bh->scheduled.
+             */
+            smp_rmb();
             if (!bh->idle)
                 ret = 1;
             bh->idle = 0;
@@ -75,6 +86,7 @@ int aio_bh_poll(AioContext *ctx)
 
     /* remove deleted bhs */
     if (!ctx->walking_bh) {
+        qemu_mutex_lock(&ctx->bh_lock);
         bhp = &ctx->first_bh;
         while (*bhp) {
             bh = *bhp;
@@ -85,6 +97,7 @@ int aio_bh_poll(AioContext *ctx)
                 bhp = &bh->next;
             }
         }
+        qemu_mutex_unlock(&ctx->bh_lock);
     }
 
     return ret;
@@ -94,24 +107,38 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
-    bh->scheduled = 1;
     bh->idle = 1;
+    /* Make sure that idle and any writes needed by the callback are done
+     * before those locations are read in aio_bh_poll().
+     */
+    smp_wmb();
+    bh->scheduled = 1;
 }
 
 void qemu_bh_schedule(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
-    bh->scheduled = 1;
     bh->idle = 0;
+    /* Make sure that idle and any writes needed by the callback are done
+     * before those locations are read in aio_bh_poll().
+     */
+    smp_wmb();
+    bh->scheduled = 1;
     aio_notify(bh->ctx);
 }
 
+
+/* This function is async.
+ */
 void qemu_bh_cancel(QEMUBH *bh)
 {
     bh->scheduled = 0;
 }
 
+/* This function is async. The bottom half will actually be deleted
+ * later, by aio_bh_poll().
+ */
 void qemu_bh_delete(QEMUBH *bh)
 {
     bh->scheduled = 0;
@@ -211,6 +238,7 @@ AioContext *aio_context_new(void)
     ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
     ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
     ctx->thread_pool = NULL;
+    qemu_mutex_init(&ctx->bh_lock);
     event_notifier_init(&ctx->notifier, false);
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..cc77771 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -17,6 +17,7 @@
 #include "qemu-common.h"
 #include "qemu/queue.h"
 #include "qemu/event_notifier.h"
+#include "qemu/thread.h"
 
 typedef struct BlockDriverAIOCB BlockDriverAIOCB;
 typedef void BlockDriverCompletionFunc(void *opaque, int ret);
@@ -53,6 +54,8 @@ typedef struct AioContext {
      */
     int walking_handlers;
 
+    /* lock to protect between bh's adders and deleter */
+    QemuMutex bh_lock;
     /* Anchor of the list of Bottom Halves belonging to the context */
     struct QEMUBH *first_bh;
 
@@ -127,6 +130,8 @@ void aio_notify(AioContext *ctx);
  * aio_bh_poll: Poll bottom halves for an AioContext.
  *
  * These are internal functions used by the QEMU main loop.
+ * Note that aio_bh_poll() cannot be called concurrently
+ * for the same AioContext.
  */
 int aio_bh_poll(AioContext *ctx);
 
@@ -163,6 +168,8 @@ void qemu_bh_cancel(QEMUBH *bh);
  * Deleting a bottom half frees the memory that was allocated for it by
  * qemu_bh_new.  It also implies canceling the bottom half if it was
  * scheduled.
+ * This function is async. The bottom half will actually be deleted
+ * later, by aio_bh_poll().
  *
  * @bh: The bottom half to be deleted.
  */
-- 
1.8.1.4

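The overall scheme of the patch above, locked writers publishing list nodes
that a lock-free reader traverses, can be sketched outside QEMU with pthreads
and C11 atomics (List, Node and the function names are illustrative, not
QEMU's actual code):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

typedef struct List {
    pthread_mutex_t lock;   /* serializes writers only, like ctx->bh_lock */
    Node *_Atomic head;     /* read by the poller without the lock */
} List;

/* Adder: take the lock, fill in the node, then publish it with a release
 * store (the analogue of the smp_wmb() before ctx->first_bh = bh). */
static void list_add(List *l, int value)
{
    Node *n = malloc(sizeof(*n));
    n->value = value;
    pthread_mutex_lock(&l->lock);
    n->next = atomic_load_explicit(&l->head, memory_order_relaxed);
    atomic_store_explicit(&l->head, n, memory_order_release);
    pthread_mutex_unlock(&l->lock);
}

/* Reader: walk the list without the lock; the acquire load pairs with the
 * writer's release store, so a node's members are ready once it is seen. */
static int list_sum(List *l)
{
    int sum = 0;
    for (Node *n = atomic_load_explicit(&l->head, memory_order_acquire);
         n != NULL; n = n->next) {
        sum += n->value;
    }
    return sum;
}
```

Because nodes are only prepended and the reader never takes the lock, writers
pay for mutual exclusion among themselves while the hot polling path stays
lock-free, which is exactly the trade-off the commit message describes.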

* Re: [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
  2013-06-25  9:40 ` Kevin Wolf
  2013-06-25 10:09   ` Paolo Bonzini
@ 2013-06-26  9:46   ` liu ping fan
  2013-07-05 13:46     ` Stefan Hajnoczi
  1 sibling, 1 reply; 11+ messages in thread
From: liu ping fan @ 2013-06-26  9:46 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: Paolo Bonzini, Stefan Hajnoczi, qemu-devel, Anthony Liguori

On Tue, Jun 25, 2013 at 5:40 PM, Kevin Wolf <kwolf@redhat.com> wrote:
> On 25.06.2013 at 19:26, Liu Ping Fan wrote:
>> BH will be used outside the big lock, so introduce a lock to protect
>> the writers, i.e., bh's adders and deleters. The lock only affects
>> the writers; bh's callback does not take this extra lock. Note that
>> for the same AioContext, aio_bh_poll() cannot run in parallel yet.
>>
>> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
>>
>> ---------
>
> Please use exactly three dashes so that 'git am' recognises it as the
> end of the commit message.
>
Sorry, I will take care of that next time.  Should I re-post it?

Thanks and regards,
Pingfan
> This doesn't compile yet because smp_read_barrier_depends() isn't merged
> yet, so maybe there's still time for some nitpicking: Wouldn't using
> atomic_set/get better document things and make them easier to read?
>
> It should be correct anyway, so:
>
> Reviewed-by: Kevin Wolf <kwolf@redhat.com>


* Re: [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
  2013-06-26  9:46   ` liu ping fan
@ 2013-07-05 13:46     ` Stefan Hajnoczi
  2013-07-05 14:12       ` Paolo Bonzini
  0 siblings, 1 reply; 11+ messages in thread
From: Stefan Hajnoczi @ 2013-07-05 13:46 UTC (permalink / raw)
  To: liu ping fan
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, qemu-devel,
	Stefan Hajnoczi

On Wed, Jun 26, 2013 at 05:46:11PM +0800, liu ping fan wrote:
> On Tue, Jun 25, 2013 at 5:40 PM, Kevin Wolf <kwolf@redhat.com> wrote:
> > On 25.06.2013 at 19:26, Liu Ping Fan wrote:
> >> BH will be used outside the big lock, so introduce a lock to protect
> >> the writers, i.e., bh's adders and deleters. The lock only affects
> >> the writers; bh's callback does not take this extra lock. Note that
> >> for the same AioContext, aio_bh_poll() cannot run in parallel yet.
> >>
> >> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> >>
> >> ---------
> >
> > Please use exactly three dashes so that 'git am' recognises it as the
> > end of the commit message.
> >
> Sorry, I will take care of that next time.  Should I re-post it?

Hi Ping Fan,
I'm revisiting this patch but already sent the block pull request for
this week.  Therefore I suggest you repost - that will make it easier
for Kevin to find the patch on the list next week when he's merging
patches.

Stefan


* Re: [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
  2013-07-05 13:46     ` Stefan Hajnoczi
@ 2013-07-05 14:12       ` Paolo Bonzini
  0 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2013-07-05 14:12 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Stefan Hajnoczi, liu ping fan, Anthony Liguori,
	qemu-devel

On 05/07/2013 15:46, Stefan Hajnoczi wrote:
> On Wed, Jun 26, 2013 at 05:46:11PM +0800, liu ping fan wrote:
>> On Tue, Jun 25, 2013 at 5:40 PM, Kevin Wolf <kwolf@redhat.com> wrote:
>>> On 25.06.2013 at 19:26, Liu Ping Fan wrote:
>>>> BH will be used outside the big lock, so introduce a lock to protect
>>>> the writers, i.e., bh's adders and deleters. The lock only affects
>>>> the writers; bh's callback does not take this extra lock. Note that
>>>> for the same AioContext, aio_bh_poll() cannot run in parallel yet.
>>>>
>>>> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
>>>>
>>>> ---------
>>>
>>> Please use exactly three dashes so that 'git am' recognises it as the
>>> end of the commit message.
>>>
>> Sorry, I will take care of that next time.  Should I re-post it?
> 
> Hi Ping Fan,
> I'm revisiting this patch but already sent the block pull request for
> this week.  Therefore I suggest you repost - that will make it easier
> for Kevin to find the patch on the list next week when he's merging
> patches.

In the meanwhile, I've sent a pull request for the atomics header that
this patch depends on.

Paolo


* [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
@ 2013-07-07 10:00 Liu Ping Fan
  2013-07-07 12:31 ` Andreas Färber
  2013-07-15  9:28 ` Stefan Hajnoczi
  0 siblings, 2 replies; 11+ messages in thread
From: Liu Ping Fan @ 2013-07-07 10:00 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi

From: Liu Ping Fan <qemulist@gmail.com>

BH will be used outside the big lock, so introduce a lock to protect
the writers, i.e., bh's adders and deleters. The lock only affects
the writers; bh's callback does not take this extra lock. Note that
for the same AioContext, aio_bh_poll() cannot run in parallel yet.

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
Repost it, and thanks Paolo for having sent pull request for the
atomics header that this patch depends on.
---
 async.c             | 32 ++++++++++++++++++++++++++++++--
 include/block/aio.h |  7 +++++++
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/async.c b/async.c
index 90fe906..e73b93c 100644
--- a/async.c
+++ b/async.c
@@ -47,11 +47,16 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
     bh->ctx = ctx;
     bh->cb = cb;
     bh->opaque = opaque;
+    qemu_mutex_lock(&ctx->bh_lock);
     bh->next = ctx->first_bh;
+    /* Make sure that the members are ready before putting bh into list */
+    smp_wmb();
     ctx->first_bh = bh;
+    qemu_mutex_unlock(&ctx->bh_lock);
     return bh;
 }
 
+/* aio_bh_poll() cannot be called concurrently for the same AioContext */
 int aio_bh_poll(AioContext *ctx)
 {
     QEMUBH *bh, **bhp, *next;
@@ -61,9 +66,15 @@ int aio_bh_poll(AioContext *ctx)
 
     ret = 0;
     for (bh = ctx->first_bh; bh; bh = next) {
+        /* Make sure that fetching bh happens before accessing its members */
+        smp_read_barrier_depends();
         next = bh->next;
         if (!bh->deleted && bh->scheduled) {
             bh->scheduled = 0;
+            /* Paired with the write barrier in bh_schedule to ensure that
+             * idle and the callback's data are read after bh->scheduled.
+             */
+            smp_rmb();
             if (!bh->idle)
                 ret = 1;
             bh->idle = 0;
@@ -75,6 +86,7 @@ int aio_bh_poll(AioContext *ctx)
 
     /* remove deleted bhs */
     if (!ctx->walking_bh) {
+        qemu_mutex_lock(&ctx->bh_lock);
         bhp = &ctx->first_bh;
         while (*bhp) {
             bh = *bhp;
@@ -85,6 +97,7 @@ int aio_bh_poll(AioContext *ctx)
                 bhp = &bh->next;
             }
         }
+        qemu_mutex_unlock(&ctx->bh_lock);
     }
 
     return ret;
@@ -94,24 +107,38 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
-    bh->scheduled = 1;
     bh->idle = 1;
+    /* Make sure that idle and any writes needed by the callback are done
+     * before those locations are read in aio_bh_poll().
+     */
+    smp_wmb();
+    bh->scheduled = 1;
 }
 
 void qemu_bh_schedule(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
-    bh->scheduled = 1;
     bh->idle = 0;
+    /* Make sure that idle and any writes needed by the callback are done
+     * before those locations are read in aio_bh_poll().
+     */
+    smp_wmb();
+    bh->scheduled = 1;
     aio_notify(bh->ctx);
 }
 
+
+/* This function is async.
+ */
 void qemu_bh_cancel(QEMUBH *bh)
 {
     bh->scheduled = 0;
 }
 
+/* This function is async. The bottom half will actually be deleted
+ * later, by aio_bh_poll().
+ */
 void qemu_bh_delete(QEMUBH *bh)
 {
     bh->scheduled = 0;
@@ -211,6 +238,7 @@ AioContext *aio_context_new(void)
     ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
     ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
     ctx->thread_pool = NULL;
+    qemu_mutex_init(&ctx->bh_lock);
     event_notifier_init(&ctx->notifier, false);
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..cc77771 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -17,6 +17,7 @@
 #include "qemu-common.h"
 #include "qemu/queue.h"
 #include "qemu/event_notifier.h"
+#include "qemu/thread.h"
 
 typedef struct BlockDriverAIOCB BlockDriverAIOCB;
 typedef void BlockDriverCompletionFunc(void *opaque, int ret);
@@ -53,6 +54,8 @@ typedef struct AioContext {
      */
     int walking_handlers;
 
+    /* lock to protect between bh's adders and deleter */
+    QemuMutex bh_lock;
     /* Anchor of the list of Bottom Halves belonging to the context */
     struct QEMUBH *first_bh;
 
@@ -127,6 +130,8 @@ void aio_notify(AioContext *ctx);
  * aio_bh_poll: Poll bottom halves for an AioContext.
  *
  * These are internal functions used by the QEMU main loop.
+ * Note that aio_bh_poll() cannot be called concurrently
+ * for the same AioContext.
  */
 int aio_bh_poll(AioContext *ctx);
 
@@ -163,6 +168,8 @@ void qemu_bh_cancel(QEMUBH *bh);
  * Deleting a bottom half frees the memory that was allocated for it by
  * qemu_bh_new.  It also implies canceling the bottom half if it was
  * scheduled.
+ * This function is async. The bottom half will actually be deleted
+ * later, by aio_bh_poll().
  *
  * @bh: The bottom half to be deleted.
  */
-- 
1.8.1.4


* Re: [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
  2013-07-07 10:00 [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant Liu Ping Fan
@ 2013-07-07 12:31 ` Andreas Färber
  2013-07-08  1:58   ` liu ping fan
  2013-07-15  9:28 ` Stefan Hajnoczi
  1 sibling, 1 reply; 11+ messages in thread
From: Andreas Färber @ 2013-07-07 12:31 UTC (permalink / raw)
  To: Liu Ping Fan; +Cc: Kevin Wolf, Paolo Bonzini, qemu-devel, Stefan Hajnoczi

On 07.07.2013 12:00, Liu Ping Fan wrote:
> From: Liu Ping Fan <qemulist@gmail.com>
> 
> BH will be used outside the big lock, so introduce a lock to protect
> the writers, i.e., bh's adders and deleters. The lock only affects
> the writers; bh's callback does not take this extra lock. Note that
> for the same AioContext, aio_bh_poll() cannot run in parallel yet.
> 
> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> Repost it, and thanks Paolo for having sent pull request for the
> atomics header that this patch depends on.

Unlike the previous v5, From and Signed-off-by differ - intentional?
Could probably be fixed up by Kevin if not.

Regards,
Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


* Re: [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
  2013-07-07 12:31 ` Andreas Färber
@ 2013-07-08  1:58   ` liu ping fan
  0 siblings, 0 replies; 11+ messages in thread
From: liu ping fan @ 2013-07-08  1:58 UTC (permalink / raw)
  To: Andreas Färber
  Cc: Kevin Wolf, Paolo Bonzini, qemu-devel, Stefan Hajnoczi

On Sun, Jul 7, 2013 at 8:31 PM, Andreas Färber <afaerber@suse.de> wrote:
> On 07.07.2013 12:00, Liu Ping Fan wrote:
>> From: Liu Ping Fan <qemulist@gmail.com>
>>
>> BH will be used outside the big lock, so introduce a lock to protect
>> the writers, i.e., bh's adders and deleters. The lock only affects
>> the writers; bh's callback does not take this extra lock. Note that
>> for the same AioContext, aio_bh_poll() cannot run in parallel yet.
>>
>> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>> ---
>> Repost it, and thanks Paolo for having sent pull request for the
>> atomics header that this patch depends on.
>
> Unlike the previous v5, From and Signed-off-by differ - intentional?
> Could probably be fixed up by Kevin if not.
>
I moved the extra notes after the scissors line; they are just a
reminder about this patch's dependency.

Regards,
Pingfan

> Regards,
> Andreas
>
> --
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


* Re: [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant
  2013-07-07 10:00 [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant Liu Ping Fan
  2013-07-07 12:31 ` Andreas Färber
@ 2013-07-15  9:28 ` Stefan Hajnoczi
  1 sibling, 0 replies; 11+ messages in thread
From: Stefan Hajnoczi @ 2013-07-15  9:28 UTC (permalink / raw)
  To: Liu Ping Fan; +Cc: Kevin Wolf, Paolo Bonzini, qemu-devel, Stefan Hajnoczi

On Sun, Jul 07, 2013 at 06:00:17PM +0800, Liu Ping Fan wrote:
> @@ -211,6 +238,7 @@ AioContext *aio_context_new(void)
>      ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
>      ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
>      ctx->thread_pool = NULL;
> +    qemu_mutex_init(&ctx->bh_lock);
>      event_notifier_init(&ctx->notifier, false);
>      aio_set_event_notifier(ctx, &ctx->notifier, 
>                             (EventNotifierHandler *)

The mutex should be destroyed in aio_ctx_finalize().
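A minimal sketch of the init/destroy pairing Stefan is asking for, using
pthreads as a stand-in for QemuMutex (Ctx, ctx_new and ctx_finalize are
illustrative names, not QEMU's actual functions):

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct Ctx {
    pthread_mutex_t bh_lock;
} Ctx;

static Ctx *ctx_new(void)
{
    Ctx *ctx = malloc(sizeof(*ctx));
    pthread_mutex_init(&ctx->bh_lock, NULL);   /* mirrors qemu_mutex_init */
    return ctx;
}

static void ctx_finalize(Ctx *ctx)
{
    /* Every init needs a matching destroy, or the lock's resources leak. */
    pthread_mutex_destroy(&ctx->bh_lock);
    free(ctx);
}
```

In the patch this means adding a qemu_mutex_destroy(&ctx->bh_lock) call to
the GSource finalize callback that tears down the AioContext.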


end of thread, other threads:[~2013-07-15  9:28 UTC | newest]

Thread overview: 11+ messages
2013-07-07 10:00 [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant Liu Ping Fan
2013-07-07 12:31 ` Andreas Färber
2013-07-08  1:58   ` liu ping fan
2013-07-15  9:28 ` Stefan Hajnoczi
  -- strict thread matches above, loose matches on Subject: below --
2013-06-25 17:26 Liu Ping Fan
2013-06-25  8:45 ` Stefan Hajnoczi
2013-06-25  9:40 ` Kevin Wolf
2013-06-25 10:09   ` Paolo Bonzini
2013-06-26  9:46   ` liu ping fan
2013-07-05 13:46     ` Stefan Hajnoczi
2013-07-05 14:12       ` Paolo Bonzini
