From: Fam Zheng <famz@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH v3 1/4] exec: Atomic access to bounce buffer
Date: Mon, 16 Mar 2015 15:42:11 +0800
Message-ID: <20150316074211.GC15098@ad.nay.redhat.com>
In-Reply-To: <55068687.1020304@redhat.com>

On Mon, 03/16 08:30, Paolo Bonzini wrote:
> 
> 
> On 16/03/2015 06:31, Fam Zheng wrote:
> > There could be a race condition when two threads call
> > address_space_map concurrently and both want to use the bounce buffer.
> > 
> > Add an in_use flag in BounceBuffer to synchronize access to it.
> > 
> > Signed-off-by: Fam Zheng <famz@redhat.com>
> > ---
> >  exec.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/exec.c b/exec.c
> > index e97071a..4080044 100644
> > --- a/exec.c
> > +++ b/exec.c
> > @@ -2483,6 +2483,7 @@ typedef struct {
> >      void *buffer;
> >      hwaddr addr;
> >      hwaddr len;
> > +    bool in_use;
> >  } BounceBuffer;
> >  
> >  static BounceBuffer bounce;
> > @@ -2571,9 +2572,10 @@ void *address_space_map(AddressSpace *as,
> >      l = len;
> >      mr = address_space_translate(as, addr, &xlat, &l, is_write);
> >      if (!memory_access_is_direct(mr, is_write)) {
> > -        if (bounce.buffer) {
> > +        if (atomic_xchg(&bounce.in_use, true)) {
> >              return NULL;
> >          }
> > +        smp_mb();
> 
> smp_mb() not needed.

OK, I was confused by the Linux documentation on atomic_xchg. Now that I've
looked at the right places, I see it is not needed. Thanks,

Fam

> 
> Ok with this change.
> 
> Paolo
> 
> >          /* Avoid unbounded allocations */
> >          l = MIN(l, TARGET_PAGE_SIZE);
> >          bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
> > @@ -2641,6 +2643,7 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
> >      qemu_vfree(bounce.buffer);
> >      bounce.buffer = NULL;
> >      memory_region_unref(bounce.mr);
> > +    atomic_mb_set(&bounce.in_use, false);
> >      cpu_notify_map_clients();
> >  }
> >  
> > 
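
For context on the barrier discussion above: QEMU's docs/atomics.txt documents
atomic_xchg() as a sequentially consistent read-modify-write operation, i.e.
it already implies a full memory barrier, which is why the explicit smp_mb()
is redundant. Below is a minimal standalone sketch of the same claim/release
pattern using plain C11 atomics rather than QEMU's macros; the
bounce_try_claim/bounce_release names are illustrative, not part of the patch:

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool in_use;

    /* Claim the buffer; returns false if another thread already holds it.
     * atomic_exchange() defaults to memory_order_seq_cst, so no extra
     * barrier is needed after a successful claim. */
    static bool bounce_try_claim(void)
    {
        return !atomic_exchange(&in_use, true);
    }

    /* Release with a sequentially consistent store, analogous to
     * atomic_mb_set(&bounce.in_use, false) in the patch. */
    static void bounce_release(void)
    {
        atomic_store(&in_use, false);
    }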

Thread overview: 12+ messages
2015-03-16  5:31 [Qemu-devel] [PATCH v3 0/4] exec: Make bounce buffer thread safe Fam Zheng
2015-03-16  5:31 ` [Qemu-devel] [PATCH v3 1/4] exec: Atomic access to bounce buffer Fam Zheng
2015-03-16  7:30   ` Paolo Bonzini
2015-03-16  7:42     ` Fam Zheng [this message]
2015-03-16  5:31 ` [Qemu-devel] [PATCH v3 2/4] exec: Protect map_client_list with mutex Fam Zheng
2015-03-16  7:33   ` Paolo Bonzini
2015-03-16  7:55     ` Fam Zheng
2015-03-16  5:31 ` [Qemu-devel] [PATCH v3 3/4] exec: Notify cpu_register_map_client caller if the bounce buffer is available Fam Zheng
2015-03-16  7:34   ` Paolo Bonzini
2015-03-16  7:44     ` Fam Zheng
2015-03-16  5:31 ` [Qemu-devel] [PATCH v3 4/4] dma-helpers: Fix race condition of continue_after_map_failure and dma_aio_cancel Fam Zheng
2015-03-16  7:36   ` Paolo Bonzini
