From: Tim Deegan <tim@xen.org>
To: Robert Phillips <robert.phillips@citrix.com>
Cc: Kouya Shimura <kouya@jp.fujitsu.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH] x86/hap: fix race condition between ENABLE_LOGDIRTY and track_dirty_vram hypercall
Date: Thu, 6 Dec 2012 09:32:06 +0000
Message-ID: <20121206093206.GA82725@ocelot.phlegethon.org>
In-Reply-To: <048EAD622912254A9DEA24C1734613C18C87CC821E@FTLPMAILBOX02.citrite.net>

Hi,

At 12:59 -0500 on 03 Dec (1354539567), Robert Phillips wrote:
> > Robert, in your patch you do wrap this all in the paging_lock, but then
> > unlock to call various enable and disable routines.  Is there a version
> > of this race condition there, where some other CPU might call
> > LOG_DIRTY_ENABLE while you've temporarily dropped the lock?
> 
> My proposed patch does not modify the problematic locking code, so,
> unfortunately, it preserves the race condition that Kouya Shimura
> has discovered.
> 
> I question whether his proposed patch would be suitable for the 
> multiple frame buffer situation that my proposed patch addresses.
> It is possible that a guest might be updating its frame buffers when 
> live migration starts, and the same race would result.
> 
> I think the domain.arch.paging.log_dirty function pointers are problematic.
> They are modified and executed without benefit of locking.
> 
> I am uncomfortable with adding another lock.
> 
> I will look at updating my patch to avoid the race and will (hopefully) 
> avoid adding another lock.

Thanks.  I think the paging_lock can probably cover everything we need
here.  These are toolstack operations and should be fairly rare, and HAP
can do most of its work without the paging_lock.
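
(Roughly the shape I have in mind, purely as an illustration: the mode
check and the state change sit in one paging_lock critical section, so
a concurrent XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY can't slip in between
them.  Names are taken from the existing code; the details aren't
worked out.)

    paging_lock(d);

    /* Decide and record the new tracking state while holding the lock,
     * so another CPU can't switch log-dirty mode under our feet. */
    if ( !d->arch.hvm_domain.dirty_vram && !paging_mode_log_dirty(d) )
    {
        /* ... set up / tear down the VRAM tracking state here ... */
    }

    paging_unlock(d);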

Also, in the next version can you please update this section:

+int hap_track_dirty_vram(struct domain *d,
+                         unsigned long begin_pfn,
+                         unsigned long nr,
+                         XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
+{
+    long rc = 0;
+    dv_dirty_vram_t *dirty_vram;
+       
+    paging_lock(d);
+    dirty_vram = d->arch.hvm_domain.dirty_vram;
+    if ( nr )
+    {
+        dv_range_t *range = NULL;
+        int size = ( nr + BITS_PER_LONG - 1 ) & ~( BITS_PER_LONG - 1 );
+        uint8_t dirty_bitmap[size];

not to allocate a guest-specified amount of stack memory.  This is one
of the things recently found and fixed in the existing code as XSA-27.
http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg/rev/53ef1f35a0f8
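
For illustration only, the kind of shape I mean, assuming xzalloc_bytes(),
xfree() and copy_to_guest() are usable in this context (an untested
sketch, not a worked-out patch):

    uint8_t *dirty_bitmap = NULL;
    /* One bit per pfn, rounded up to whole bytes. */
    unsigned int size = (nr + 7) / 8;
    long rc = 0;

    /* Heap-allocate the bitmap, outside the paging_lock, rather than
     * putting a guest-sized array on the hypervisor stack. */
    dirty_bitmap = xzalloc_bytes(size);
    if ( !dirty_bitmap )
        return -ENOMEM;

    paging_lock(d);
    /* ... look up / update the VRAM range and fill dirty_bitmap ... */
    paging_unlock(d);

    if ( rc == 0 && copy_to_guest(guest_dirty_bitmap, dirty_bitmap, size) )
        rc = -EFAULT;

    xfree(dirty_bitmap);
    return rc;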

Cheers,

Tim.


Thread overview: 5+ messages
2012-11-29  7:02 [PATCH] x86/hap: fix race condition between ENABLE_LOGDIRTY and track_dirty_vram hypercall Kouya Shimura
2012-11-29 15:40 ` Tim Deegan
2012-12-03 17:59   ` Robert Phillips
2012-12-06  9:32     ` Tim Deegan [this message]
2012-12-06 10:36       ` Robert Phillips
