xen-devel.lists.xenproject.org archive mirror
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Andres Lagar Cavilla <andres@lagarcavilla.org>
Cc: Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH 2/2] Xen/mem_event: Prevent underflow of vcpu pause counts
Date: Thu, 17 Jul 2014 19:55:05 +0100	[thread overview]
Message-ID: <53C81C09.6070802@citrix.com> (raw)
In-Reply-To: <CADzFZPtK090=dPM7ZTbw4KFLV1PDYkmFUKQVEpFKhMDkoOcXJQ@mail.gmail.com>



On 17/07/14 19:38, Andres Lagar Cavilla wrote:
> On Thu, Jul 17, 2014 at 9:10 AM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>
>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>     CC: Jan Beulich <JBeulich@suse.com>
>     CC: Tim Deegan <tim@xen.org>
>     CC: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>
> Not so sure about this one, see below. 
>
>     ---
>      xen/arch/x86/hvm/hvm.c          |    2 +-
>      xen/arch/x86/mm/mem_event.c     |   14 ++++++++++++++
>      xen/arch/x86/mm/mem_sharing.c   |    4 ++--
>      xen/arch/x86/mm/p2m.c           |    8 ++++----
>      xen/include/asm-x86/mem_event.h |    3 +++
>      xen/include/xen/sched.h         |    2 ++
>      6 files changed, 26 insertions(+), 7 deletions(-)
>
>     diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>     index ef2411c..efd79b8 100644
>     --- a/xen/arch/x86/hvm/hvm.c
>     +++ b/xen/arch/x86/hvm/hvm.c
>     @@ -6113,7 +6113,7 @@ static int hvm_memory_event_traps(long p,
>     uint32_t reason,
>          if ( (p & HVMPME_MODE_MASK) == HVMPME_mode_sync )
>          {
>              req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>     -        vcpu_pause_nosync(v);
>     +        mem_event_vcpu_pause(v);
>          }
>
>          req.gfn = value;
>     diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
>     index 40ae841..ef5197c 100644
>     --- a/xen/arch/x86/mm/mem_event.c
>     +++ b/xen/arch/x86/mm/mem_event.c
>     @@ -663,6 +663,20 @@ int mem_event_domctl(struct domain *d,
>     xen_domctl_mem_event_op_t *mec,
>          return rc;
>      }
>
>     +void mem_event_vcpu_pause(struct vcpu *v)
>     +{
>     +    ASSERT(v == current);
>     +
>     +    /* Ensure we don't try to repeatedly pause the same vcpu. */
>     +    BUG_ON(test_and_set_bool(v->paused_for_mem_event) != 0);
>
> This is a problem. It relies on a vcpu being able to cause only a single
> mem event during a vmexit. I don't think that can be guaranteed. While
> I can't pinpoint the exact conversation from years ago, it is not hard
> to imagine scenarios in which an MMIO emulation can touch multiple
> pages. So I don't think we can BUG_ON at all.

Hmm - would the emulator not fail entirely after the first failure?

Still - it is probably better to accept multiple pause requests.

>     +    vcpu_pause_nosync(v);
>     +}
>     +
>     +void mem_event_vcpu_unpause(struct vcpu *v)
>     +{
>     +    if ( test_and_clear_bool(v->paused_for_mem_event) )
>
> And now that we consider more than one mem event piling up to pause a
> vcpu, this has to become an atomic counter, which unpauses on zero,
> and takes care of underflow.

I shall spin a v2 with this problem addressed.  I suspect it will look
curiously similar to the DOMCTL_{,un}pausedomain refcounting fix which
suffered from exactly the same problem wrt underflow/overflow of the
pause count.
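For illustration only (this is not the actual v2/v3 patch; the names and the
stand-in scheduler functions are assumptions), the counter-based scheme Andres
describes might be sketched in C11 like this:

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical sketch: a per-vcpu count of outstanding mem_event pauses
 * replaces the single paused_for_mem_event boolean.  Each pause takes one
 * reference and each unpause drops one; a spurious unpause from a buggy
 * toolstack is ignored instead of underflowing the count. */

struct vcpu {
    atomic_int mem_event_pause_count;  /* outstanding mem_event pauses */
    atomic_int sched_pause_count;      /* stand-in for the scheduler's count */
};

static void vcpu_pause_nosync(struct vcpu *v)
{
    atomic_fetch_add(&v->sched_pause_count, 1);
}

static void vcpu_unpause(struct vcpu *v)
{
    atomic_fetch_sub(&v->sched_pause_count, 1);
}

/* Multiple mem events in one vmexit are fine: each just bumps the count. */
static void mem_event_vcpu_pause(struct vcpu *v)
{
    atomic_fetch_add(&v->mem_event_pause_count, 1);
    vcpu_pause_nosync(v);
}

/* Only drop a reference we actually hold, so extra unpause requests
 * cannot underflow the scheduler's pause count. */
static void mem_event_vcpu_unpause(struct vcpu *v)
{
    int old = atomic_load(&v->mem_event_pause_count);
    do {
        if ( old == 0 )
            return;  /* spurious unpause: ignore */
    } while ( !atomic_compare_exchange_weak(&v->mem_event_pause_count,
                                            &old, old - 1) );
    vcpu_unpause(v);
}
```

The compare-exchange loop makes the check-then-decrement atomic, so
concurrent unpauses cannot race the count below zero.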

~Andrew

>
> This is great and thanks for catching these things.
> Andres 
>
>     +        vcpu_unpause(v);
>     +}
>
>      /*
>       * Local variables:
>     diff --git a/xen/arch/x86/mm/mem_sharing.c
>     b/xen/arch/x86/mm/mem_sharing.c
>     index ec99266..79188b9 100644
>     --- a/xen/arch/x86/mm/mem_sharing.c
>     +++ b/xen/arch/x86/mm/mem_sharing.c
>     @@ -568,7 +568,7 @@ int mem_sharing_notify_enomem(struct domain
>     *d, unsigned long gfn,
>          if ( v->domain == d )
>          {
>              req.flags = MEM_EVENT_FLAG_VCPU_PAUSED;
>     -        vcpu_pause_nosync(v);
>     +        mem_event_vcpu_pause(v);
>          }
>
>          req.p2mt = p2m_ram_shared;
>     @@ -609,7 +609,7 @@ int mem_sharing_sharing_resume(struct domain *d)
>
>              /* Unpause domain/vcpu */
>              if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
>     -            vcpu_unpause(v);
>     +            mem_event_vcpu_unpause(v);
>          }
>
>          return 0;
>     diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>     index f213a39..2c7bc0f 100644
>     --- a/xen/arch/x86/mm/p2m.c
>     +++ b/xen/arch/x86/mm/p2m.c
>     @@ -1158,7 +1158,7 @@ void p2m_mem_paging_populate(struct domain
>     *d, unsigned long gfn)
>          /* Pause domain if request came from guest and gfn has paging
>     type */
>          if ( p2m_is_paging(p2mt) && v->domain == d )
>          {
>     -        vcpu_pause_nosync(v);
>     +        mem_event_vcpu_pause(v);
>              req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>          }
>          /* No need to inform pager if the gfn is not in the page-out
>     path */
>     @@ -1319,7 +1319,7 @@ void p2m_mem_paging_resume(struct domain *d)
>              }
>              /* Unpause domain */
>              if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
>     -            vcpu_unpause(v);
>     +            mem_event_vcpu_unpause(v);
>          }
>      }
>
>     @@ -1414,7 +1414,7 @@ bool_t p2m_mem_access_check(paddr_t gpa,
>     bool_t gla_valid, unsigned long gla,
>
>          /* Pause the current VCPU */
>          if ( p2ma != p2m_access_n2rwx )
>     -        vcpu_pause_nosync(v);
>     +        mem_event_vcpu_pause(v);
>
>          /* VCPU may be paused, return whether we promoted
>     automatically */
>          return (p2ma == p2m_access_n2rwx);
>     @@ -1440,7 +1440,7 @@ void p2m_mem_access_resume(struct domain *d)
>
>              /* Unpause domain */
>              if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
>     -            vcpu_unpause(v);
>     +            mem_event_vcpu_unpause(v);
>          }
>      }
>
>     diff --git a/xen/include/asm-x86/mem_event.h
>     b/xen/include/asm-x86/mem_event.h
>     index 045ef9b..ed4481a 100644
>     --- a/xen/include/asm-x86/mem_event.h
>     +++ b/xen/include/asm-x86/mem_event.h
>     @@ -66,6 +66,9 @@ int do_mem_event_op(int op, uint32_t domain,
>     void *arg);
>      int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t
>     *mec,
>                           XEN_GUEST_HANDLE_PARAM(void) u_domctl);
>
>     +void mem_event_vcpu_pause(struct vcpu *v);
>     +void mem_event_vcpu_unpause(struct vcpu *v);
>     +
>      #endif /* __MEM_EVENT_H__ */
>
>
>     diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>     index 2f876f5..ea33b7d 100644
>     --- a/xen/include/xen/sched.h
>     +++ b/xen/include/xen/sched.h
>     @@ -199,6 +199,8 @@ struct vcpu
>          bool_t           paused_for_shutdown;
>          /* VCPU need affinity restored */
>          bool_t           affinity_broken;
>     +    /* VCPU paused for mem event reply. */
>     +    bool_t           paused_for_mem_event;
>
>
>          /*
>     --
>     1.7.10.4
>
>



Thread overview: 26+ messages
2014-07-17 13:10 [PATCH 0/2] Xen/mem_event: Do not rely on the toolstack being bug-free Andrew Cooper
2014-07-17 13:10 ` [PATCH 1/2] Xen/mem_event: Validate the response vcpu_id before acting on it Andrew Cooper
2014-07-17 18:33   ` Andres Lagar Cavilla
2014-07-17 13:10 ` [PATCH 2/2] Xen/mem_event: Prevent underflow of vcpu pause counts Andrew Cooper
2014-07-17 18:38   ` Andres Lagar Cavilla
     [not found]     ` <CAGU+auv8zMj+xqU8KhbQSZXM+J+HovjV=TZMab5Z+nzNCvpjaQ@mail.gmail.com>
2014-07-17 18:51       ` Aravindh Puthiyaparambil (aravindp)
2014-07-17 18:54         ` Andres Lagar Cavilla
2014-07-17 18:57           ` Aravindh Puthiyaparambil (aravindp)
2014-07-17 19:07           ` Andrew Cooper
2014-07-17 19:18             ` Aravindh Puthiyaparambil (aravindp)
2014-07-17 18:55     ` Andrew Cooper [this message]
2014-07-18  9:42     ` Ian Campbell
2014-07-18 10:41   ` [PATCH v2 " Andrew Cooper
2014-07-18 13:47     ` Razvan Cojocaru
2014-07-18 13:53     ` [PATCH v3 " Andrew Cooper
2014-07-18 16:37       ` Andres Lagar Cavilla
2014-07-18 16:44         ` Andrew Cooper
2014-07-18 17:29       ` Aravindh Puthiyaparambil (aravindp)
2014-07-17 13:23 ` [PATCH 0/2] Xen/mem_event: Do not rely on the toolstack being bug-free Tim Deegan
2014-07-17 14:40 ` Razvan Cojocaru
2014-07-17 14:46   ` Andrew Cooper
2014-07-17 14:50     ` Razvan Cojocaru
     [not found] ` <CAGU+auuzOr5HSErrxmyhtxtP74gn=0L5TAZGR8FWBF6MeGFxUA@mail.gmail.com>
2014-07-17 19:01   ` Aravindh Puthiyaparambil (aravindp)
2014-07-17 20:26     ` Razvan Cojocaru
2014-07-17 22:17       ` Tamas Lengyel
2014-07-17 22:42         ` Andrew Cooper
