From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Yu Zhang' <yu.c.zhang@linux.intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"zhiyuan.lv@intel.com" <zhiyuan.lv@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v10 6/6] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.
Date: Mon, 3 Apr 2017 08:16:20 +0000
Message-ID: <73e687b2361b41218c9b1fe09d3b2c62@AMSPEX02CL03.citrite.net>
In-Reply-To: <1491135867-623-7-git-send-email-yu.c.zhang@linux.intel.com>

> -----Original Message-----
> From: Yu Zhang [mailto:yu.c.zhang@linux.intel.com]
> Sent: 02 April 2017 13:24
> To: xen-devel@lists.xen.org
> Cc: zhiyuan.lv@intel.com; Paul Durrant <Paul.Durrant@citrix.com>; Jan
> Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>
> Subject: [PATCH v10 6/6] x86/ioreq server: Synchronously reset outstanding
> p2m_ioreq_server entries when an ioreq server unmaps.
> 
> After an ioreq server has unmapped, the remaining p2m_ioreq_server
> entries need to be reset back to p2m_ram_rw. This patch does this
> synchronously by iterating over the p2m table.
> 
> The synchronous reset is necessary because we need to guarantee the
> p2m table is clean before another ioreq server is mapped. Since
> sweeping the p2m table can be time consuming, it is done with a
> hypercall continuation.
> 
> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
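
For readers new to this scheme: rather than parking per-vCPU state, the
continuation keeps its progress in the IN/OUT 'opaque' field of the op
structure itself. In outline (identifiers below are illustrative only;
the real code is in the dm.c hunk further down):

    /* Sketch of an opaque-cursor DMOP continuation; names are made up. */
    unsigned long cursor = op->opaque;            /* 0 on first invocation */

    while ( cursor <= last_unit_of_work )
    {
        process_one_batch(d, cursor, BATCH_SIZE); /* bounded chunk of work */
        cursor += BATCH_SIZE;

        /* More left and we have run long enough? Save the cursor and bail. */
        if ( cursor <= last_unit_of_work && hypercall_preempt_check() )
        {
            op->opaque = cursor;
            return -ERESTART;     /* Xen re-issues the hypercall for us */
        }
    }
    return 0;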

> ---
> Cc: Paul Durrant <paul.durrant@citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> 
> changes in v3:
>   - According to comments from Paul: use max_nr, instead of
>     last_gfn, for p2m_finish_type_change().
>   - According to comments from Jan: use gfn_t as type of
>     first_gfn in p2m_finish_type_change().
>   - According to comments from Jan: simplify the if condition
>     before using p2m_finish_type_change().
> 
> changes in v2:
>   - According to comments from Jan and Andrew: do not use the
>     HVMOP-type hypercall continuation method. Instead, add an
>     opaque field to xen_dm_op_map_mem_type_to_ioreq_server to
>     store the gfn.
>   - According to comments from Jan: change routine's comments
>     and name of parameters of p2m_finish_type_change().
> 
> changes in v1:
>   - This patch is split out from patch 4 of the last version.
>   - According to comments from Jan: update gfn_start when using a
>     hypercall continuation to reset the p2m type.
>   - According to comments from Jan: use min() to compare gfn_end
>     and the max mapped pfn in p2m_finish_type_change().
> ---
>  xen/arch/x86/hvm/dm.c     | 41 ++++++++++++++++++++++++++++++++++++++---
>  xen/arch/x86/mm/p2m.c     | 29 +++++++++++++++++++++++++++++
>  xen/include/asm-x86/p2m.h |  6 ++++++
>  3 files changed, 73 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index 7e0da81..d72b7bd 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -384,15 +384,50 @@ static int dm_op(domid_t domid,
> 
>      case XEN_DMOP_map_mem_type_to_ioreq_server:
>      {
> -        const struct xen_dm_op_map_mem_type_to_ioreq_server *data =
> +        struct xen_dm_op_map_mem_type_to_ioreq_server *data =
>              &op.u.map_mem_type_to_ioreq_server;
> +        unsigned long first_gfn = data->opaque;
> +
> +        const_op = false;
> 
>          rc = -EOPNOTSUPP;
>          if ( !hap_enabled(d) )
>              break;
> 
> -        rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
> -                                              data->type, data->flags);
> +        if ( first_gfn == 0 )
> +            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
> +                                                  data->type, data->flags);
> +        else
> +            rc = 0;
> +
> +        /*
> +         * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
> +         * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
> +         */
> +        if ( rc == 0 && data->flags == 0 )
> +        {
> +            struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +            while ( read_atomic(&p2m->ioreq.entry_count) &&
> +                    first_gfn <= p2m->max_mapped_pfn )
> +            {
> +                /* Iterate p2m table for 256 gfns each time. */
> +                p2m_finish_type_change(d, _gfn(first_gfn), 256,
> +                                       p2m_ioreq_server, p2m_ram_rw);
> +
> +                first_gfn += 256;
> +
> +                /* Check for continuation if it's not the last iteration. */
> +                if ( first_gfn <= p2m->max_mapped_pfn &&
> +                     hypercall_preempt_check() )
> +                {
> +                    rc = -ERESTART;
> +                    data->opaque = first_gfn;
> +                    break;
> +                }
> +            }
> +        }
> +
>          break;
>      }
> 
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 7a1bdd8..0daaa86 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1031,6 +1031,35 @@ void p2m_change_type_range(struct domain *d,
>      p2m_unlock(p2m);
>  }
> 
> +/* Synchronously modify the p2m type for a range of gfns from ot to nt. */
> +void p2m_finish_type_change(struct domain *d,
> +                            gfn_t first_gfn, unsigned long max_nr,
> +                            p2m_type_t ot, p2m_type_t nt)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_type_t t;
> +    unsigned long gfn = gfn_x(first_gfn);
> +    unsigned long last_gfn = gfn + max_nr - 1;
> +
> +    ASSERT(ot != nt);
> +    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
> +
> +    p2m_lock(p2m);
> +
> +    last_gfn = min(last_gfn, p2m->max_mapped_pfn);
> +    while ( gfn <= last_gfn )
> +    {
> +        get_gfn_query_unlocked(d, gfn, &t);
> +
> +        if ( t == ot )
> +            p2m_change_type_one(d, gfn, t, nt);
> +
> +        gfn++;
> +    }
> +
> +    p2m_unlock(p2m);
> +}
> +
>  /*
>   * Returns:
>   *    0              for success
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index e7e390d..0e670af 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -611,6 +611,12 @@ void p2m_change_type_range(struct domain *d,
>  int p2m_change_type_one(struct domain *d, unsigned long gfn,
>                          p2m_type_t ot, p2m_type_t nt);
> 
> +/* Synchronously change the p2m type for a range of gfns */
> +void p2m_finish_type_change(struct domain *d,
> +                            gfn_t first_gfn,
> +                            unsigned long max_nr,
> +                            p2m_type_t ot, p2m_type_t nt);
> +
>  /* Report a change affecting memory types. */
>  void p2m_memory_type_changed(struct domain *d);
> 
> --
> 1.9.1
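
As a usage note, the continuation is invisible to the device model: the
caller fills in the op once, with opaque set to zero, and Xen resumes the
sweep on its own. The payload this relies on looks roughly as follows;
only the field names (id, type, flags, opaque) are taken from the hunks
above, while the exact widths shown here are my recollection of the
public header and may differ:

    /* Rough sketch of the op payload; field widths are assumptions. */
    struct xen_dm_op_map_mem_type_to_ioreq_server {
        ioservid_t id;       /* IN - ioreq server id */
        uint16_t type;       /* IN - memory type to map (p2m_ioreq_server) */
        uint32_t flags;      /* IN - accesses to forward; 0 requests unmap */
        uint64_t opaque;     /* IN/OUT - continuation cursor; callers set 0 */
    };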



Thread overview: 46+ messages
2017-04-02 12:24 [PATCH v10 0/6] x86/ioreq server: Introduce HVMMEM_ioreq_server mem type Yu Zhang
2017-04-02 12:24 ` [PATCH v10 1/6] x86/ioreq server: Release the p2m lock after mmio is handled Yu Zhang
2017-04-02 12:24 ` [PATCH v10 2/6] x86/ioreq server: Add DMOP to map guest ram with p2m_ioreq_server to an ioreq server Yu Zhang
2017-04-03 14:21   ` Jan Beulich
2017-04-05 13:48   ` George Dunlap
2017-04-02 12:24 ` [PATCH v10 3/6] x86/ioreq server: Add device model wrappers for new DMOP Yu Zhang
2017-04-03  8:13   ` Paul Durrant
2017-04-03  9:28     ` Wei Liu
2017-04-05  6:53       ` Yu Zhang
2017-04-05  9:21         ` Jan Beulich
2017-04-05  9:22           ` Yu Zhang
2017-04-05  9:38             ` Jan Beulich
2017-04-05 10:08         ` Wei Liu
2017-04-05 10:20           ` Wei Liu
2017-04-05 10:21             ` Yu Zhang
2017-04-05 10:21           ` Yu Zhang
2017-04-05 10:33             ` Wei Liu
2017-04-05 10:26               ` Yu Zhang
2017-04-05 10:46                 ` Jan Beulich
2017-04-05 10:50                   ` Yu Zhang
2017-04-02 12:24 ` [PATCH v10 4/6] x86/ioreq server: Handle read-modify-write cases for p2m_ioreq_server pages Yu Zhang
2017-04-02 12:24 ` [PATCH v10 5/6] x86/ioreq server: Asynchronously reset outstanding p2m_ioreq_server entries Yu Zhang
2017-04-03 14:36   ` Jan Beulich
2017-04-03 14:38     ` Jan Beulich
2017-04-05  7:18     ` Yu Zhang
2017-04-05  8:11       ` Jan Beulich
2017-04-05 14:41   ` George Dunlap
2017-04-05 16:22     ` Yu Zhang
2017-04-05 16:35       ` George Dunlap
2017-04-05 16:32         ` Yu Zhang
2017-04-05 17:01           ` George Dunlap
2017-04-05 17:18             ` Yu Zhang
2017-04-05 17:28               ` Yu Zhang
2017-04-05 18:02                 ` Yu Zhang
2017-04-05 18:04                   ` Yu Zhang
2017-04-06  7:48                     ` Jan Beulich
2017-04-06  8:27                       ` Yu Zhang
2017-04-06  8:44                         ` Jan Beulich
2017-04-06  7:43                 ` Jan Beulich
2017-04-05 17:29               ` George Dunlap
2017-04-02 12:24 ` [PATCH v10 6/6] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps Yu Zhang
2017-04-03  8:16   ` Paul Durrant [this message]
2017-04-03 15:23   ` Jan Beulich
2017-04-05  9:11     ` Yu Zhang
2017-04-05  9:41       ` Jan Beulich
2017-04-05 14:46   ` George Dunlap
