From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Paul Durrant <paul.durrant@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH v3 4/6] ioreq-server: on-demand creation of ioreq server
Date: Mon, 10 Mar 2014 17:46:02 +0000
Message-ID: <CAFLBxZbasSngp-UkO4FaVKCHKzg1ispRDaErLgc00pMUnL1DKA@mail.gmail.com>
In-Reply-To: <1394030881-5498-5-git-send-email-paul.durrant@citrix.com>
On Wed, Mar 5, 2014 at 2:47 PM, Paul Durrant <paul.durrant@citrix.com> wrote:
> This patch only creates the ioreq server when the legacy HVM parameters
> are touched by an emulator. It also lays some groundwork for supporting
> multiple IOREQ servers. For instance, it introduces ioreq server reference
> counting which is not strictly necessary at this stage but will become so
> when ioreq servers can be destroyed prior the domain dying.
>
> There is a significant change in the layout of the special pages reserved
> in xc_hvm_build_x86.c. This is so that we can 'grow' them downwards without
> moving pages such as the xenstore page when building a domain that can
> support more than one emulator.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> tools/libxc/xc_hvm_build_x86.c | 40 +++++--
> xen/arch/x86/hvm/hvm.c | 240 +++++++++++++++++++++++++---------------
> 2 files changed, 176 insertions(+), 104 deletions(-)
>
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index dd3b522..b65e702 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -41,13 +41,12 @@
> #define SPECIALPAGE_PAGING 0
> #define SPECIALPAGE_ACCESS 1
> #define SPECIALPAGE_SHARING 2
> -#define SPECIALPAGE_BUFIOREQ 3
> -#define SPECIALPAGE_XENSTORE 4
> -#define SPECIALPAGE_IOREQ 5
> -#define SPECIALPAGE_IDENT_PT 6
> -#define SPECIALPAGE_CONSOLE 7
> -#define NR_SPECIAL_PAGES 8
> -#define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
> +#define SPECIALPAGE_XENSTORE 3
> +#define SPECIALPAGE_IDENT_PT 4
> +#define SPECIALPAGE_CONSOLE 5
> +#define SPECIALPAGE_IOREQ 6
> +#define NR_SPECIAL_PAGES SPECIALPAGE_IOREQ + 2 /* ioreq server needs 2 pages */
> +#define special_pfn(x) (0xff000u - 1 - (x))
>
> #define VGA_HOLE_SIZE (0x20)
>
> @@ -114,7 +113,7 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
> /* Memory parameters. */
> hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
> hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
> - hvm_info->reserved_mem_pgstart = special_pfn(0);
> + hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES;
It might be better to #define this up above, just below special_pfn(). If we
do that, and make the change below, then the "direction of growth" will be
encoded in a single place.
>
> /* Finish with the checksum. */
> for ( i = 0, sum = 0; i < hvm_info->length; i++ )
> @@ -473,6 +472,23 @@ static int setup_guest(xc_interface *xch,
> munmap(hvm_info_page, PAGE_SIZE);
>
> /* Allocate and clear special pages. */
> +
> + DPRINTF("%d SPECIAL PAGES:\n", NR_SPECIAL_PAGES);
> + DPRINTF(" PAGING: %"PRI_xen_pfn"\n",
> + (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING));
> + DPRINTF(" ACCESS: %"PRI_xen_pfn"\n",
> + (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS));
> + DPRINTF(" SHARING: %"PRI_xen_pfn"\n",
> + (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING));
> + DPRINTF(" STORE: %"PRI_xen_pfn"\n",
> + (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE));
> + DPRINTF(" IDENT_PT: %"PRI_xen_pfn"\n",
> + (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT));
> + DPRINTF(" CONSOLE: %"PRI_xen_pfn"\n",
> + (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE));
> + DPRINTF(" IOREQ: %"PRI_xen_pfn"\n",
> + (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
> +
> for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
> {
> xen_pfn_t pfn = special_pfn(i);
> @@ -488,10 +504,6 @@ static int setup_guest(xc_interface *xch,
>
> xc_set_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,
> special_pfn(SPECIALPAGE_XENSTORE));
> - xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> - special_pfn(SPECIALPAGE_BUFIOREQ));
> - xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
> - special_pfn(SPECIALPAGE_IOREQ));
> xc_set_hvm_param(xch, dom, HVM_PARAM_CONSOLE_PFN,
> special_pfn(SPECIALPAGE_CONSOLE));
> xc_set_hvm_param(xch, dom, HVM_PARAM_PAGING_RING_PFN,
> @@ -500,6 +512,10 @@ static int setup_guest(xc_interface *xch,
> special_pfn(SPECIALPAGE_ACCESS));
> xc_set_hvm_param(xch, dom, HVM_PARAM_SHARING_RING_PFN,
> special_pfn(SPECIALPAGE_SHARING));
> + xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
> + special_pfn(SPECIALPAGE_IOREQ));
> + xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> + special_pfn(SPECIALPAGE_IOREQ) - 1);
If we say "special_pfn(SPECIALPAGE_IOREQ+1)", it doesn't assume that
things grow in a specific direction.
>
> /*
> * Identity-map page table is required for running with CR0.PG=0 when
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index bbf9577..22b2a2c 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -366,22 +366,9 @@ bool_t hvm_io_pending(struct vcpu *v)
> return ( p->state != STATE_IOREQ_NONE );
> }
>
> -void hvm_do_resume(struct vcpu *v)
> +static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
> {
> - struct domain *d = v->domain;
> - struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> - ioreq_t *p;
> -
> - check_wakeup_from_wait();
> -
> - if ( is_hvm_vcpu(v) )
> - pt_restore_timer(v);
> -
> - if ( !s )
> - goto check_inject_trap;
> -
> /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
> - p = get_ioreq(s, v->vcpu_id);
> while ( p->state != STATE_IOREQ_NONE )
> {
> switch ( p->state )
> @@ -397,12 +384,29 @@ void hvm_do_resume(struct vcpu *v)
> break;
> default:
> gdprintk(XENLOG_ERR, "Weird HVM iorequest state %d.\n", p->state);
> - domain_crash(v->domain);
> + domain_crash(d);
> return; /* bail */
> }
> }
> +}
> +
> +void hvm_do_resume(struct vcpu *v)
> +{
> + struct domain *d = v->domain;
> + struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +
> + check_wakeup_from_wait();
> +
> + if ( is_hvm_vcpu(v) )
> + pt_restore_timer(v);
> +
> + if ( s )
> + {
> + ioreq_t *p = get_ioreq(s, v->vcpu_id);
> +
> + hvm_wait_on_io(d, p);
> + }
>
> - check_inject_trap:
> /* Inject pending hw/sw trap */
> if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
> {
> @@ -411,11 +415,13 @@ void hvm_do_resume(struct vcpu *v)
> }
> }
>
> -static void hvm_init_ioreq_page(
> - struct domain *d, struct hvm_ioreq_page *iorp)
> +static void hvm_init_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
> {
> + struct hvm_ioreq_page *iorp;
> +
> + iorp = buf ? &s->buf_ioreq : &s->ioreq;
> +
> spin_lock_init(&iorp->lock);
> - domain_pause(d);
> }
>
> void destroy_ring_for_helper(
> @@ -431,16 +437,13 @@ void destroy_ring_for_helper(
> }
> }
>
> -static void hvm_destroy_ioreq_page(
> - struct domain *d, struct hvm_ioreq_page *iorp)
> +static void hvm_destroy_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
> {
> - spin_lock(&iorp->lock);
> + struct hvm_ioreq_page *iorp;
>
> - ASSERT(d->is_dying);
> + iorp = buf ? &s->buf_ioreq : &s->ioreq;
>
> destroy_ring_for_helper(&iorp->va, iorp->page);
> -
> - spin_unlock(&iorp->lock);
BTW, is there a reason you're getting rid of the locks here?
-George