From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
To: xen-devel@lists.xensource.com
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
Alex Zeffertt <alex.zeffertt@eu.citrix.com>,
Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [PATCH 01/18] xen: reinstate previously unused XENMEM_remove_from_physmap hypercall
Date: Wed, 18 Jan 2012 07:40:22 -0800 [thread overview]
Message-ID: <e1111841be4c16c1c4ebfbee4bc8ed03.squirrel@webmail.lagarcavilla.org> (raw)
In-Reply-To: <mailman.700.1326883173.1471.xen-devel@lists.xensource.com>
> Date: Wed, 18 Jan 2012 10:36:07 +0000
> From: Ian Campbell <Ian.Campbell@citrix.com>
> To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Cc: Alex Zeffertt <alex.zeffertt@eu.citrix.com>,
> "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
> Subject: Re: [Xen-devel] [PATCH 01/18] xen: reinstate previously
> unused XENMEM_remove_from_physmap hypercall
> Message-ID: <1326882968.14689.176.camel@zakaz.uk.xensource.com>
> Content-Type: text/plain; charset="UTF-8"
>
> On Thu, 2012-01-12 at 23:35 +0000, Daniel De Graaf wrote:
>> From: Alex Zeffertt <alex.zeffertt@eu.citrix.com>
>>
>> This patch reinstates the XENMEM_remove_from_physmap hypercall
>> which was removed in 19041:ee62aaafff46 because it was not used.
>>
>> However, it is now needed in order to support xenstored stub domains.
>> The xenstored stub domain is not privileged like dom0 and so cannot
>> unilaterally map the xenbus page of other guests into its address
>> space. Therefore, before creating a domU the domain builder needs to
>> seed its grant table with a grant ref allowing the xenstored stub
>> domain to access the new domU's xenbus page.
>>
>> At present domUs do not start with their grant table mapped.
>> Instead it gets mapped when the guest requests a grant table from
>> the hypervisor.
>>
>> In order to seed the grant table, the domain builder first needs to
>> map it into dom0 address space. But the hypercall to do this
>> requires a gpfn (guest pfn), which is an mfn for PV guests but a pfn
>> for HVM guests. Therefore, in order to seed the grant table of an
>> HVM guest, dom0 needs to *temporarily* map it into the guest's
>> "physical" address space.
>>
>> Hence the need to reinstate the XENMEM_remove_from_physmap hypercall.
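For context, the toolstack-side sequence this enables looks roughly like the
sketch below. It is illustrative only: seed_hvm_gnttab() and scratch_gpfn are
invented names for this example and the error handling is simplified; it only
assumes the libxc-internal do_memory_op() and xc_map_foreign_range() helpers,
and is not the actual domain-builder code.

    /* Sketch only -- not the actual domain-builder implementation. */
    static int seed_hvm_gnttab(xc_interface *xch, domid_t guest,
                               domid_t xs_domid, xen_pfn_t scratch_gpfn,
                               xen_pfn_t xenbus_frame)
    {
        struct xen_add_to_physmap xatp = {
            .domid = guest,
            .space = XENMAPSPACE_grant_table,
            .idx   = 0,             /* first grant-table frame */
            .gpfn  = scratch_gpfn,  /* temporary slot in the guest's physmap */
        };
        struct xen_remove_from_physmap xrfp = {
            .domid = guest,
            .gpfn  = scratch_gpfn,
        };
        grant_entry_v1_t *gnttab;
        int rc;

        /* 1. Temporarily give the grant-table frame a gpfn we can map by. */
        rc = do_memory_op(xch, XENMEM_add_to_physmap, &xatp, sizeof(xatp));
        if ( rc )
            return rc;

        /* 2. Map that frame into dom0 and write the grant for xenstored. */
        gnttab = xc_map_foreign_range(xch, guest, PAGE_SIZE,
                                      PROT_READ | PROT_WRITE, scratch_gpfn);
        if ( gnttab == NULL )
            return -1;
        gnttab[GNTTAB_RESERVED_XENSTORE] = (grant_entry_v1_t) {
            .flags = GTF_permit_access,
            .domid = xs_domid,       /* the xenstored stub domain */
            .frame = xenbus_frame,   /* the new domU's xenbus page */
        };
        munmap(gnttab, PAGE_SIZE);

        /* 3. Undo the temporary mapping -- the reinstated hypercall. */
        return do_memory_op(xch, XENMEM_remove_from_physmap, &xrfp, sizeof(xrfp));
    }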
>>
>> Signed-off-by: Alex Zeffertt <alex.zeffertt@eu.citrix.com>
>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>
> Acked-by: Ian Campbell <ian.campbell@citrix.com> (modulo Jan's comment
> about ordering in xlat.lst)
>
> BTW, since Alex and Diego have subsequently left Citrix you could take
> my Acked-by's in this series as Signed-off-by on behalf of Citrix. I've
> no idea if that's necessary though; I expect not.
>
> Ian.
Nacked-by-me for a couple of reasons, see inline below:
>
>> ---
>>  xen/arch/ia64/xen/mm.c          | 34 ++++++++++++++++++++++++++++++++++
>>  xen/arch/x86/mm.c               | 35 +++++++++++++++++++++++++++++++++++
>>  xen/arch/x86/x86_64/compat/mm.c | 14 ++++++++++++++
>>  xen/include/public/memory.h     | 16 ++++++++++++++++
>>  xen/include/xlat.lst            |  1 +
>>  xen/include/xsm/xsm.h           |  6 ++++++
>>  xen/xsm/dummy.c                 |  6 ++++++
>>  xen/xsm/flask/hooks.c           |  6 ++++++
>>  8 files changed, 118 insertions(+), 0 deletions(-)
>>
>> diff --git a/xen/arch/ia64/xen/mm.c b/xen/arch/ia64/xen/mm.c
>> index 84f6a61..a40f96a 100644
>> --- a/xen/arch/ia64/xen/mm.c
>> +++ b/xen/arch/ia64/xen/mm.c
>> @@ -3448,6 +3448,40 @@ arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
>> break;
>> }
>>
>> + case XENMEM_remove_from_physmap:
>> + {
>> + struct xen_remove_from_physmap xrfp;
>> + unsigned long mfn;
>> + struct domain *d;
>> +
>> + if ( copy_from_guest(&xrfp, arg, 1) )
>> + return -EFAULT;
>> +
>> + rc = rcu_lock_target_domain_by_id(xrfp.domid, &d);
>> + if ( rc != 0 )
>> + return rc;
>> +
>> + if ( xsm_remove_from_physmap(current->domain, d) )
>> + {
>> + rcu_unlock_domain(d);
>> + return -EPERM;
>> + }
>> +
>> + domain_lock(d);
>> +
>> + mfn = gmfn_to_mfn(d, xrfp.gpfn);
Compilation will fail: gmfn_to_mfn has been deprecated on x86. You need to
wrap this with an ifdef (ia64 still uses gmfn_to_mfn) and, in the x86 case,
use get_gfn_untyped.
>> +
>> + if ( mfn_valid(mfn) )
>> + guest_physmap_remove_page(d, xrfp.gpfn, mfn, 0);
>> +
And you need to invoke put_gfn on your way out (no ifdef needed; ia64 has the stub).
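Concretely, the suggested shape would be something like this (untested
sketch; the CONFIG_X86 guard is only for illustration):

    #ifdef CONFIG_X86
        mfn = get_gfn_untyped(d, xrfp.gpfn);
    #else   /* ia64 still provides gmfn_to_mfn() */
        mfn = gmfn_to_mfn(d, xrfp.gpfn);
    #endif

        if ( mfn_valid(mfn) )
            guest_physmap_remove_page(d, xrfp.gpfn, mfn, 0);

        put_gfn(d, xrfp.gpfn);   /* ia64 has a stub, so no ifdef needed here */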
Thanks!
Andres
>> + domain_unlock(d);
>> +
>> + rcu_unlock_domain(d);
>> +
>> + break;
>> + }
>> +
>> +
>> case XENMEM_machine_memory_map:
>> {
>> struct xen_memory_map memmap;
>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> index 1f996e0..39cc3c0 100644
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -4871,6 +4871,41 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
>> return rc;
>> }
>>
>> + case XENMEM_remove_from_physmap:
>> + {
>> + struct xen_remove_from_physmap xrfp;
>> + unsigned long mfn;
>> + struct domain *d;
>> +
>> + if ( copy_from_guest(&xrfp, arg, 1) )
>> + return -EFAULT;
>> +
>> + rc = rcu_lock_target_domain_by_id(xrfp.domid, &d);
>> + if ( rc != 0 )
>> + return rc;
>> +
>> + if ( xsm_remove_from_physmap(current->domain, d) )
>> + {
>> + rcu_unlock_domain(d);
>> + return -EPERM;
>> + }
>> +
>> + domain_lock(d);
>> +
>> + mfn = get_gfn_untyped(d, xrfp.gpfn);
>> +
>> + if ( mfn_valid(mfn) )
>> + guest_physmap_remove_page(d, xrfp.gpfn, mfn, PAGE_ORDER_4K);
>> +
>> + put_gfn(d, xrfp.gpfn);
>> +
>> + domain_unlock(d);
>> +
>> + rcu_unlock_domain(d);
>> +
>> + break;
>> + }
>> +
>> case XENMEM_set_memory_map:
>> {
>> struct xen_foreign_memory_map fmap;
>> diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
>> index bea94fe..dde5430 100644
>> --- a/xen/arch/x86/x86_64/compat/mm.c
>> +++ b/xen/arch/x86/x86_64/compat/mm.c
>> @@ -82,6 +82,20 @@ int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
>> break;
>> }
>>
>> + case XENMEM_remove_from_physmap:
>> + {
>> + struct compat_remove_from_physmap cmp;
>> + struct xen_remove_from_physmap *nat = (void *)COMPAT_ARG_XLAT_VIRT_BASE;
>> +
>> + if ( copy_from_guest(&cmp, arg, 1) )
>> + return -EFAULT;
>> +
>> + XLAT_remove_from_physmap(nat, &cmp);
>> + rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
>> +
>> + break;
>> + }
>> +
>> case XENMEM_set_memory_map:
>> {
>> struct compat_foreign_memory_map cmp;
>> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
>> index c5b78a8..308deff 100644
>> --- a/xen/include/public/memory.h
>> +++ b/xen/include/public/memory.h
>> @@ -229,6 +229,22 @@ struct xen_add_to_physmap {
>> typedef struct xen_add_to_physmap xen_add_to_physmap_t;
>> DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_t);
>>
>> +/*
>> + * Unmaps the page appearing at a particular GPFN from the specified guest's
>> + * pseudophysical address space.
>> + * arg == addr of xen_remove_from_physmap_t.
>> + */
>> +#define XENMEM_remove_from_physmap 15
>> +struct xen_remove_from_physmap {
>> + /* Which domain to change the mapping for. */
>> + domid_t domid;
>> +
>> + /* GPFN of the current mapping of the page. */
>> + xen_pfn_t gpfn;
>> +};
>> +typedef struct xen_remove_from_physmap xen_remove_from_physmap_t;
>> +DEFINE_XEN_GUEST_HANDLE(xen_remove_from_physmap_t);
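As an aside, a caller reaches this through the generic memory_op hypercall. A
minimal, hypothetical guest-kernel-style invocation (domid and scratch_gpfn
are placeholders for this example) would look like:

    struct xen_remove_from_physmap xrfp = {
        .domid = domid,          /* which domain to change the mapping for */
        .gpfn  = scratch_gpfn,   /* gpfn to drop from its physmap */
    };

    rc = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);
    /* rc: 0 on success; -EFAULT/-ESRCH/-EPERM on failure, per the handlers above */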
>> +
>> /*** REMOVED ***/
>> /*#define XENMEM_translate_gpfn_list 8*/
>>
>> diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
>> index 3d92175..ee1772c 100644
>> --- a/xen/include/xlat.lst
>> +++ b/xen/include/xlat.lst
>> @@ -81,6 +81,7 @@
>> ! processor_power platform.h
>> ? processor_px platform.h
>> ! psd_package platform.h
>> +! remove_from_physmap memory.h
>> ? xenpf_pcpuinfo platform.h
>> ? xenpf_pcpu_version platform.h
>> ! sched_poll sched.h
>> diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
>> index df6cec2..566c808 100644
>> --- a/xen/include/xsm/xsm.h
>> +++ b/xen/include/xsm/xsm.h
>> @@ -169,6 +169,7 @@ struct xsm_operations {
>> int (*update_va_mapping) (struct domain *d, struct domain *f, l1_pgentry_t pte);
>> int (*add_to_physmap) (struct domain *d1, struct domain *d2);
>> + int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
>> int (*sendtrigger) (struct domain *d);
>> int (*bind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind);
>> int (*unbind_pt_irq) (struct domain *d);
>> @@ -738,6 +739,11 @@ static inline int xsm_add_to_physmap(struct domain *d1, struct domain *d2)
>> return xsm_call(add_to_physmap(d1, d2));
>> }
>>
>> +static inline int xsm_remove_from_physmap(struct domain *d1, struct domain *d2)
>> +{
>> + return xsm_call(remove_from_physmap(d1, d2));
>> +}
>> +
>> static inline int xsm_sendtrigger(struct domain *d)
>> {
>> return xsm_call(sendtrigger(d));
>> diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
>> index 4bbfbff..65daa4e 100644
>> --- a/xen/xsm/dummy.c
>> +++ b/xen/xsm/dummy.c
>> @@ -529,6 +529,11 @@ static int dummy_add_to_physmap (struct domain *d1, struct domain *d2)
>> return 0;
>> }
>>
>> +static int dummy_remove_from_physmap (struct domain *d1, struct domain *d2)
>> +{
>> + return 0;
>> +}
>> +
>> static int dummy_sendtrigger (struct domain *d)
>> {
>> return 0;
>> @@ -690,6 +695,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
>> set_to_dummy_if_null(ops, mmu_machphys_update);
>> set_to_dummy_if_null(ops, update_va_mapping);
>> set_to_dummy_if_null(ops, add_to_physmap);
>> + set_to_dummy_if_null(ops, remove_from_physmap);
>> set_to_dummy_if_null(ops, sendtrigger);
>> set_to_dummy_if_null(ops, bind_pt_irq);
>> set_to_dummy_if_null(ops, pin_mem_cacheattr);
>> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
>> index 0d35767..a2020a9 100644
>> --- a/xen/xsm/flask/hooks.c
>> +++ b/xen/xsm/flask/hooks.c
>> @@ -1283,6 +1283,11 @@ static int flask_add_to_physmap(struct domain *d1, struct domain *d2)
>> return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__PHYSMAP);
>> }
>>
>> +static int flask_remove_from_physmap(struct domain *d1, struct domain *d2)
>> +{
>> + return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__PHYSMAP);
>> +}
>> +
>> static int flask_sendtrigger(struct domain *d)
>> {
>> return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__TRIGGER);
>> @@ -1550,6 +1555,7 @@ static struct xsm_operations flask_ops = {
>> .mmu_machphys_update = flask_mmu_machphys_update,
>> .update_va_mapping = flask_update_va_mapping,
>> .add_to_physmap = flask_add_to_physmap,
>> + .remove_from_physmap = flask_remove_from_physmap,
>> .sendtrigger = flask_sendtrigger,
>> .get_device_group = flask_get_device_group,
>> .test_assign_device = flask_test_assign_device,
>
>
>
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 18 Jan 2012 11:40:06 +0100
> From: Wei Wang <wei.wang2@amd.com>
> To: Dario Faggioli <raistlin@linux.it>
> Cc: Tim Deegan <tim@xen.org>, "allen.m.kay@intel.com"
> <allen.m.kay@intel.com>, xen-devel@lists.xensource.com, Keir Fraser
> <keir@xen.org>, Jan Beulich <JBeulich@suse.com>
> Subject: Re: [Xen-devel] [PATCHv2 2 of 2] Move IOMMU faults handling
> into softirq for AMD-Vi.
> Message-ID: <4F16A186.4080303@amd.com>
> Content-Type: text/plain; charset="UTF-8"; format=flowed
>
> On 01/18/2012 09:53 AM, Dario Faggioli wrote:
>> On Tue, 2012-01-17 at 11:17 +0000, Keir Fraser wrote:
>>>> Dealing with interrupts from AMD-Vi IOMMU(s) is deferred to a softirq-tasklet,
>>>> raised by the actual IRQ handler. To avoid more interrupts being generated
>>>> (because of further faults), they must be masked in the IOMMU within the
>>>> low-level IRQ handler and re-enabled in the tasklet body. Notice that this
>>>> may cause the log to overflow, but none of the existing entries will be
>>>> overwritten.
>>>>
>>>> Signed-off-by: Dario Faggioli<dario.faggioli@citrix.com>
>>>
>>> This patch needs fixing to apply to xen-unstable tip. Please do that and
>>> resubmit.
>>>
>> I see. I can easily rebase the patch, but there are functional changes
>> involved, so I'd like to know what you think is best to do first.
>>
>> In particular, the clash is against Wei's patches introducing PPR. So
>> now the IOMMU interrupt handler checks both event log and ppr log.
>>
>> Question is, should I move _BOTH_ these checks into softirq or just
>> defer event log processing, and leave ppr log handling in hard-irq
>> context? Quickly looking at the new specs, it seems to me that deferring
>> both should be fine, but I'd really appreciate your thoughts...
>
> I think putting both the event log and the ppr log into softirq is fine. If
> you could prepare a patch like this, I can do a quick test on my machine.
> Thanks,
> Wei
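For orientation, the deferral pattern under discussion is roughly the sketch
below; the names are illustrative and this is not Dario's actual patch. The
hard-IRQ handler only masks the IOMMU's interrupt sources and schedules a
tasklet, and the tasklet drains both logs before unmasking.

    /* Illustrative sketch only. */
    static struct tasklet amd_iommu_irq_tasklet;

    static void do_amd_iommu_irq(unsigned long data)
    {
        /* Drain the event log and the PPR log here, then re-enable
         * (unmask) the corresponding interrupt bits in the IOMMU. */
    }

    static void iommu_interrupt_handler(int irq, void *dev_id,
                                        struct cpu_user_regs *regs)
    {
        /* Mask event-log/PPR-log interrupt generation in the IOMMU so no
         * further IRQs arrive while the tasklet is pending, then defer. */
        tasklet_schedule(&amd_iommu_irq_tasklet);
    }

    /* At init time: tasklet_init(&amd_iommu_irq_tasklet, do_amd_iommu_irq, 0); */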
>
>> Wei, Jan, Tim?
>>
>> Thanks and regards,
>> Dario
>>
>
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 18 Jan 2012 10:39:01 +0000
> From: Ian Campbell <Ian.Campbell@citrix.com>
> To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Cc: Alex Zeffertt <alex.zeffertt@eu.citrix.com>,
> "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, Diego
> Ongaro <diego.ongaro@citrix.com>
> Subject: Re: [Xen-devel] [PATCH 02/18] xen: allow global VIRQ handlers
> to be delegated to other domains
> Message-ID: <1326883141.14689.177.camel@zakaz.uk.xensource.com>
> Content-Type: text/plain; charset="UTF-8"
>
> On Thu, 2012-01-12 at 23:35 +0000, Daniel De Graaf wrote:
>>
>> +static void clear_global_virq_handlers(struct domain *d)
>> +{
>> + uint32_t virq;
>> + int put_count = 0;
>> +
>> + spin_lock(&global_virq_handlers_lock);
>> +
>> + for (virq = 0; virq < NR_VIRQS; virq++) {
>> + if (global_virq_handlers[virq] == d) {
>> + global_virq_handlers[virq] = NULL;
>
> I don't suppose we should rebind to dom0, should we?
>
> Seems like we are pretty hosed if this ever happens in a non-controlled
> manner anyway...
>
>> + put_count++;
>> + }
>> + }
>> +
>> + spin_unlock(&global_virq_handlers_lock);
>> +
>> + while (put_count) {
>> + put_domain(d);
>> + put_count--;
>> + }
>> +}
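If rebinding to dom0 (per the question above) were wanted, the loop body could
become something along these lines -- purely a sketch, and the refcounting
would also need an extra reference on dom0 to balance the put_domain() loop
above:

    if ( global_virq_handlers[virq] == d )
    {
        /* Hypothetical alternative: fall back to dom0 rather than NULL. */
        global_virq_handlers[virq] = dom0;
        put_count++;
    }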
>
>
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 18 Jan 2012 11:39:22 +0100
> From: Juergen Gross <juergen.gross@ts.fujitsu.com>
> To: Keir Fraser <keir.xen@gmail.com>
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH] Allow wake up of offline vcpu via
> nmi-ipi
> Message-ID: <4F16A15A.3040405@ts.fujitsu.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 01/18/2012 10:36 AM, Keir Fraser wrote:
>> On 18/01/2012 09:31, "Keir Fraser"<keir.xen@gmail.com> wrote:
>>
>>> On 18/01/2012 09:07, "Juergen Gross"<juergen.gross@ts.fujitsu.com>
>>> wrote:
>>>
>>>> On 01/18/2012 09:48 AM, Juergen Gross wrote:
>>>>> On a real machine a cpu disabled via hlt with interrupts disabled can be
>>>>> reactivated via an NMI IPI. Enable the hypervisor to do this for HVM, too.
>>>>>
>>>>> Signed-off-by: juergen.gross@ts.fujitsu.com
>>>>>
>>>>>
>>>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>>> xen/arch/x86/hvm/vlapic.c | 5 ++++-
>>>> BTW: I was not able to reactivate a vcpu via an INIT/SIPI/SIPI sequence. It
>>>> works on initial system boot, when the target vcpu is activated for the
>>>> first time. If I deactivate a vcpu and try to activate it again, it will
>>>> start to run, but it does not start at the specified entry point (at least
>>>> it isn't performing the first instruction there).
>>>> Is there some special initialization needed to make this work? Do I have to
>>>> reset something on the vcpu before deactivating it?
>>> No, it should just work. Hvmloader wakes and then sleeps every AP, in
>>> hvmloader/smp.c. So even the first INIT-SIPI wakeup of an AP in the guest OS
>>> is not the first, as hvmloader already did it once! So this path should be
>>> working, and indeed tested, on every HVM guest boot.
>> Bit more info: the INIT-SIPI logic is complicated by needing to avoid
>> deadlocks between two VCPUs attempting to pause and reset each other. But the
>> core dispatch logic is in vlapic_init_sipi_action(). You will see that on
>> INIT, we should call vcpu_reset(), which will de-initialise and VCPU_down the
>> vcpu. And then on SIPI we call hvm_vcpu_reset_state(), which should
>> reinitialise and wake the vcpu to start running at the specified CS:IP.
>>
>> So the above will be good places for you to add tracing and work out what's
>> going on. :-)
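For orientation, the dispatch Keir describes has roughly this shape (a
simplified sketch, not the verbatim vlapic.c code; "icr" and "target" stand in
for the real local variables):

    switch ( icr & APIC_MODE_MASK )
    {
    case APIC_DM_INIT:
        vcpu_reset(target);                      /* de-initialise + VCPU_down */
        break;
    case APIC_DM_STARTUP:
        /* reinitialise and wake, starting at CS:IP = (vector << 8):0000 */
        hvm_vcpu_reset_state(target, (icr & 0xff) << 8, 0);
        break;
    }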
>
> Yeah, thanks for confirming that this should work.
> Printing some diagnostics helped me spot the error in my code.
>
>
> Juergen
>
> --
> Juergen Gross                  Principal Developer Operating Systems
> PDG ES&S SWE OS6               Telephone: +49 (0) 89 3222 2967
> Fujitsu Technology Solutions   e-mail: juergen.gross@ts.fujitsu.com
> Domagkstr. 28                  Internet: ts.fujitsu.com
> D-80807 Muenchen               Company details: ts.fujitsu.com/imprint.html
>
>
>
>
> ------------------------------
>
>
> End of Xen-devel Digest, Vol 83, Issue 277
> ******************************************
>
Thread overview: 11+ messages
[not found] <mailman.700.1326883173.1471.xen-devel@lists.xensource.com>
2012-01-18 15:40 ` Andres Lagar-Cavilla [this message]
[not found] <mailman.745.1326901231.1471.xen-devel@lists.xensource.com>
2012-01-18 15:47 ` [PATCH 01/18] xen: reinstate previously unused XENMEM_remove_from_physmap hypercall Andres Lagar-Cavilla
2012-01-11 17:21 [RFC PATCH 0/18] Xenstore stub domain Daniel De Graaf
2012-01-11 17:21 ` [PATCH 01/18] xen: reinstate previously unused XENMEM_remove_from_physmap hypercall Daniel De Graaf
2012-01-12 8:22 ` Jan Beulich
2012-01-12 23:35 ` [PATCH v2 00/18] Xenstore stub domain Daniel De Graaf
2012-01-12 23:35 ` [PATCH 01/18] xen: reinstate previously unused XENMEM_remove_from_physmap hypercall Daniel De Graaf
2012-01-13 7:56 ` Jan Beulich
2012-01-18 10:36 ` Ian Campbell
2012-01-18 14:56 ` Daniel De Graaf
2012-01-18 16:06 ` Ian Campbell
2012-01-18 19:07 ` Daniel De Graaf
2012-01-19 10:32 ` Ian Campbell