From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
Jean Guyader <jean.guyader@eu.citrix.com>
Cc: tim@xen.org, xen-devel@lists.xensource.com, allen.m.kay@intel.com
Subject: Re: [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range
Date: Thu, 10 Nov 2011 10:21:31 +0000
Message-ID: <CAE1562B.33DB1%keir@xen.org>
In-Reply-To: <4EBBAD770200007800060155@nat28.tlf.novell.com>
On 10/11/2011 09:54, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> On 10.11.11 at 09:44, Jean Guyader <jean.guyader@eu.citrix.com> wrote:
>
> In the native implementation I neither see the XENMAPSPACE_gmfn_range
> case actually being handled in the main switch (did you mean to change
> xatp.space to XENMAPSPACE_gmfn in that case?), nor do I see how you
> communicate back how many of the pages were successfully processed
> when an error occurs in the middle of processing or when a
> continuation is required.
>
> But with the patch being pretty hard to read, maybe I'm simply
> overlooking something?
>
> Further (I realize I should have commented on this earlier), I think that
> in order to allow forward progress you should not check for preemption
> on the very first iteration of each (re-)invocation. That would also
> guarantee no behavioral change to the original single-page variants.
There are plenty of other examples where we check for preemption before
doing any real work (e.g. do_mmuext_op, do_mmu_update). I suppose checking
at the end of the loop body is marginally better; I'm not much bothered
either way.
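
To make the two placements concrete, checking at the end of each iteration
would look something like the sketch below - a rough sketch only, where
xenmem_add_to_physmap_one(), the -EAGAIN convention and the xatp.size
bookkeeping are illustrative rather than lifted from the patch:

    for ( ; xatp.size > 0; xatp.idx++, xatp.gpfn++, xatp.size-- )
    {
        rc = xenmem_add_to_physmap_one(d, &xatp);   /* map one page */
        if ( rc < 0 )
            break;

        /*
         * Preempt only after the body has run: the first page of each
         * (re-)invocation is always processed, so forward progress is
         * guaranteed and the single-page case behaves exactly as before.
         */
        if ( xatp.size > 1 && hypercall_preempt_check() )
        {
            rc = -EAGAIN;   /* caller turns this into a continuation */
            break;
        }
    }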
-- Keir
>> --- a/xen/arch/x86/x86_64/compat/mm.c
>> +++ b/xen/arch/x86/x86_64/compat/mm.c
>> @@ -63,6 +63,16 @@ int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
>>
>> XLAT_add_to_physmap(nat, &cmp);
>> rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
>> + if ( rc < 0 )
>> + return rc;
>> +
>> + if ( rc == __HYPERVISOR_memory_op )
>> + hypercall_xlat_continuation(NULL, 0x2, nat, arg);
>> +
>> + XLAT_add_to_physmap(&cmp, nat);
>> +
>> + if ( copy_to_guest(arg, &cmp, 1) )
>> + return -EFAULT;
>
> Other than in the XENMEM_[gs]et_pod_target cases you (so far, subject to
> the above comment resulting in a behavioral change) don't have any real
> outputs here, and hence there's no need to always do the outbound
> translation - i.e. all of this could be moved into the if ()'s body.
>
> Jan
>
>>
>> break;
>> }
>
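
(For reference, the rearrangement Jan is suggesting would, I think, look
roughly like the sketch below - the same calls as in the quoted hunk, only
with the outbound translation and the copy back to the guest moved inside
the if (); everything else elided.)

    rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
    if ( rc < 0 )
        return rc;

    if ( rc == __HYPERVISOR_memory_op )
    {
        hypercall_xlat_continuation(NULL, 0x2, nat, arg);

        /* Only the continuation path has outputs to hand back. */
        XLAT_add_to_physmap(&cmp, nat);

        if ( copy_to_guest(arg, &cmp, 1) )
            return -EFAULT;
    }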
Thread overview: 18+ messages
2011-11-10 8:43 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v5) Jean Guyader
2011-11-10 8:43 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-10 8:44 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-10 8:44 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-10 8:44 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jean Guyader
2011-11-10 8:44 ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
2011-11-10 8:44 ` [PATCH 6/6] Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush Jean Guyader
2011-11-10 9:54 ` [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range Jan Beulich
2011-11-10 10:15 ` Tim Deegan
2011-11-10 10:20 ` Jean Guyader
2011-11-10 10:18 ` Jean Guyader
2011-11-10 10:21 ` Keir Fraser [this message]
2011-11-10 10:48 ` Jan Beulich