From: David Vrabel <david.vrabel@citrix.com>
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
Andres Lagar-Cavilla <andres@lagarcavilla.com>,
<boris.ostrovsky@oracle.com>
Subject: Re: [PATCH] Xen: Fix retry calls into PRIVCMD_MMAPBATCH*.
Date: Thu, 1 Aug 2013 13:04:45 +0100 [thread overview]
Message-ID: <51FA4EDD.4040509@citrix.com> (raw)
In-Reply-To: <5373EBB3-5CF1-4A2F-B821-1DD788877F75@gridcentric.ca>
On 01/08/13 12:49, Andres Lagar-Cavilla wrote:
> On Aug 1, 2013, at 7:23 AM, David Vrabel <david.vrabel@citrix.com> wrote:
>
>> On 01/08/13 04:30, Andres Lagar-Cavilla wrote:
>>> -- Resend as I haven't seen this hit the lists. Maybe some smtp misconfig. Apologies. Also expanded cc --
>>>
>>> When a foreign mapper attempts to map guest frames that are paged out,
>>> the mapper receives an ENOENT response and will have to try again
>>> while a helper process pages the target frame back in.
>>>
>>> Gating checks on PRIVCMD_MMAPBATCH* ioctl args were preventing retries
>>> of mapping calls.
>>
>> This breaks the auto_translated_physmap case, as it will allocate
>> another set of empty pages and leak the previous set.
>
> David,
> I'm not able to follow you here. Under what circumstances will another
> set of empty pages be allocated? And where? Are we talking about page
> table pages?
....
	vma = find_vma(mm, m.addr);
	if (!vma ||
	    vma->vm_ops != &privcmd_vm_ops ||
	    (m.addr != vma->vm_start) ||
	    ((m.addr + (nr_pages << PAGE_SHIFT)) != vma->vm_end) ||
	    !privcmd_enforce_singleshot_mapping(vma)) {
		up_write(&mm->mmap_sem);
		ret = -EINVAL;
		goto out;
	}

	if (xen_feature(XENFEAT_auto_translated_physmap)) {
		ret = alloc_empty_pages(vma, m.num);

Here.

		if (ret < 0) {
			up_write(&mm->mmap_sem);
			goto out;
		}
	}
>> This privcmd_enforce_singleshot_mapping() stuff seems very odd anyway.
>> Does anyone know what it was for originally? It would be preferable if
>> we could update the mappings with a new set of foreign MFNs without
>> having to tear down the VMA and recreate a new VMA.
>
> I believe it's mostly historical. I agree with you on principle, but recreating VMAs is super-cheap.
Tearing them down is not cheap, as each page requires a trap-and-emulate
to clear the PTE (see ptep_get_and_clear_full() in zap_pte_range()).
David
Thread overview: 8+ messages
[not found] <1375203632-23854-1-git-send-email-andres@lagarcavilla.org>
2013-08-01 3:30 ` [PATCH] Xen: Fix retry calls into PRIVCMD_MMAPBATCH* Andres Lagar-Cavilla
2013-08-01 11:23 ` David Vrabel
2013-08-01 11:49 ` Andres Lagar-Cavilla
2013-08-01 12:04 ` David Vrabel [this message]
2013-08-01 13:30 ` Andres Lagar-Cavilla
2013-08-01 14:26 Andres Lagar-Cavilla
2013-08-09 10:30 ` David Vrabel
[not found] <1376057488-1008-1-git-send-email-andreslc@gridcentric.ca>
2013-08-12 15:58 ` David Vrabel