xen-devel.lists.xenproject.org archive mirror
From: "Jan Beulich" <JBeulich@novell.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Charles Arnold <CARNOLD@novell.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Keir Fraser <keir@xen.org>
Subject: Re: Xen 4.0.1 "xc_map_foreign_batch: mmap failed: Cannot allocate memory"
Date: Wed, 05 Jan 2011 16:33:50 +0000
Message-ID: <4D24AB7E020000780002A861@vpn.id2.novell.com>
In-Reply-To: <alpine.DEB.2.00.1101051601440.2390@kaball-desktop>

>>> On 05.01.11 at 17:22, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Wed, 5 Jan 2011, Jan Beulich wrote:
>> >>> On 05.01.11 at 15:37, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> wrote:
>> > On Thu, 16 Dec 2010, Keir Fraser wrote:
>> >> On 16/12/2010 20:44, "Charles Arnold" <carnold@novell.com> wrote:
>> >> 
>> >> >>> On 12/16/2010 at 01:33 PM, in message <C9302813.2966F%keir@xen.org>, Keir
>> >> > Fraser <keir@xen.org> wrote:
>> >> >> On 16/12/2010 19:23, "Charles Arnold" <carnold@novell.com> wrote:
>> >> >> 
>> >> >>> The bug is that qemu-dm seems to make the assumption that it can mmap from
>> >> >>> dom0 all the memory with which the guest has been defined instead of the
>> >> >>> memory
>> >> >>> that is actually available on the host.
>> >> >> 
>> >> >> 32-bit dom0? Hm, I thought the qemu mapcache was supposed to limit the total
>> >> >> amount of guest memory mapped at one time, for a 32-bit qemu. For 64-bit
>> >> >> qemu I wouldn't expect to find a limit as low as 3.25G.
>> >> > 
>> >> > Sorry, I should have specified that it is a 64 bit dom0 / hypervisor.
>> >> 
>> >> Okay, well I'm not sure what limit qemu-dm is hitting then. Mapping 3.25G of
>> >> guest memory will only require a few megabytes of pagetables for the qemu
>> >> process in dom0. Perhaps there is a ulimit or something set on the qemu
>> >> process?
>> >> 
>> >> If we can work out and detect this limit, perhaps 64-bit qemu-dm could have
>> >> a mapping cache similar to 32-bit qemu-dm, limited to some fraction of the
>> >> detected mapping limit. And/or, on mapping failure, we could reclaim
>> >> resources by simply zapping the existing cached mappings. Seems there's a
>> >> few options. I don't really maintain qemu-dm myself -- you might get some
>> >> help from Ian Jackson, Stefano, or Anthony Perard if you need more advice.
>> > 
>> > The mapcache size limit should be 64GB on a 64bit qemu-dm.
>> > Any interesting error messages in the qemu logs?
>> 
>> Despite knowing next to nothing about qemu, I'm not certain the
>> mapcache alone matters here: one would expect it to consume memory
>> only for page-table construction, but then Dom0 wouldn't need more
>> memory than the guest in order for the guest to do heavy I/O. There
>> ought to be something that allocates memory in amounts roughly
>> equivalent to what the guest has under I/O.
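[Keir's figure earlier in the thread ("a few megabytes of pagetables" for mapping 3.25G) is easy to sanity-check. A minimal sketch, assuming 4 KiB pages and 8-byte x86-64 PTEs; the function name is illustrative, not qemu's or Xen's:]

```c
#include <stdint.h>

/* Rough x86-64 page-table cost of mapping guest_bytes of foreign memory:
 * one 8-byte PTE per 4 KiB page at the last level. The higher paging
 * levels add roughly 1/512 of this again and are ignored here. */
static uint64_t pt_overhead_bytes(uint64_t guest_bytes)
{
    uint64_t pages = guest_bytes / 4096;   /* 4 KiB leaf pages mapped */
    return pages * 8;                      /* one 8-byte PTE per page */
}
```

[For 3.25 GiB (3489660928 bytes) this gives 6815744 bytes, about 6.5 MiB of last-level page tables, consistent with the "few megabytes" estimate and far below any plausible dom0 memory limit.]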
>  
> Qemu-dm allocates a bounce buffer for each in-flight DMA request,
> because the aio API used in qemu-dm cannot handle scatter-gather
> lists (this is probably the main reason to switch to the new qemu).
> However, the bounce buffer is freed as soon as the DMA request
> completes.

But this means qemu-dm on its own can have bounce buffers for very
close to the guest's total memory in flight at once. Clearly this
should be throttled based on available memory (just consider the case
of multiple such I/O-hungry guests).

Jan
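[The throttling Jan suggests could be sketched as an accounting cap on total in-flight bounce-buffer memory. This is a hypothetical illustration, not qemu-dm's actual code: the names and the 64 MiB cap are invented, and a real implementation would derive the cap from available dom0 memory and retry from the aio completion path.]

```c
#include <stdlib.h>

/* Hypothetical cap on total in-flight bounce-buffer memory (64 MiB here;
 * a real implementation would size this from available dom0 memory). */
#define BOUNCE_LIMIT ((size_t)64 << 20)

static size_t bounce_in_flight;   /* bytes currently allocated */

/* Refuse to allocate past the cap. On NULL the caller would queue the
 * DMA request and retry once an earlier request completes and frees
 * its buffer, instead of letting mmap/malloc fail for the whole run. */
static void *bounce_alloc(size_t len)
{
    if (len > BOUNCE_LIMIT - bounce_in_flight)
        return NULL;              /* would exceed the cap */
    void *buf = malloc(len);
    if (buf)
        bounce_in_flight += len;
    return buf;
}

static void bounce_free(void *buf, size_t len)
{
    free(buf);
    bounce_in_flight -= len;
}
```

[With a scheme like this, total bounce-buffer usage stays bounded regardless of how much I/O the guest — or several guests — keeps in flight.]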

Thread overview: 22+ messages
2010-12-16 19:23 Xen 4.0.1 "xc_map_foreign_batch: mmap failed: Cannot allocate memory" Charles Arnold
2010-12-16 20:33 ` Keir Fraser
2010-12-16 20:44   ` Charles Arnold
2010-12-16 20:54     ` Keir Fraser
2010-12-17  9:22       ` Jan Beulich
2010-12-17 10:06         ` Keir Fraser
2011-01-05 14:37       ` Stefano Stabellini
2011-01-05 15:30         ` Jan Beulich
2011-01-05 16:22           ` Stefano Stabellini
2011-01-05 16:33             ` Jan Beulich [this message]
2011-01-05 17:48               ` Stefano Stabellini
2011-01-05 18:09                 ` Jan Beulich
2011-01-05 17:17         ` Charles Arnold
2011-01-06 16:50           ` Stefano Stabellini
2011-01-06 17:14             ` Charles Arnold
  -- strict thread matches above, loose matches on Subject: below --
2011-01-05 18:10 Jan Beulich
2011-01-06 20:49 Charles Arnold
2011-01-07  9:35 Jan Beulich
2011-01-07 11:18 ` Stefano Stabellini
2011-01-07 12:37   ` Jan Beulich
2011-01-10 11:11     ` Stefano Stabellini
2011-01-07 17:03   ` Charles Arnold
