xen-devel.lists.xenproject.org archive mirror
From: Olaf Hering <olaf@aepfle.de>
To: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Cc: andres@gridcentric.ca, xen-devel@lists.xensource.com,
	tim@xen.org, keir.xen@gmail.com, adin@gridcentric.ca
Subject: Re: [PATCH 1 of 4] Improve ring management for memory events
Date: Thu, 24 Nov 2011 20:23:34 +0100	[thread overview]
Message-ID: <20111124192334.GA26208@aepfle.de> (raw)
In-Reply-To: <ec591bf57167a6212502a1eca68d2334.squirrel@webmail.lagarcavilla.org>

On Thu, Nov 24, Andres Lagar-Cavilla wrote:

> > On Wed, Nov 23, Andres Lagar-Cavilla wrote:
> >
> >> Well, we can tone down printk's to be debug level. I don't think they're
> >> unnecessary if they're made an optional debug tool.
> >
> > There is nothing to debug here, since the callers have to retry anyway.
> >
> >> Question: I have one vcpu, how do I fill up the ring quickly? (outside
> >> of
> >> foreign mappings)
> >
> > Have a balloon driver in the guest and balloon down more than
> > 64*PAGE_SIZE. This is the default at least in my setup, where the kernel
> > driver releases some memory right away (I haven't checked where this is
> > actually configured).
> 
> I see, a guest can call decrease_reservation with an extent_order large
> enough that it will overflow the ring. No matter the size of the ring.
> Isn't preemption of this hypercall a better tactic than putting the vcpu
> on a wait-queue? This won't preclude the need for wait queues, but it
> feels like a much cleaner solution.

Yes, I was thinking about this yesterday as well.
p2m_mem_paging_drop_page() should return -EBUSY, but currently not all
callers of guest_remove_page() look at the return code. Perhaps that can
be fixed.

> With retrying of foreign mappings in xc_map_foreign_bulk (and grants), I
> wonder if we should put events in the ring due to foreign mappings *at
> all*, in the case of congestion. Eventually a retry will get to kick the
> pager.

What do you mean by that?

Olaf


Thread overview: 11+ messages
2011-11-14 21:58 [PATCH 0 of 4] Mem event handling improvements Andres Lagar-Cavilla
2011-11-14 21:58 ` [PATCH 1 of 4] Improve ring management for memory events Andres Lagar-Cavilla
2011-11-23 18:35   ` Olaf Hering
2011-11-23 18:52     ` Andres Lagar-Cavilla
2011-11-23 18:57       ` Olaf Hering
2011-11-24 18:54         ` Andres Lagar-Cavilla
2011-11-24 19:23           ` Olaf Hering [this message]
2011-11-24 19:35             ` Andres Lagar-Cavilla
2011-11-14 21:58 ` [PATCH 2 of 4] Create a generic callback mechanism for Xen-bound event channels Andres Lagar-Cavilla
2011-11-14 21:58 ` [PATCH 3 of 4] Make the prototype of p2m_mem_access_resume consistent Andres Lagar-Cavilla
2011-11-14 21:58 ` [PATCH 4 of 4] Allow memevent responses to be signaled via the event channel Andres Lagar-Cavilla
