From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
To: xen-devel@lists.xensource.com
Cc: andres@gridcentric.ca, tim@xen.org, adin@gridcentric.ca
Subject: [PATCH 2 of 4] x86/mm: Allow to not sleep on mem event ring
Date: Wed, 15 Feb 2012 22:57:05 -0500
Message-ID: <09746decbd28309ff25c.1329364625@xdev.gridcentric.ca>
In-Reply-To: <patchbomb.1329364623@xdev.gridcentric.ca>

 xen/arch/x86/mm/mem_event.c     |   5 +++--
 xen/include/asm-x86/mem_event.h |  30 +++++++++++++++++++++++++-----
 2 files changed, 28 insertions(+), 7 deletions(-)


Under extreme congestion, generating a mem event may put the vcpu to sleep on a
wait queue if the ring is full. This is generally desirable, but fairly
convoluted to work with, since sleeping on a wait queue requires a non-atomic
context (i.e. no locks held).
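
For context, the pattern callers must follow looks roughly like this (a
hypothetical helper; mem_event_claim_slot() and mem_event_put_request() are
the existing calls):

    /* Hypothetical helper showing the claim/put pattern. Must be called
     * with no locks held, since claiming a slot may put the vcpu to
     * sleep on a wait queue. */
    static void example_post_event(struct domain *d,
                                   struct mem_event_domain *med,
                                   mem_event_request_t *req)
    {
        if ( mem_event_claim_slot(d, med) )
            return; /* -ENOSYS (no ring) or -EBUSY (foreign caller) */
        mem_event_put_request(d, med, req);
    }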

Introduce an allow_sleep flag to make this behaviour optional. The API is
unchanged for existing users: all current callers set allow_sleep to true and
will therefore sleep if necessary.

The intended use is for cases in which losing guest mem events is tolerable.
One such consumer, to be added later, is the unsharing code under ENOMEM
conditions.
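
A hypothetical nosleep consumer could then look like the following sketch
(illustrative only; the real unsharing hook arrives in a later patch):

    /* Illustrative sketch: tolerate losing the event rather than
     * sleeping in a context where sleeping is not allowed. */
    static void example_post_event_nosleep(struct domain *d,
                                           struct mem_event_domain *med,
                                           mem_event_request_t *req)
    {
        if ( mem_event_claim_slot_nosleep(d, med) )
            return; /* ring full or absent: the event is dropped */
        mem_event_put_request(d, med, req);
    }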

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Signed-off-by: Adin Scannell <adin@scannell.ca>

diff -r 11fd4e0a1e1a -r 09746decbd28 xen/arch/x86/mm/mem_event.c
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -441,9 +441,10 @@ bool_t mem_event_check_ring(struct mem_e
  *               0: a spot has been reserved
  *
  */
-int mem_event_claim_slot(struct domain *d, struct mem_event_domain *med)
+int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+                            bool_t allow_sleep)
 {
-    if ( current->domain == d )
+    if ( (current->domain == d) && allow_sleep )
         return mem_event_wait_slot(med);
     else
         return mem_event_grab_slot(med, 1);
diff -r 11fd4e0a1e1a -r 09746decbd28 xen/include/asm-x86/mem_event.h
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -28,11 +28,31 @@
 bool_t mem_event_check_ring(struct mem_event_domain *med);
 
 /* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
- * available space. For success or -EBUSY, the vCPU may be left blocked
- * temporarily to ensure that the ring does not lose future events.  In
- * general, you must follow a claim_slot() call with either put_request() or
- * cancel_slot(), both of which are guaranteed to succeed. */
-int mem_event_claim_slot(struct domain *d, struct mem_event_domain *med);
+ * available space and the caller is a foreign domain. If the guest itself
+ * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
+ * that the ring does not lose future events. 
+ *
+ * However, the allow_sleep flag can be set to false in cases in which it is ok
+ * to lose future events, and thus -EBUSY can be returned to guest vcpus
+ * (handle with care!). 
+ *
+ * In general, you must follow a claim_slot() call with either put_request() or
+ * cancel_slot(), both of which are guaranteed to
+ * succeed. 
+ */
+int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+                            bool_t allow_sleep);
+static inline int mem_event_claim_slot(struct domain *d, 
+                                        struct mem_event_domain *med)
+{
+    return __mem_event_claim_slot(d, med, 1);
+}
+
+static inline int mem_event_claim_slot_nosleep(struct domain *d,
+                                        struct mem_event_domain *med)
+{
+    return __mem_event_claim_slot(d, med, 0);
+}
 
 void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);


Thread overview: 21+ messages
2012-02-16  3:57 [PATCH 0 of 4] Handling of (some) low memory conditions Andres Lagar-Cavilla
2012-02-16  3:57 ` [PATCH 1 of 4] Prevent low values of max_pages for domains doing sharing or paging Andres Lagar-Cavilla
2012-02-16  9:16   ` Jan Beulich
2012-02-16 10:20   ` Tim Deegan
2012-02-16 14:45     ` Andres Lagar-Cavilla
2012-02-16 14:58       ` Tim Deegan
2012-02-16 15:32       ` Jan Beulich
2012-02-16 16:08         ` Tim Deegan
2012-02-16 16:44           ` Jan Beulich
2012-02-16  3:57 ` Andres Lagar-Cavilla [this message]
2012-02-16 16:11   ` [PATCH 2 of 4] x86/mm: Allow to not sleep on mem event ring Tim Deegan
2012-02-17 16:57     ` Andres Lagar-Cavilla
2012-02-16  3:57 ` [PATCH 3 of 4] Memory sharing: better handling of ENOMEM while unsharing Andres Lagar-Cavilla
2012-02-16 16:19   ` Tim Deegan
2012-02-17 17:01     ` Andres Lagar-Cavilla
2012-02-16  3:57 ` [PATCH 4 of 4] Global virq for low memory situations Andres Lagar-Cavilla
2012-02-16  9:31 ` [PATCH 0 of 4] Handling of (some) low memory conditions Jan Beulich
2012-02-16 14:40   ` Andres Lagar-Cavilla
2012-02-16 15:22     ` Jan Beulich
2012-02-16 15:34       ` Andres Lagar-Cavilla
2012-02-16 16:26         ` Jan Beulich
