From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
James Harper <james.harper@bendigoit.com.au>,
Ian Campbell <ian.campbell@citrix.com>,
Ian Campbell <ian.campbell@xensource.com>,
Scott Rixner <rixner@rice.edu>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
Keir Fraser <keir.xen@gmail.com>
Subject: [PATCH 1/5] xen: events: Process event channels notifications in round-robin order.
Date: Thu, 3 Mar 2011 17:10:11 +0000
Message-ID: <1299172215-29470-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1299172198.6552.14.camel@zakaz.uk.xensource.com>
From: Scott Rixner <rixner@rice.edu>
Avoids a fairness issue caused by domain 0 always processing the
lowest-numbered pending event channel first.

Bugzilla #1115 "Event channel port scanning unfair".
Signed-off-by: Ian Campbell <ian.campbell@xensource.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
[ijc: forward ported from linux-2.6.18-xen.hg 324:7fe1c6d02a2b
various variables have different names in this tree:
l1 -> pending_words
l2 -> pending_bits
l1i -> word_idx
l2i -> bit_idx
]
---
drivers/xen/events.c | 72 +++++++++++++++++++++++++++++++++++++++++++++-----
1 files changed, 65 insertions(+), 7 deletions(-)
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 6befe62..75cc6f5 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -1028,6 +1028,11 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
static DEFINE_PER_CPU(unsigned, xed_nesting_count);
/*
+ * Mask out the i least significant bits of w
+ */
+#define MASK_LSBS(w, i) (w & ((~0UL) << i))
+
+/*
* Search the CPUs pending events bitmasks. For each one found, map
* the event number to an irq, and feed it into do_IRQ() for
* handling.
@@ -1038,6 +1043,9 @@ static DEFINE_PER_CPU(unsigned, xed_nesting_count);
*/
static void __xen_evtchn_do_upcall(void)
{
+ static unsigned int last_word_idx = BITS_PER_LONG - 1;
+ static unsigned int last_bit_idx = BITS_PER_LONG - 1;
+ int word_idx, bit_idx;
int cpu = get_cpu();
struct shared_info *s = HYPERVISOR_shared_info;
struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
@@ -1056,17 +1064,50 @@ static void __xen_evtchn_do_upcall(void)
wmb();
#endif
pending_words = xchg(&vcpu_info->evtchn_pending_sel, 0);
+
+ word_idx = last_word_idx;
+ bit_idx = last_bit_idx;
+
while (pending_words != 0) {
unsigned long pending_bits;
- int word_idx = __ffs(pending_words);
- pending_words &= ~(1UL << word_idx);
+ unsigned long words;
+
+ word_idx = (word_idx + 1) % BITS_PER_LONG;
+ words = MASK_LSBS(pending_words, word_idx);
+
+ /*
+ * If we masked out all events, wrap around to the
+ * beginning.
+ */
+ if (words == 0) {
+ word_idx = BITS_PER_LONG - 1;
+ bit_idx = BITS_PER_LONG - 1;
+ continue;
+ }
+ word_idx = __ffs(words);
- while ((pending_bits = active_evtchns(cpu, s, word_idx)) != 0) {
- int bit_idx = __ffs(pending_bits);
- int port = (word_idx * BITS_PER_LONG) + bit_idx;
- int irq = evtchn_to_irq[port];
+ do {
+ unsigned long bits;
+ int port, irq;
struct irq_desc *desc;
+ pending_bits = active_evtchns(cpu, s, word_idx);
+
+ bit_idx = (bit_idx + 1) % BITS_PER_LONG;
+ bits = MASK_LSBS(pending_bits, bit_idx);
+
+ /* If we masked out all events, move on. */
+ if (bits == 0) {
+ bit_idx = BITS_PER_LONG - 1;
+ break;
+ }
+
+ bit_idx = __ffs(bits);
+
+ /* Process port. */
+ port = (word_idx * BITS_PER_LONG) + bit_idx;
+ irq = evtchn_to_irq[port];
+
mask_evtchn(port);
clear_evtchn(port);
@@ -1075,7 +1116,24 @@ static void __xen_evtchn_do_upcall(void)
if (desc)
generic_handle_irq_desc(irq, desc);
}
- }
+
+ /*
+ * If this is the final port processed, we'll
+ * pick up here+1 next time.
+ */
+ last_word_idx = word_idx;
+ last_bit_idx = bit_idx;
+
+ } while (bit_idx != BITS_PER_LONG - 1);
+
+ pending_bits = active_evtchns(cpu, s, word_idx);
+
+ /*
+ * We handled all ports, so we can clear the
+ * selector bit.
+ */
+ if (pending_bits == 0)
+ pending_words &= ~(1UL << word_idx);
}
BUG_ON(!irqs_disabled());
--
1.5.6.5
Thread overview: 19+ messages
2011-03-03 2:25 unfair servicing of DomU vbd requests James Harper
2011-03-03 7:29 ` Keir Fraser
2011-03-03 8:22 ` Ian Campbell
2011-03-03 8:28 ` James Harper
2011-03-03 8:30 ` Keir Fraser
2011-03-03 17:09 ` [GIT/PATCH 0/5] " Ian Campbell
2011-03-03 17:10 ` Ian Campbell [this message]
2011-03-03 17:10 ` [PATCH 2/5] xen: events: Make last processed event channel a per-cpu variable Ian Campbell
2011-03-09 20:32 ` Konrad Rzeszutek Wilk
2011-03-09 20:40 ` Ian Campbell
2011-03-09 20:47 ` Jeremy Fitzhardinge
2011-03-10 8:30 ` Ian Campbell
2011-03-11 17:46 ` Jeremy Fitzhardinge
2011-03-03 17:10 ` [PATCH 3/5] xen: events: Clean up round-robin evtchn scan Ian Campbell
2011-03-03 17:10 ` [PATCH 4/5] xen: events: Make round-robin scan fairer by snapshotting each l2 word Ian Campbell
2011-03-03 17:10 ` [PATCH 5/5] xen: events: Remove redundant clear of l2i at end of round-robin loop Ian Campbell
2011-03-04 8:40 ` [GIT/PATCH 0/5] Re: unfair servicing of DomU vbd requests John Weekes
2011-03-04 9:15 ` Ian Campbell
2011-03-07 19:33 ` John Weekes