* [RFC v5 0/5] HPET fix interrupt logic
@ 2014-03-05 15:43 Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 1/5] x86/hpet: Pre cleanup Andrew Cooper
` (4 more replies)
0 siblings, 5 replies; 11+ messages in thread
From: Andrew Cooper @ 2014-03-05 15:43 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Keir Fraser, Jan Beulich, Tim Deegan
This is v5 of the HPET series. It comes with a folded bugfix which prevents
the use of the legacy channel spinlock before it is initialised, and a
substantial reworking of the HPET selection logic in the case of no channels
being free. This avoids the possibility of sleeping forever when using certain
cpu-idle routines. As a result, Reviewed-by tags for the main patch have been
dropped.
Patch 1 is some pre cleanup which was brought forward from the main patch.
Patch 2 is the main chunk of work.
Patch 3 is some post cleanup which could be deferred.
Patches 4 and 5 are just debugging code.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
--
1.7.10.4
* [PATCH v5 1/5] x86/hpet: Pre cleanup
2014-03-05 15:43 [RFC v5 0/5] HPET fix interrupt logic Andrew Cooper
@ 2014-03-05 15:43 ` Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts Andrew Cooper
` (3 subsequent siblings)
4 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2014-03-05 15:43 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Frediano Ziglio, Keir Fraser, Jan Beulich
These changes could be pulled out of the subsequent patch, making it easier
to understand and review.
They are all misc fixes with negligible functional changes.
* Rename hpet_next_event -> hpet_set_counter and convert it to take an
hpet_event_channel pointer rather than a timer index.
* Rename reprogram_hpet_evt_channel -> hpet_program_time
* Move the setting of HPET_EVT_LEGACY within hpet_broadcast_init(); it
didn't need to be where it was.
* Avoid leaking ch->cpumask on error path in hpet_fsb_cap_lookup()
Contains a folded half bugfix from Frediano:
Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org>
---
xen/arch/x86/hpet.c | 33 +++++++++++++++++++++------------
1 file changed, 21 insertions(+), 12 deletions(-)
diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index 3a4f7e8..d7bc29f 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -94,7 +94,12 @@ static inline unsigned long ns2ticks(unsigned long nsec, int shift,
return (unsigned long) tmp;
}
-static int hpet_next_event(unsigned long delta, int timer)
+/*
+ * Program an HPET channel's counter relative to now. 'delta' is specified in
+ * ticks, and should be calculated with ns2ticks(). The channel lock should
+ * be taken and interrupts must be disabled.
+ */
+static int hpet_set_counter(struct hpet_event_channel *ch, unsigned long delta)
{
uint32_t cnt, cmp;
unsigned long flags;
@@ -102,7 +107,7 @@ static int hpet_next_event(unsigned long delta, int timer)
local_irq_save(flags);
cnt = hpet_read32(HPET_COUNTER);
cmp = cnt + delta;
- hpet_write32(cmp, HPET_Tn_CMP(timer));
+ hpet_write32(cmp, HPET_Tn_CMP(ch->idx));
cmp = hpet_read32(HPET_COUNTER);
local_irq_restore(flags);
@@ -110,9 +115,12 @@ static int hpet_next_event(unsigned long delta, int timer)
return ((cmp + 2 - cnt) > delta) ? -ETIME : 0;
}
-static int reprogram_hpet_evt_channel(
- struct hpet_event_channel *ch,
- s_time_t expire, s_time_t now, int force)
+/*
+ * Set the time at which an HPET channel should fire. The channel lock should
+ * be held.
+ */
+static int hpet_program_time(struct hpet_event_channel *ch,
+ s_time_t expire, s_time_t now, int force)
{
int64_t delta;
int ret;
@@ -143,11 +151,11 @@ static int reprogram_hpet_evt_channel(
delta = max_t(int64_t, delta, MIN_DELTA_NS);
delta = ns2ticks(delta, ch->shift, ch->mult);
- ret = hpet_next_event(delta, ch->idx);
+ ret = hpet_set_counter(ch, delta);
while ( ret && force )
{
delta += delta;
- ret = hpet_next_event(delta, ch->idx);
+ ret = hpet_set_counter(ch, delta);
}
return ret;
@@ -209,7 +217,7 @@ again:
spin_lock_irqsave(&ch->lock, flags);
if ( next_event < ch->next_event &&
- reprogram_hpet_evt_channel(ch, next_event, now, 0) )
+ hpet_program_time(ch, next_event, now, 0) )
goto again;
spin_unlock_irqrestore(&ch->lock, flags);
@@ -428,6 +436,8 @@ static void __init hpet_fsb_cap_lookup(void)
if ( hpet_assign_irq(ch) == 0 )
num_hpets_used++;
+ else
+ free_cpumask_var(ch->cpumask);
}
printk(XENLOG_INFO "HPET: %u timers usable for broadcast (%u total)\n",
@@ -583,6 +593,8 @@ void __init hpet_broadcast_init(void)
cfg |= HPET_CFG_LEGACY;
n = 1;
+ hpet_events->flags = HPET_EVT_LEGACY;
+
if ( !force_hpet_broadcast )
pv_rtc_handler = handle_rtc_once;
}
@@ -615,9 +627,6 @@ void __init hpet_broadcast_init(void)
hpet_events[i].msi.msi_attrib.maskbit = 1;
hpet_events[i].msi.msi_attrib.pos = MSI_TYPE_HPET;
}
-
- if ( !num_hpets_used )
- hpet_events->flags = HPET_EVT_LEGACY;
}
void hpet_broadcast_resume(void)
@@ -716,7 +725,7 @@ void hpet_broadcast_enter(void)
spin_lock(&ch->lock);
/* reprogram if current cpu expire time is nearer */
if ( per_cpu(timer_deadline, cpu) < ch->next_event )
- reprogram_hpet_evt_channel(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
+ hpet_program_time(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
spin_unlock(&ch->lock);
}
--
1.7.10.4
* [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts
2014-03-05 15:43 [RFC v5 0/5] HPET fix interrupt logic Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 1/5] x86/hpet: Pre cleanup Andrew Cooper
@ 2014-03-05 15:43 ` Andrew Cooper
2014-03-06 14:11 ` Tim Deegan
` (2 more replies)
2014-03-05 15:43 ` [PATCH v5 3/5] x86/hpet: Post cleanup Andrew Cooper
` (2 subsequent siblings)
4 siblings, 3 replies; 11+ messages in thread
From: Andrew Cooper @ 2014-03-05 15:43 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Frediano Ziglio, Keir Fraser, Jan Beulich,
Tim Deegan
This involves rewriting most of the MSI-related HPET code, and as a result
this patch looks very complicated. It is probably best viewed as an end
result, with the following notes explaining what is going on.
The new logic is as follows:
* A single high priority vector is allocated and used on all cpus.
* Reliance on the irq infrastructure is completely removed.
* Tracking of free HPET channels has changed. It is now a plain
bitmap, and allocation is based on winning a test_and_clear_bit()
operation.
* There is a notion of strict ownership of hpet channels.
** A cpu which owns an HPET channel can program it for a desired deadline.
** A cpu which can't find a free HPET channel will have to share.
** If an HPET firing at an appropriate time can be found (up to 20us late), a
CPU will simply request to be woken up with that HPET.
** Failing to find an appropriately timed HPET, a CPU shall take the soonest
late HPET and program it earlier.
** Failing any late HPETs, a CPU shall wake up with the latest early HPET it
can find.
** Failing all else, a CPU shall retry finding a free HPET. This guarantees
that a CPU will never leave hpet_broadcast_enter() without arranging an
interrupt.
* Some functions have been renamed to be more descriptive. Some functions
have parameters changed to be more consistent.
Contains a folded half bugfix from Frediano:
Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
---
xen/arch/x86/hpet.c | 622 ++++++++++++++++++++++++---------------------------
1 file changed, 294 insertions(+), 328 deletions(-)
diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index d7bc29f..441a3cf 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -4,26 +4,21 @@
* HPET management.
*/
-#include <xen/config.h>
+#include <xen/lib.h>
+#include <xen/init.h>
+#include <xen/cpuidle.h>
#include <xen/errno.h>
-#include <xen/time.h>
-#include <xen/timer.h>
-#include <xen/smp.h>
#include <xen/softirq.h>
-#include <xen/irq.h>
-#include <xen/numa.h>
+
+#include <mach_apic.h>
+
#include <asm/fixmap.h>
#include <asm/div64.h>
#include <asm/hpet.h>
-#include <asm/msi.h>
-#include <mach_apic.h>
-#include <xen/cpuidle.h>
#define MAX_DELTA_NS MILLISECS(10*1000)
#define MIN_DELTA_NS MICROSECS(20)
-#define HPET_EVT_USED_BIT 0
-#define HPET_EVT_USED (1 << HPET_EVT_USED_BIT)
#define HPET_EVT_DISABLE_BIT 1
#define HPET_EVT_DISABLE (1 << HPET_EVT_DISABLE_BIT)
#define HPET_EVT_LEGACY_BIT 2
@@ -36,8 +31,6 @@ struct hpet_event_channel
s_time_t next_event;
cpumask_var_t cpumask;
spinlock_t lock;
- void (*event_handler)(struct hpet_event_channel *);
-
unsigned int idx; /* physical channel idx */
unsigned int cpu; /* msi target */
struct msi_desc msi;/* msi state */
@@ -48,8 +41,20 @@ static struct hpet_event_channel *__read_mostly hpet_events;
/* msi hpet channels used for broadcast */
static unsigned int __read_mostly num_hpets_used;
-DEFINE_PER_CPU(struct hpet_event_channel *, cpu_bc_channel);
+/* High-priority vector for HPET interrupts */
+static u8 __read_mostly hpet_vector;
+/*
+ * HPET channel used for idling. Either the HPET channel this cpu owns
+ * (indicated by channel->cpu pointing back), or the HPET channel belonging to
+ * another cpu with which we have requested to be woken.
+ */
+static DEFINE_PER_CPU(struct hpet_event_channel *, hpet_channel);
+
+/* Bitmap of currently-free HPET channels. */
+static uint32_t free_channels;
+
+/* Data from the HPET ACPI table */
unsigned long __initdata hpet_address;
u8 __initdata hpet_blockid;
@@ -161,89 +166,43 @@ static int hpet_program_time(struct hpet_event_channel *ch,
return ret;
}
-static void evt_do_broadcast(cpumask_t *mask)
+/* Wake up all cpus in the channel mask. Lock should be held. */
+static void hpet_wake_cpus(struct hpet_event_channel *ch)
{
- unsigned int cpu = smp_processor_id();
-
- if ( cpumask_test_and_clear_cpu(cpu, mask) )
- raise_softirq(TIMER_SOFTIRQ);
-
- cpuidle_wakeup_mwait(mask);
-
- if ( !cpumask_empty(mask) )
- cpumask_raise_softirq(mask, TIMER_SOFTIRQ);
+ cpuidle_wakeup_mwait(ch->cpumask);
+ cpumask_raise_softirq(ch->cpumask, TIMER_SOFTIRQ);
}
-static void handle_hpet_broadcast(struct hpet_event_channel *ch)
+/* HPET interrupt handler. Wake all requested cpus. Lock should be held. */
+static void hpet_interrupt_handler(struct hpet_event_channel *ch)
{
- cpumask_t mask;
- s_time_t now, next_event;
- unsigned int cpu;
- unsigned long flags;
-
- spin_lock_irqsave(&ch->lock, flags);
-
-again:
- ch->next_event = STIME_MAX;
-
- spin_unlock_irqrestore(&ch->lock, flags);
-
- next_event = STIME_MAX;
- cpumask_clear(&mask);
- now = NOW();
-
- /* find all expired events */
- for_each_cpu(cpu, ch->cpumask)
- {
- s_time_t deadline;
-
- rmb();
- deadline = per_cpu(timer_deadline, cpu);
- rmb();
- if ( !cpumask_test_cpu(cpu, ch->cpumask) )
- continue;
-
- if ( deadline <= now )
- cpumask_set_cpu(cpu, &mask);
- else if ( deadline < next_event )
- next_event = deadline;
- }
-
- /* wakeup the cpus which have an expired event. */
- evt_do_broadcast(&mask);
-
- if ( next_event != STIME_MAX )
- {
- spin_lock_irqsave(&ch->lock, flags);
-
- if ( next_event < ch->next_event &&
- hpet_program_time(ch, next_event, now, 0) )
- goto again;
-
- spin_unlock_irqrestore(&ch->lock, flags);
- }
+ hpet_wake_cpus(ch);
+ raise_softirq(TIMER_SOFTIRQ);
}
-static void hpet_interrupt_handler(int irq, void *data,
- struct cpu_user_regs *regs)
+/* HPET interrupt entry. This is set up as a high priority vector. */
+static void do_hpet_irq(struct cpu_user_regs *regs)
{
- struct hpet_event_channel *ch = (struct hpet_event_channel *)data;
-
- this_cpu(irq_count)--;
+ struct hpet_event_channel *ch = this_cpu(hpet_channel);
- if ( !ch->event_handler )
+ if ( ch )
{
- printk(XENLOG_WARNING "Spurious HPET timer interrupt on HPET timer %d\n", ch->idx);
- return;
+ spin_lock(&ch->lock);
+ if ( ch->cpu == smp_processor_id() )
+ {
+ ch->next_event = 0;
+ hpet_interrupt_handler(ch);
+ }
+ spin_unlock(&ch->lock);
}
- ch->event_handler(ch);
+ ack_APIC_irq();
}
-static void hpet_msi_unmask(struct irq_desc *desc)
+/* Unmask an HPET MSI channel. Lock should be held */
+static void hpet_msi_unmask(struct hpet_event_channel *ch)
{
u32 cfg;
- struct hpet_event_channel *ch = desc->action->dev_id;
cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
cfg |= HPET_TN_ENABLE;
@@ -251,10 +210,10 @@ static void hpet_msi_unmask(struct irq_desc *desc)
ch->msi.msi_attrib.masked = 0;
}
-static void hpet_msi_mask(struct irq_desc *desc)
+/* Mask an HPET MSI channel. Lock should be held */
+static void hpet_msi_mask(struct hpet_event_channel *ch)
{
u32 cfg;
- struct hpet_event_channel *ch = desc->action->dev_id;
cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
cfg &= ~HPET_TN_ENABLE;
@@ -262,92 +221,36 @@ static void hpet_msi_mask(struct irq_desc *desc)
ch->msi.msi_attrib.masked = 1;
}
-static int hpet_msi_write(struct hpet_event_channel *ch, struct msi_msg *msg)
+/*
+ * Set up the MSI for an HPET channel to point at the allocated cpu, including
+ * interrupt remapping entries when appropriate. The channel lock is expected
+ * to be held, and the MSI must currently be masked.
+ */
+static int hpet_setup_msi(struct hpet_event_channel *ch)
{
- ch->msi.msg = *msg;
+ ASSERT(ch->cpu != -1);
+ ASSERT(ch->msi.msi_attrib.masked == 1);
+
+ msi_compose_msg(hpet_vector, cpumask_of(ch->cpu), &ch->msi.msg);
if ( iommu_intremap )
{
- int rc = iommu_update_ire_from_msi(&ch->msi, msg);
+ int rc = iommu_update_ire_from_msi(&ch->msi, &ch->msi.msg);
if ( rc )
return rc;
}
- hpet_write32(msg->data, HPET_Tn_ROUTE(ch->idx));
- hpet_write32(msg->address_lo, HPET_Tn_ROUTE(ch->idx) + 4);
-
- return 0;
-}
-
-static void __maybe_unused
-hpet_msi_read(struct hpet_event_channel *ch, struct msi_msg *msg)
-{
- msg->data = hpet_read32(HPET_Tn_ROUTE(ch->idx));
- msg->address_lo = hpet_read32(HPET_Tn_ROUTE(ch->idx) + 4);
- msg->address_hi = MSI_ADDR_BASE_HI;
- if ( iommu_intremap )
- iommu_read_msi_from_ire(&ch->msi, msg);
-}
+ hpet_write32(ch->msi.msg.data, HPET_Tn_ROUTE(ch->idx));
+ hpet_write32(ch->msi.msg.address_lo, HPET_Tn_ROUTE(ch->idx) + 4);
-static unsigned int hpet_msi_startup(struct irq_desc *desc)
-{
- hpet_msi_unmask(desc);
return 0;
}
-#define hpet_msi_shutdown hpet_msi_mask
-
-static void hpet_msi_ack(struct irq_desc *desc)
-{
- irq_complete_move(desc);
- move_native_irq(desc);
- ack_APIC_irq();
-}
-
-static void hpet_msi_set_affinity(struct irq_desc *desc, const cpumask_t *mask)
-{
- struct hpet_event_channel *ch = desc->action->dev_id;
- struct msi_msg msg = ch->msi.msg;
-
- msg.dest32 = set_desc_affinity(desc, mask);
- if ( msg.dest32 == BAD_APICID )
- return;
-
- msg.data &= ~MSI_DATA_VECTOR_MASK;
- msg.data |= MSI_DATA_VECTOR(desc->arch.vector);
- msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
- msg.address_lo |= MSI_ADDR_DEST_ID(msg.dest32);
- if ( msg.data != ch->msi.msg.data || msg.dest32 != ch->msi.msg.dest32 )
- hpet_msi_write(ch, &msg);
-}
-
-/*
- * IRQ Chip for MSI HPET Devices,
- */
-static hw_irq_controller hpet_msi_type = {
- .typename = "HPET-MSI",
- .startup = hpet_msi_startup,
- .shutdown = hpet_msi_shutdown,
- .enable = hpet_msi_unmask,
- .disable = hpet_msi_mask,
- .ack = hpet_msi_ack,
- .set_affinity = hpet_msi_set_affinity,
-};
-
-static int __hpet_setup_msi_irq(struct irq_desc *desc)
-{
- struct msi_msg msg;
-
- msi_compose_msg(desc->arch.vector, desc->arch.cpu_mask, &msg);
- return hpet_msi_write(desc->action->dev_id, &msg);
-}
-
-static int __init hpet_setup_msi_irq(struct hpet_event_channel *ch)
+static int __init hpet_init_msi(struct hpet_event_channel *ch)
{
int ret;
u32 cfg = hpet_read32(HPET_Tn_CFG(ch->idx));
- irq_desc_t *desc = irq_to_desc(ch->msi.irq);
if ( iommu_intremap )
{
@@ -358,41 +261,31 @@ static int __init hpet_setup_msi_irq(struct hpet_event_channel *ch)
}
/* set HPET Tn as oneshot */
- cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
+ cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC | HPET_TN_ENABLE);
cfg |= HPET_TN_FSB | HPET_TN_32BIT;
hpet_write32(cfg, HPET_Tn_CFG(ch->idx));
-
- desc->handler = &hpet_msi_type;
- ret = request_irq(ch->msi.irq, hpet_interrupt_handler, "HPET", ch);
- if ( ret >= 0 )
- ret = __hpet_setup_msi_irq(desc);
- if ( ret < 0 )
- {
- if ( iommu_intremap )
- iommu_update_ire_from_msi(&ch->msi, NULL);
- return ret;
- }
-
- desc->msi_desc = &ch->msi;
+ ch->msi.msi_attrib.masked = 1;
return 0;
}
-static int __init hpet_assign_irq(struct hpet_event_channel *ch)
+static void __init hpet_init_channel(struct hpet_event_channel *ch)
{
- int irq;
-
- if ( (irq = create_irq(NUMA_NO_NODE)) < 0 )
- return irq;
+ u64 hpet_rate = hpet_setup();
- ch->msi.irq = irq;
- if ( hpet_setup_msi_irq(ch) )
- {
- destroy_irq(irq);
- return -EINVAL;
- }
+ /*
+ * The period is a femto seconds value. We need to calculate the scaled
+ * math multiplication factor for nanosecond to hpet tick conversion.
+ */
+ ch->mult = div_sc((unsigned long)hpet_rate,
+ 1000000000ul, 32);
+ ch->shift = 32;
+ ch->next_event = STIME_MAX;
+ spin_lock_init(&ch->lock);
- return 0;
+ ch->msi.irq = -1;
+ ch->msi.msi_attrib.maskbit = 1;
+ ch->msi.msi_attrib.pos = MSI_TYPE_HPET;
}
static void __init hpet_fsb_cap_lookup(void)
@@ -412,6 +305,8 @@ static void __init hpet_fsb_cap_lookup(void)
if ( !hpet_events )
return;
+ alloc_direct_apic_vector(&hpet_vector, do_hpet_irq);
+
for ( i = 0; i < num_chs && num_hpets_used < nr_cpu_ids; i++ )
{
struct hpet_event_channel *ch = &hpet_events[num_hpets_used];
@@ -431,10 +326,12 @@ static void __init hpet_fsb_cap_lookup(void)
break;
}
+ hpet_init_channel(ch);
+
ch->flags = 0;
ch->idx = i;
- if ( hpet_assign_irq(ch) == 0 )
+ if ( hpet_init_msi(ch) == 0 )
num_hpets_used++;
else
free_cpumask_var(ch->cpumask);
@@ -444,102 +341,28 @@ static void __init hpet_fsb_cap_lookup(void)
num_hpets_used, num_chs);
}
-static struct hpet_event_channel *hpet_get_channel(unsigned int cpu)
+/*
+ * Search for, and allocate, a free HPET channel. Returns a pointer to the
+ * channel, or NULL in the case that none were free. The caller is
+ * responsible for returning the channel to the free pool.
+ */
+static struct hpet_event_channel *hpet_get_free_channel(void)
{
- static unsigned int next_channel;
- unsigned int i, next;
- struct hpet_event_channel *ch;
+ unsigned ch, tries;
- if ( num_hpets_used == 0 )
- return hpet_events;
-
- if ( num_hpets_used >= nr_cpu_ids )
- return &hpet_events[cpu];
-
- do {
- next = next_channel;
- if ( (i = next + 1) == num_hpets_used )
- i = 0;
- } while ( cmpxchg(&next_channel, next, i) != next );
-
- /* try unused channel first */
- for ( i = next; i < next + num_hpets_used; i++ )
+ for ( tries = num_hpets_used; tries; --tries )
{
- ch = &hpet_events[i % num_hpets_used];
- if ( !test_and_set_bit(HPET_EVT_USED_BIT, &ch->flags) )
- {
- ch->cpu = cpu;
- return ch;
- }
- }
-
- /* share a in-use channel */
- ch = &hpet_events[next];
- if ( !test_and_set_bit(HPET_EVT_USED_BIT, &ch->flags) )
- ch->cpu = cpu;
-
- return ch;
-}
-
-static void set_channel_irq_affinity(struct hpet_event_channel *ch)
-{
- struct irq_desc *desc = irq_to_desc(ch->msi.irq);
-
- ASSERT(!local_irq_is_enabled());
- spin_lock(&desc->lock);
- hpet_msi_mask(desc);
- hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
- hpet_msi_unmask(desc);
- spin_unlock(&desc->lock);
-
- spin_unlock(&ch->lock);
-
- /* We may have missed an interrupt due to the temporary masking. */
- if ( ch->event_handler && ch->next_event < NOW() )
- ch->event_handler(ch);
-}
-
-static void hpet_attach_channel(unsigned int cpu,
- struct hpet_event_channel *ch)
-{
- ASSERT(!local_irq_is_enabled());
- spin_lock(&ch->lock);
-
- per_cpu(cpu_bc_channel, cpu) = ch;
-
- /* try to be the channel owner again while holding the lock */
- if ( !test_and_set_bit(HPET_EVT_USED_BIT, &ch->flags) )
- ch->cpu = cpu;
-
- if ( ch->cpu != cpu )
- spin_unlock(&ch->lock);
- else
- set_channel_irq_affinity(ch);
-}
-
-static void hpet_detach_channel(unsigned int cpu,
- struct hpet_event_channel *ch)
-{
- spin_lock_irq(&ch->lock);
-
- ASSERT(ch == per_cpu(cpu_bc_channel, cpu));
+ if ( (ch = ffs(free_channels)) == 0 )
+ break;
- per_cpu(cpu_bc_channel, cpu) = NULL;
+ --ch;
+ ASSERT(ch < num_hpets_used);
- if ( cpu != ch->cpu )
- spin_unlock_irq(&ch->lock);
- else if ( cpumask_empty(ch->cpumask) )
- {
- ch->cpu = -1;
- clear_bit(HPET_EVT_USED_BIT, &ch->flags);
- spin_unlock_irq(&ch->lock);
- }
- else
- {
- ch->cpu = cpumask_first(ch->cpumask);
- set_channel_irq_affinity(ch);
- local_irq_enable();
+ if ( test_and_clear_bit(ch, &free_channels) )
+ return &hpet_events[ch];
}
+
+ return NULL;
}
#include <asm/mc146818rtc.h>
@@ -563,7 +386,6 @@ void __init hpet_broadcast_init(void)
{
u64 hpet_rate = hpet_setup();
u32 hpet_id, cfg;
- unsigned int i, n;
if ( hpet_rate == 0 || hpet_broadcast_is_available() )
return;
@@ -575,7 +397,7 @@ void __init hpet_broadcast_init(void)
{
/* Stop HPET legacy interrupts */
cfg &= ~HPET_CFG_LEGACY;
- n = num_hpets_used;
+ free_channels = (u32)~0 >> (32 - num_hpets_used);
}
else
{
@@ -587,11 +409,11 @@ void __init hpet_broadcast_init(void)
hpet_events = xzalloc(struct hpet_event_channel);
if ( !hpet_events || !zalloc_cpumask_var(&hpet_events->cpumask) )
return;
- hpet_events->msi.irq = -1;
+
+ hpet_init_channel(hpet_events);
/* Start HPET legacy interrupts */
cfg |= HPET_CFG_LEGACY;
- n = 1;
hpet_events->flags = HPET_EVT_LEGACY;
@@ -601,31 +423,13 @@ void __init hpet_broadcast_init(void)
hpet_write32(cfg, HPET_CFG);
- for ( i = 0; i < n; i++ )
+ if ( cfg & HPET_CFG_LEGACY )
{
- if ( i == 0 && (cfg & HPET_CFG_LEGACY) )
- {
- /* set HPET T0 as oneshot */
- cfg = hpet_read32(HPET_Tn_CFG(0));
- cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
- cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
- hpet_write32(cfg, HPET_Tn_CFG(0));
- }
-
- /*
- * The period is a femto seconds value. We need to calculate the scaled
- * math multiplication factor for nanosecond to hpet tick conversion.
- */
- hpet_events[i].mult = div_sc((unsigned long)hpet_rate,
- 1000000000ul, 32);
- hpet_events[i].shift = 32;
- hpet_events[i].next_event = STIME_MAX;
- spin_lock_init(&hpet_events[i].lock);
- wmb();
- hpet_events[i].event_handler = handle_hpet_broadcast;
-
- hpet_events[i].msi.msi_attrib.maskbit = 1;
- hpet_events[i].msi.msi_attrib.pos = MSI_TYPE_HPET;
+ /* set HPET T0 as oneshot */
+ cfg = hpet_read32(HPET_Tn_CFG(0));
+ cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
+ cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
+ hpet_write32(cfg, HPET_Tn_CFG(0));
}
}
@@ -660,15 +464,24 @@ void hpet_broadcast_resume(void)
for ( i = 0; i < n; i++ )
{
- if ( hpet_events[i].msi.irq >= 0 )
- __hpet_setup_msi_irq(irq_to_desc(hpet_events[i].msi.irq));
-
/* set HPET Tn as oneshot */
cfg = hpet_read32(HPET_Tn_CFG(hpet_events[i].idx));
cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
- cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
- if ( !(hpet_events[i].flags & HPET_EVT_LEGACY) )
+ cfg |= HPET_TN_32BIT;
+
+ /*
+ * Legacy HPET channel enabled here. MSI channels enabled in
+ * hpet_broadcast_init() when claimed by a cpu.
+ */
+ if ( hpet_events[i].flags & HPET_EVT_LEGACY )
+ cfg |= HPET_TN_ENABLE;
+ else
+ {
+ cfg &= ~HPET_TN_ENABLE;
cfg |= HPET_TN_FSB;
+ hpet_events[i].msi.msi_attrib.masked = 1;
+ }
+
hpet_write32(cfg, HPET_Tn_CFG(hpet_events[i].idx));
hpet_events[i].next_event = STIME_MAX;
@@ -705,50 +518,196 @@ void hpet_disable_legacy_broadcast(void)
void hpet_broadcast_enter(void)
{
unsigned int cpu = smp_processor_id();
- struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
+ struct hpet_event_channel *ch = this_cpu(hpet_channel);
+ s_time_t deadline = this_cpu(timer_deadline);
+
+ ASSERT(!local_irq_is_enabled());
+ ASSERT(ch == NULL);
- if ( per_cpu(timer_deadline, cpu) == 0 )
+ if ( deadline == 0 )
return;
- if ( !ch )
- ch = hpet_get_channel(cpu);
+ /* If using HPET in legacy timer mode */
+ if ( num_hpets_used == 0 )
+ {
+ spin_lock(&hpet_events->lock);
- ASSERT(!local_irq_is_enabled());
+ cpumask_set_cpu(cpu, hpet_events->cpumask);
+ if ( deadline < hpet_events->next_event )
+ hpet_program_time(hpet_events, deadline, NOW(), 1);
+
+ spin_unlock(&hpet_events->lock);
+ return;
+ }
+
+retry_free_channel:
+ ch = hpet_get_free_channel();
+
+ if ( ch )
+ {
+ spin_lock(&ch->lock);
+
+ /* This really should be an MSI channel by this point */
+ ASSERT(!(ch->flags & HPET_EVT_LEGACY));
+
+ hpet_msi_mask(ch);
+
+ ch->cpu = cpu;
+ this_cpu(hpet_channel) = ch;
+ cpumask_set_cpu(cpu, ch->cpumask);
+
+ hpet_setup_msi(ch);
+ hpet_program_time(ch, deadline, NOW(), 1);
+ hpet_msi_unmask(ch);
+
+ spin_unlock(&ch->lock);
+ }
+ else
+ {
+ s_time_t best_early_deadline = 0, best_late_deadline = STIME_MAX;
+ unsigned int i, best_early_idx = -1, best_late_idx = -1;
+
+ for ( i = 0; i < num_hpets_used; ++i )
+ {
+ ch = &hpet_events[i];
+ spin_lock(&ch->lock);
+
+ if ( ch->cpu == -1 )
+ goto continue_search;
+
+ /* This channel is going to expire early */
+ if ( ch->next_event < deadline )
+ {
+ if ( ch->next_event > best_early_deadline )
+ {
+ best_early_idx = i;
+ best_early_deadline = ch->next_event;
+ }
+ goto continue_search;
+ }
+
+ /* We can deal with being woken up 20us late */
+ if ( ch->next_event <= deadline + MICROSECS(20) )
+ break;
- if ( !(ch->flags & HPET_EVT_LEGACY) )
- hpet_attach_channel(cpu, ch);
+ /* Otherwise record the best late channel to program forwards */
+ if ( ch->next_event <= best_late_deadline )
+ {
+ best_late_idx = i;
+ best_late_deadline = ch->next_event;
+ }
+
+ continue_search:
+ spin_unlock(&ch->lock);
+ ch = NULL;
+ }
+
+ if ( ch )
+ {
+ /* Found HPET with an appropriate time. Request to be woken up */
+ cpumask_set_cpu(cpu, ch->cpumask);
+ this_cpu(hpet_channel) = ch;
+ spin_unlock(&ch->lock);
+ goto done_searching;
+ }
+
+ /* Try and program the best late channel forwards a bit */
+ if ( best_late_deadline < STIME_MAX && best_late_idx != -1 )
+ {
+ ch = &hpet_events[best_late_idx];
+ spin_lock(&ch->lock);
+
+ /* If this is still the same channel, good */
+ if ( ch->next_event == best_late_deadline )
+ {
+ cpumask_set_cpu(cpu, ch->cpumask);
+ hpet_program_time(ch, deadline, NOW(), 1);
+ spin_unlock(&ch->lock);
+ goto done_searching;
+ }
+ /* else it has fired and changed ownership. */
+ else
+ {
+ spin_unlock(&ch->lock);
+ goto retry_free_channel;
+ }
+ }
+
+ /* Try to piggyback on an early channel in the hope that when we
+ wake back up, our fortunes will improve. */
+ if ( best_early_deadline > 0 && best_early_idx != -1 )
+ {
+ ch = &hpet_events[best_early_idx];
+ spin_lock(&ch->lock);
+
+ /* If this is still the same channel, good */
+ if ( ch->next_event == best_early_deadline )
+ {
+ cpumask_set_cpu(cpu, ch->cpumask);
+ spin_unlock(&ch->lock);
+ goto done_searching;
+ }
+ /* else it has fired and changed ownership. */
+ else
+ {
+ spin_unlock(&ch->lock);
+ goto retry_free_channel;
+ }
+ }
+
+ /* All else has failed, and we have wasted some time searching.
+ * See whether another channel has become free. */
+ goto retry_free_channel;
+ }
+
+done_searching:
/* Disable LAPIC timer interrupts. */
disable_APIC_timer();
- cpumask_set_cpu(cpu, ch->cpumask);
-
- spin_lock(&ch->lock);
- /* reprogram if current cpu expire time is nearer */
- if ( per_cpu(timer_deadline, cpu) < ch->next_event )
- hpet_program_time(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
- spin_unlock(&ch->lock);
}
void hpet_broadcast_exit(void)
{
unsigned int cpu = smp_processor_id();
- struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
+ struct hpet_event_channel *ch = this_cpu(hpet_channel);
+
+ ASSERT(local_irq_is_enabled());
+
+ if ( this_cpu(timer_deadline) == 0 )
+ return;
- if ( per_cpu(timer_deadline, cpu) == 0 )
+ /* If using HPET in legacy timer mode */
+ if ( num_hpets_used == 0 )
+ {
+ /* This is safe without the spinlock, and will reduce contention. */
+ cpumask_clear_cpu(cpu, hpet_events->cpumask);
return;
+ }
if ( !ch )
- ch = hpet_get_channel(cpu);
+ return;
- /* Reprogram the deadline; trigger timer work now if it has passed. */
- enable_APIC_timer();
- if ( !reprogram_timer(per_cpu(timer_deadline, cpu)) )
- raise_softirq(TIMER_SOFTIRQ);
+ spin_lock_irq(&ch->lock);
cpumask_clear_cpu(cpu, ch->cpumask);
- if ( !(ch->flags & HPET_EVT_LEGACY) )
- hpet_detach_channel(cpu, ch);
+ /* If we own the channel, detach it */
+ if ( ch->cpu == cpu )
+ {
+ hpet_msi_mask(ch);
+ hpet_wake_cpus(ch);
+ ch->cpu = -1;
+ set_bit(ch->idx, &free_channels);
+ }
+
+ this_cpu(hpet_channel) = NULL;
+
+ spin_unlock_irq(&ch->lock);
+
+ /* Reprogram the deadline; trigger timer work now if it has passed. */
+ enable_APIC_timer();
+ if ( !reprogram_timer(this_cpu(timer_deadline)) )
+ raise_softirq(TIMER_SOFTIRQ);
}
int hpet_broadcast_is_available(void)
@@ -765,7 +724,14 @@ int hpet_legacy_irq_tick(void)
(hpet_events->flags & (HPET_EVT_DISABLE|HPET_EVT_LEGACY)) !=
HPET_EVT_LEGACY )
return 0;
- hpet_events->event_handler(hpet_events);
+
+ spin_lock_irq(&hpet_events->lock);
+
+ hpet_interrupt_handler(hpet_events);
+ hpet_events->next_event = STIME_MAX;
+
+ spin_unlock_irq(&hpet_events->lock);
+
return 1;
}
--
1.7.10.4
* [PATCH v5 3/5] x86/hpet: Post cleanup
2014-03-05 15:43 [RFC v5 0/5] HPET fix interrupt logic Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 1/5] x86/hpet: Pre cleanup Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts Andrew Cooper
@ 2014-03-05 15:43 ` Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 4/5] x86/hpet: Debug and verbose hpet logging Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 5/5] x86/hpet: debug keyhandlers Andrew Cooper
4 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2014-03-05 15:43 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Keir Fraser, Jan Beulich
These changes could be pulled out of the previous patch.
They are all misc cleanup without functional implications.
* Shift HPET_EVT_* definitions up now that USED has moved out
* Shuffle struct hpet_event_channel
** Reflow horizontally and comment current use
** Promote 'shift' to unsigned. It is the constant 32 but can be more easily
optimised.
** Move 'flags' up to fill 4 byte hole
** Move 'cpumask' and 'lock' into the second cache line as they are dirtied
from other cpus
* The new locking requirements guarantee that interrupts are disabled in
hpet_set_counter(). Leave an ASSERT() just in case.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org>
---
xen/arch/x86/hpet.c | 27 +++++++++++++--------------
1 file changed, 13 insertions(+), 14 deletions(-)
diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index 441a3cf..b441eb2 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -19,22 +19,22 @@
#define MAX_DELTA_NS MILLISECS(10*1000)
#define MIN_DELTA_NS MICROSECS(20)
-#define HPET_EVT_DISABLE_BIT 1
+#define HPET_EVT_DISABLE_BIT 0
#define HPET_EVT_DISABLE (1 << HPET_EVT_DISABLE_BIT)
-#define HPET_EVT_LEGACY_BIT 2
+#define HPET_EVT_LEGACY_BIT 1
#define HPET_EVT_LEGACY (1 << HPET_EVT_LEGACY_BIT)
struct hpet_event_channel
{
- unsigned long mult;
- int shift;
- s_time_t next_event;
- cpumask_var_t cpumask;
- spinlock_t lock;
- unsigned int idx; /* physical channel idx */
- unsigned int cpu; /* msi target */
- struct msi_desc msi;/* msi state */
- unsigned int flags; /* HPET_EVT_x */
+ unsigned long mult; /* tick <-> time conversion */
+ unsigned int shift; /* tick <-> time conversion */
+ unsigned int flags; /* HPET_EVT_x */
+ s_time_t next_event; /* expected time of next interrupt */
+ unsigned int idx; /* HPET counter index */
+ unsigned int cpu; /* owner of channel (or -1) */
+ struct msi_desc msi; /* msi state */
+ cpumask_var_t cpumask; /* cpus wishing to be woken */
+ spinlock_t lock;
} __cacheline_aligned;
static struct hpet_event_channel *__read_mostly hpet_events;
@@ -107,14 +107,13 @@ static inline unsigned long ns2ticks(unsigned long nsec, int shift,
static int hpet_set_counter(struct hpet_event_channel *ch, unsigned long delta)
{
uint32_t cnt, cmp;
- unsigned long flags;
- local_irq_save(flags);
+ ASSERT(!local_irq_is_enabled());
+
cnt = hpet_read32(HPET_COUNTER);
cmp = cnt + delta;
hpet_write32(cmp, HPET_Tn_CMP(ch->idx));
cmp = hpet_read32(HPET_COUNTER);
- local_irq_restore(flags);
/* Are we within two ticks of the deadline passing? Then we may miss. */
return ((cmp + 2 - cnt) > delta) ? -ETIME : 0;
--
1.7.10.4
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v5 4/5] x86/hpet: Debug and verbose hpet logging
2014-03-05 15:43 [RFC v5 0/5] HPET fix interrupt logic Andrew Cooper
` (2 preceding siblings ...)
2014-03-05 15:43 ` [PATCH v5 3/5] x86/hpet: Post cleanup Andrew Cooper
@ 2014-03-05 15:43 ` Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 5/5] x86/hpet: debug keyhandlers Andrew Cooper
4 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2014-03-05 15:43 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Keir Fraser, Jan Beulich
This was for debugging purposes, but might be more useful generally.
I am happy to keep none, some or all of it, depending on how useful people
think it might be.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
xen/arch/x86/hpet.c | 83 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 83 insertions(+)
diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index b441eb2..5f9599c 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -66,6 +66,36 @@ u8 __initdata hpet_blockid;
static bool_t __initdata force_hpet_broadcast;
boolean_param("hpetbroadcast", force_hpet_broadcast);
+static bool_t __read_mostly hpet_verbose;
+static bool_t __read_mostly hpet_debug;
+static void __init parse_hpet_param(char * s)
+{
+ char *ss;
+ int val;
+
+ do {
+ val = !!strncmp(s, "no-", 3);
+ if ( !val )
+ s += 3;
+
+ ss = strchr(s, ',');
+ if ( ss )
+ *ss = '\0';
+
+ if ( !strcmp(s, "verbose") )
+ hpet_verbose = val;
+ else if ( !strcmp(s, "debug") )
+ {
+ hpet_debug = val;
+ if ( val )
+ hpet_verbose = 1;
+ }
+
+ s = ss + 1;
+ } while ( ss );
+}
+custom_param("hpet", parse_hpet_param);
+
/*
* Calculate a multiplication factor for scaled math, which is used to convert
* nanoseconds based values to clock ticks:
@@ -99,6 +129,35 @@ static inline unsigned long ns2ticks(unsigned long nsec, int shift,
return (unsigned long) tmp;
}
+static void dump_hpet_timer(unsigned timer)
+{
+ u32 cfg = hpet_read32(HPET_Tn_CFG(timer));
+
+ printk(XENLOG_INFO "HPET: Timer %02u CFG: raw 0x%08"PRIx32
+ " Caps: %d %c%c", timer, cfg,
+ cfg & HPET_TN_64BIT_CAP ? 64 : 32,
+ cfg & HPET_TN_FSB_CAP ? 'M' : '-',
+ cfg & HPET_TN_PERIODIC_CAP ? 'P' : '-');
+
+ printk("\n Setup: ");
+
+ if ( (cfg & HPET_TN_FSB_CAP) && (cfg & HPET_TN_FSB) )
+ printk("FSB ");
+
+ if ( !(cfg & HPET_TN_FSB) )
+ printk("GSI %#x ",
+ (cfg & HPET_TN_ROUTE) >> HPET_TN_ROUTE_SHIFT);
+
+ if ( cfg & HPET_TN_32BIT )
+ printk("32bit ");
+
+ if ( cfg & HPET_TN_PERIODIC )
+ printk("Periodic ");
+
+ printk("%sabled ", cfg & HPET_TN_ENABLE ? "En" : "Dis");
+ printk("%s\n", cfg & HPET_TN_LEVEL ? "Level" : "Edge");
+}
+
/*
* Program an HPET channels counter relative to now. 'delta' is specified in
* ticks, and should be calculated with ns2ticks(). The channel lock should
@@ -743,7 +802,14 @@ u64 __init hpet_setup(void)
unsigned int last;
if ( hpet_rate )
+ {
+ if ( hpet_debug )
+ printk(XENLOG_DEBUG "HPET: Skipping re-setup\n");
return hpet_rate;
+ }
+
+ if ( hpet_debug )
+ printk(XENLOG_DEBUG "HPET: Setting up hpet data\n");
if ( hpet_address == 0 )
return 0;
@@ -757,6 +823,20 @@ u64 __init hpet_setup(void)
return 0;
}
+ if ( hpet_verbose )
+ {
+ printk(XENLOG_INFO "HPET: Vendor: %04"PRIx16", Rev: %u, %u timers\n",
+ hpet_id >> HPET_ID_VENDOR_SHIFT,
+ hpet_id & HPET_ID_REV,
+ ((hpet_id & HPET_ID_NUMBER) >> HPET_ID_NUMBER_SHIFT) + 1);
+ printk(XENLOG_INFO "HPET: Caps: ");
+ if ( hpet_id & HPET_ID_LEGSUP )
+ printk("Legacy ");
+ if ( hpet_id & HPET_ID_64BIT )
+ printk("64bit ");
+ printk("\n");
+ }
+
/* Check for sane period (100ps <= period <= 100ns). */
hpet_period = hpet_read32(HPET_PERIOD);
if ( (hpet_period > 100000000) || (hpet_period < 100000) )
@@ -814,6 +894,9 @@ void hpet_resume(u32 *boot_cfg)
cfg &= ~HPET_TN_RESERVED;
}
hpet_write32(cfg, HPET_Tn_CFG(i));
+
+ if ( hpet_verbose )
+ dump_hpet_timer(i);
}
cfg = hpet_read32(HPET_CFG);
--
1.7.10.4
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v5 5/5] x86/hpet: debug keyhandlers
2014-03-05 15:43 [RFC v5 0/5] HPET fix interrupt logic Andrew Cooper
` (3 preceding siblings ...)
2014-03-05 15:43 ` [PATCH v5 4/5] x86/hpet: Debug and verbose hpet logging Andrew Cooper
@ 2014-03-05 15:43 ` Andrew Cooper
4 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2014-03-05 15:43 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper
Debug key for dumping HPET state.
This patch is not intended for committing.
---
xen/arch/x86/hpet.c | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index 5f9599c..e8b35ca 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -9,6 +9,7 @@
#include <xen/cpuidle.h>
#include <xen/errno.h>
#include <xen/softirq.h>
+#include <xen/keyhandler.h>
#include <mach_apic.h>
@@ -68,6 +69,7 @@ boolean_param("hpetbroadcast", force_hpet_broadcast);
static bool_t __read_mostly hpet_verbose;
static bool_t __read_mostly hpet_debug;
+static bool_t __initdata hpet_debug_tick;
static void __init parse_hpet_param(char * s)
{
char *ss;
@@ -90,6 +92,8 @@ static void __init parse_hpet_param(char * s)
if ( val )
hpet_verbose = 1;
}
+ else if ( !strcmp(s, "tick") )
+ hpet_debug_tick = val;
s = ss + 1;
} while ( ss );
@@ -795,6 +799,33 @@ int hpet_legacy_irq_tick(void)
static u32 *hpet_boot_cfg;
+static void do_hpet_dump_state(unsigned char key)
+{
+ unsigned i;
+ printk("'%c' pressed - dumping HPET state\n", key);
+
+ for ( i = 0; i < num_hpets_used; ++i )
+ dump_hpet_timer(i);
+}
+
+static struct keyhandler hpet_dump_state = {
+ .irq_callback = 0,
+ .u.fn = do_hpet_dump_state,
+ .desc = "Dump hpet state"
+};
+
+static struct timer hpet_dbg_tick;
+static void hpet_dbg_tick_fn(void *data)
+{
+ static s_time_t last = 0;
+ s_time_t now = NOW();
+
+ printk("In HPET debug tick. Time is %"PRId64", delta is %"PRId64"\n",
+ now, now - last);
+ set_timer(&hpet_dbg_tick, now + SECONDS(5));
+ last = now;
+}
+
u64 __init hpet_setup(void)
{
static u64 __initdata hpet_rate;
@@ -852,6 +883,14 @@ u64 __init hpet_setup(void)
hpet_rate = 1000000000000000ULL; /* 10^15 */
(void)do_div(hpet_rate, hpet_period);
+ register_keyhandler('1', &hpet_dump_state);
+
+ if ( hpet_debug_tick )
+ {
+ init_timer(&hpet_dbg_tick, hpet_dbg_tick_fn, NULL, 0);
+ set_timer(&hpet_dbg_tick, NOW() + SECONDS(5));
+ }
+
return hpet_rate;
}
--
1.7.10.4
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts
2014-03-05 15:43 ` [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts Andrew Cooper
@ 2014-03-06 14:11 ` Tim Deegan
2014-03-06 14:33 ` Jan Beulich
2014-03-06 16:08 ` Jan Beulich
2 siblings, 0 replies; 11+ messages in thread
From: Tim Deegan @ 2014-03-06 14:11 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Frediano Ziglio, Keir Fraser, Jan Beulich, Xen-devel
At 15:43 +0000 on 05 Mar (1394030623), Andrew Cooper wrote:
> This involves rewriting most of the MSI related HPET code, and as a result
> this patch looks very complicated. It is probably best viewed as an end
> result, with the following notes explaining what is going on.
>
> The new logic is as follows:
> * A single high priority vector is allocated and used on all cpus.
> * Reliance on the irq infrastructure is completely removed.
> * Tracking of free hpet channels has changed. It is now an individual
> bitmap, and allocation is based on winning a test_and_clear_bit()
> operation.
> * There is a notion of strict ownership of hpet channels.
> ** A cpu which owns an HPET channel can program it for a desired deadline.
> ** A cpu which can't find a free HPET channel will have to share.
>
> ** If an HPET firing at an appropriate time can be found (up to 20us late), a
> CPU will simply request to be woken up with that HPET.
> ** Failing finding an appropriate timed HPET, a CPU shall find the soonest
> late HPET and program it earlier.
> ** Failing any late HPETs, a CPU shall wake up with the latest early HPET it
> can find.
> ** Failing all else, a CPU shall retry to find a free HPET. This guarantees
> that a CPU will never leave hpet_broadcast_enter() without arranging an
> interrupt.
> * Some functions have been renamed to be more descriptive. Some functions
> have parameters changed to be more consistent.
>
> Contains a folded half bugfix from Frediano:
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Tim Deegan <tim@xen.org>
Reviewed-by: Tim Deegan <tim@xen.org>
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts
2014-03-05 15:43 ` [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts Andrew Cooper
2014-03-06 14:11 ` Tim Deegan
@ 2014-03-06 14:33 ` Jan Beulich
2014-03-06 14:40 ` Andrew Cooper
2014-03-06 16:08 ` Jan Beulich
2 siblings, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2014-03-06 14:33 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Tim Deegan, Frediano Ziglio, Keir Fraser, Xen-devel
>>> On 05.03.14 at 16:43, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> ** If an HPET firing at an appropriate time can be found (up to 20us late), a
> CPU will simply request to be woken up with that HPET.
With exit latencies from C1/C1E in the range of 1...10us, 20us seems
like a lot of additional latency added here.
> ** Failing finding an appropriate timed HPET, a CPU shall find the soonest
> late HPET and program it earlier.
> ** Failing any late HPETs, a CPU shall wake up with the latest early HPET it
> can find.
And do what?
> ** Failing all else, a CPU shall retry to find a free HPET. This guarantees
> that a CPU will never leave hpet_broadcast_enter() without arranging an
> interrupt.
For how long? Indefinitely (i.e. until the wakeup time is reached)?
All without having looked at the details of the patch yet.
Jan
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts
2014-03-06 14:33 ` Jan Beulich
@ 2014-03-06 14:40 ` Andrew Cooper
2014-03-06 15:38 ` Jan Beulich
0 siblings, 1 reply; 11+ messages in thread
From: Andrew Cooper @ 2014-03-06 14:40 UTC (permalink / raw)
To: Jan Beulich; +Cc: Tim Deegan, Frediano Ziglio, Keir Fraser, Xen-devel
On 06/03/14 14:33, Jan Beulich wrote:
>>>> On 05.03.14 at 16:43, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> ** If an HPET firing at an appropriate time can be found (up to 20us late), a
>> CPU will simply request to be woken up with that HPET.
> With exit latencies from C1/C1E in the range of 1...10us, 20us seems
> like a lot of additional latency added here.
Well - it is down from 50us, and also not ahead of time. Any narrowing
of this window does mean higher contention when fighting over the
remaining hpets.
>
>> ** Failing finding an appropriate timed HPET, a CPU shall find the soonest
>> late HPET and program it earlier.
>> ** Failing any late HPETs, a CPU shall wake up with the latest early HPET it
>> can find.
> And do what?
Wake up early, in the hope that the hpet arrangements are different when
it next comes to look.
>
>> ** Failing all else, a CPU shall retry to find a free HPET. This guarantees
>> that a CPU will never leave hpet_broadcast_enter() without arranging an
>> interrupt.
> For how long? Indefinitely (i.e. until the wakeup time is reached)?
>
> All without having looked at the details of the patch yet.
>
> Jan
>
Forever. There are certain sleep paths which cannot be aborted by this
point, so exiting without having set up a wakeup is not an option.
A different option would be to make all sleep paths abortable, at which
point my v4 series would be appropriate (plus the spinlock bugfix).
~Andrew
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts
2014-03-06 14:40 ` Andrew Cooper
@ 2014-03-06 15:38 ` Jan Beulich
0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2014-03-06 15:38 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Tim Deegan, Frediano Ziglio, Keir Fraser, Xen-devel
>>> On 06.03.14 at 15:40, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 06/03/14 14:33, Jan Beulich wrote:
>>>>> On 05.03.14 at 16:43, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> ** Failing all else, a CPU shall retry to find a free HPET. This guarantees
>>> that a CPU will never leave hpet_broadcast_enter() without arranging an
>>> interrupt.
>> For how long? Indefinitely (i.e. until the wakeup time is reached)?
>>
>> All without having looked at the details of the patch yet.
>
> Forever. There are certain sleep paths which cannot be aborted by this
> point, so exiting without having set up a wakup is not an option.
>
> A different option would be to make all sleep paths abortable, at which
> point my v4 series would be appropriate (plus spinlock bugfix)
Or simply always force another channel to an earlier wakeup.
But then again - how would we get into that state in the first
place? There can't be a state with neither late nor early channels. If this is
really just to cope with possible races (which the description
didn't say), then I think I'm fine with the abstract approach.
Jan
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts
2014-03-05 15:43 ` [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts Andrew Cooper
2014-03-06 14:11 ` Tim Deegan
2014-03-06 14:33 ` Jan Beulich
@ 2014-03-06 16:08 ` Jan Beulich
2 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2014-03-06 16:08 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Tim Deegan, Frediano Ziglio, Keir Fraser, Xen-devel
>>> On 05.03.14 at 16:43, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> +/* Wake up all cpus in the channel mask. Lock should be held. */
> +static void hpet_wake_cpus(struct hpet_event_channel *ch)
> {
> - unsigned int cpu = smp_processor_id();
> -
> - if ( cpumask_test_and_clear_cpu(cpu, mask) )
> - raise_softirq(TIMER_SOFTIRQ);
> -
> - cpuidle_wakeup_mwait(mask);
> -
> - if ( !cpumask_empty(mask) )
> - cpumask_raise_softirq(mask, TIMER_SOFTIRQ);
> + cpuidle_wakeup_mwait(ch->cpumask);
> + cpumask_raise_softirq(ch->cpumask, TIMER_SOFTIRQ);
> }
>
> -static void handle_hpet_broadcast(struct hpet_event_channel *ch)
> +/* HPET interrupt handler. Wake all requested cpus. Lock should be held.
> */
> +static void hpet_interrupt_handler(struct hpet_event_channel *ch)
> {
> - cpumask_t mask;
> - s_time_t now, next_event;
> - unsigned int cpu;
> - unsigned long flags;
> -
> - spin_lock_irqsave(&ch->lock, flags);
> -
> -again:
> - ch->next_event = STIME_MAX;
> -
> - spin_unlock_irqrestore(&ch->lock, flags);
> -
> - next_event = STIME_MAX;
> - cpumask_clear(&mask);
> - now = NOW();
> -
> - /* find all expired events */
> - for_each_cpu(cpu, ch->cpumask)
> - {
> - s_time_t deadline;
> -
> - rmb();
> - deadline = per_cpu(timer_deadline, cpu);
> - rmb();
> - if ( !cpumask_test_cpu(cpu, ch->cpumask) )
> - continue;
> -
> - if ( deadline <= now )
> - cpumask_set_cpu(cpu, &mask);
> - else if ( deadline < next_event )
> - next_event = deadline;
> - }
> -
> - /* wakeup the cpus which have an expired event. */
> - evt_do_broadcast(&mask);
> -
> - if ( next_event != STIME_MAX )
> - {
> - spin_lock_irqsave(&ch->lock, flags);
> -
> - if ( next_event < ch->next_event &&
> - hpet_program_time(ch, next_event, now, 0) )
> - goto again;
> -
> - spin_unlock_irqrestore(&ch->lock, flags);
> - }
> + hpet_wake_cpus(ch);
> + raise_softirq(TIMER_SOFTIRQ);
hpet_wake_cpus() just did a cpumask_raise_softirq()?
> +static void __init hpet_init_channel(struct hpet_event_channel *ch)
> {
> - int irq;
> -
> - if ( (irq = create_irq(NUMA_NO_NODE)) < 0 )
> - return irq;
> + u64 hpet_rate = hpet_setup();
>
> - ch->msi.irq = irq;
> - if ( hpet_setup_msi_irq(ch) )
> - {
> - destroy_irq(irq);
> - return -EINVAL;
> - }
> + /*
> + * The period is a femto seconds value. We need to calculate the scaled
> + * math multiplication factor for nanosecond to hpet tick conversion.
> + */
> + ch->mult = div_sc((unsigned long)hpet_rate,
> + 1000000000ul, 32);
> + ch->shift = 32;
> + ch->next_event = STIME_MAX;
> + spin_lock_init(&ch->lock);
>
> - return 0;
> + ch->msi.irq = -1;
> + ch->msi.msi_attrib.maskbit = 1;
> + ch->msi.msi_attrib.pos = MSI_TYPE_HPET;
I agree that this should be kept for consistency, but it reminds me
to ask whether you have anything in mind to make the MSI related
information dumpable again now that it's disconnected from the
IRQ system (and hence invisible to dump_msi()). Not even the
subsequent optional debugging patches appear to be doing that.
And if it's not easily integrateable, I wonder whether the msi field
of struct hpet_event_channel is really still needed. (If not,
replacing it with just the still-used fields should probably be a follow-up
patch unless the one here is touching [almost] all relevant places
already anyway.)
> void hpet_broadcast_enter(void)
> {
> unsigned int cpu = smp_processor_id();
> - struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
> + struct hpet_event_channel *ch = this_cpu(hpet_channel);
> + s_time_t deadline = this_cpu(timer_deadline);
> +
> + ASSERT(!local_irq_is_enabled());
> + ASSERT(ch == NULL);
>
> - if ( per_cpu(timer_deadline, cpu) == 0 )
> + if ( deadline == 0 )
> return;
>
> - if ( !ch )
> - ch = hpet_get_channel(cpu);
> + /* If using HPET in legacy timer mode */
> + if ( num_hpets_used == 0 )
> + {
> + spin_lock(&hpet_events->lock);
>
> - ASSERT(!local_irq_is_enabled());
> + cpumask_set_cpu(cpu, hpet_events->cpumask);
> + if ( deadline < hpet_events->next_event )
> + hpet_program_time(hpet_events, deadline, NOW(), 1);
> +
> + spin_unlock(&hpet_events->lock);
> + return;
> + }
> +
> +retry_free_channel:
> + ch = hpet_get_free_channel();
> +
> + if ( ch )
> + {
> + spin_lock(&ch->lock);
> +
> + /* This really should be an MSI channel by this point */
> + ASSERT(!(ch->flags & HPET_EVT_LEGACY));
> +
> + hpet_msi_mask(ch);
> +
> + ch->cpu = cpu;
> + this_cpu(hpet_channel) = ch;
> + cpumask_set_cpu(cpu, ch->cpumask);
> +
> + hpet_setup_msi(ch);
> + hpet_program_time(ch, deadline, NOW(), 1);
> + hpet_msi_unmask(ch);
> +
> + spin_unlock(&ch->lock);
> + }
> + else
> + {
> + s_time_t best_early_deadline = 0, best_late_deadline = STIME_MAX;
> + unsigned int i, best_early_idx = -1, best_late_idx = -1;
> +
> + for ( i = 0; i < num_hpets_used; ++i )
> + {
> + ch = &hpet_events[i];
> + spin_lock(&ch->lock);
> +
> + if ( ch->cpu == -1 )
> + goto continue_search;
> +
> + /* This channel is going to expire early */
> + if ( ch->next_event < deadline )
> + {
> + if ( ch->next_event > best_early_deadline )
> + {
> + best_early_idx = i;
> + best_early_deadline = ch->next_event;
> + }
> + goto continue_search;
> + }
> +
> + /* We can deal with being woken up 20us late */
> + if ( ch->next_event <= deadline + MICROSECS(20) )
> + break;
Hmm, no - the loop only has a handful of iterations, so I don't see
a need to try to bail early. If there is a better one, we should use
it.
Furthermore, if you initialized best_late_deadline to deadline +
MICROSECS(20) right away, you wouldn't need an extra
conditional here at all.
>
> - if ( !(ch->flags & HPET_EVT_LEGACY) )
> - hpet_attach_channel(cpu, ch);
> + /* Otherwise record the best late channel to program forwards */
> + if ( ch->next_event <= best_late_deadline )
> + {
> + best_late_idx = i;
> + best_late_deadline = ch->next_event;
> + }
> +
> + continue_search:
> + spin_unlock(&ch->lock);
> + ch = NULL;
> + }
> +
> + if ( ch )
> + {
> + /* Found HPET with an appropriate time. Request to be woken up */
> + cpumask_set_cpu(cpu, ch->cpumask);
> + this_cpu(hpet_channel) = ch;
> + spin_unlock(&ch->lock);
> + goto done_searching;
> + }
> +
> + /* Try and program the best late channel forwards a bit */
> + if ( best_late_deadline < STIME_MAX && best_late_idx != -1 )
If you follow the above, this become moot anyway, but if not -
why the double condition?
> + {
> + ch = &hpet_events[best_late_idx];
> + spin_lock(&ch->lock);
> +
> + /* If this is still the same channel, good */
> + if ( ch->next_event == best_late_deadline )
I think I commented on this the first time through already: There's
no reason to not use this channel even if the above condition
doesn't hold - as long as the condition we require holds. In
particular, if another CPU already moved the channel to a slightly
earlier wakeup, this might even be beneficial to us (up to saving
us from having to reprogram the channel).
> + {
> + cpumask_set_cpu(cpu, ch->cpumask);
> + hpet_program_time(ch, deadline, NOW(), 1);
hpet_program_time()'s force parameter, btw., only ever gets 1
passed as argument - did you consider removing the pointless
parameter?
> + spin_unlock(&ch->lock);
> + goto done_searching;
> + }
> + /* else it has fired and changed ownership. */
> + else
> + {
> + spin_unlock(&ch->lock);
> + goto retry_free_channel;
> + }
> + }
> +
> + /* Try to piggyback on an early channel in the hope that when we
> + wake back up, our fortunes will improve. */
> + if ( best_early_deadline > 0 && best_early_idx != -1 )
Same question as above regarding the double condition.
> + {
> + ch = &hpet_events[best_early_idx];
> + spin_lock(&ch->lock);
> +
> + /* If this is still the same channel, good */
> + if ( ch->next_event == best_early_deadline )
Similar comment as earlier on.
> + {
> + cpumask_set_cpu(cpu, ch->cpumask);
> + spin_unlock(&ch->lock);
> + goto done_searching;
> + }
> + /* else it has fired and changed ownership. */
> + else
> + {
> + spin_unlock(&ch->lock);
> + goto retry_free_channel;
> + }
> + }
> +
> + /* All else has failed, and we have wasted some time searching.
> + * See whether another channel has become free. */
> + goto retry_free_channel;
> + }
> +
> +done_searching:
>
> /* Disable LAPIC timer interrupts. */
> disable_APIC_timer();
> - cpumask_set_cpu(cpu, ch->cpumask);
> -
> - spin_lock(&ch->lock);
> - /* reprogram if current cpu expire time is nearer */
> - if ( per_cpu(timer_deadline, cpu) < ch->next_event )
> - hpet_program_time(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
> - spin_unlock(&ch->lock);
> }
>
> void hpet_broadcast_exit(void)
> {
> unsigned int cpu = smp_processor_id();
> - struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
> + struct hpet_event_channel *ch = this_cpu(hpet_channel);
In cases where you have latched smp_processor_id() already
anyway, using per_cpu() is actually cheaper than this_cpu().
> +
> + ASSERT(local_irq_is_enabled());
> +
> + if ( this_cpu(timer_deadline) == 0 )
> + return;
>
> - if ( per_cpu(timer_deadline, cpu) == 0 )
> + /* If using HPET in legacy timer mode */
> + if ( num_hpets_used == 0 )
> + {
> + /* This is safe without the spinlock, and will reduce contention. */
> + cpumask_clear_cpu(cpu, hpet_events->cpumask);
> return;
> + }
>
> if ( !ch )
> - ch = hpet_get_channel(cpu);
> + return;
>
> - /* Reprogram the deadline; trigger timer work now if it has passed. */
> - enable_APIC_timer();
> - if ( !reprogram_timer(per_cpu(timer_deadline, cpu)) )
> - raise_softirq(TIMER_SOFTIRQ);
> + spin_lock_irq(&ch->lock);
>
> cpumask_clear_cpu(cpu, ch->cpumask);
>
> - if ( !(ch->flags & HPET_EVT_LEGACY) )
> - hpet_detach_channel(cpu, ch);
> + /* If we own the channel, detach it */
> + if ( ch->cpu == cpu )
> + {
> + hpet_msi_mask(ch);
There's an imbalance of mask/unmask operations here. While it
looks like this is correct, it is certainly not efficient - the other call
site of hpet_msi_mask() is then likely to find the channel already
masked, and considering the relatively long time MMIO accesses
take, I would think it would be beneficial to at least avoid the
pointless write there if the channel is already masked (for
symmetry the same might then be worthwhile doing also in
hpet_msi_unmask()).
> + hpet_wake_cpus(ch);
> + ch->cpu = -1;
> + set_bit(ch->idx, &free_channels);
Shouldn't you wake others _after_ having detached, so they have
a chance of becoming the owner of the now unused channel?
Also I think you need smp_wmb() between the writing of ch->cpu
and set_bit() - while x86's set_bit() currently implies a barrier(),
this isn't so by definition. Or at least you should add a comment
to explain why no barrier is currently needed.
Jan
^ permalink raw reply [flat|nested] 11+ messages in thread
Thread overview: 11+ messages
2014-03-05 15:43 [RFC v5 0/5] HPET fix interrupt logic Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 1/5] x86/hpet: Pre cleanup Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts Andrew Cooper
2014-03-06 14:11 ` Tim Deegan
2014-03-06 14:33 ` Jan Beulich
2014-03-06 14:40 ` Andrew Cooper
2014-03-06 15:38 ` Jan Beulich
2014-03-06 16:08 ` Jan Beulich
2014-03-05 15:43 ` [PATCH v5 3/5] x86/hpet: Post cleanup Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 4/5] x86/hpet: Debug and verbose hpet logging Andrew Cooper
2014-03-05 15:43 ` [PATCH v5 5/5] x86/hpet: debug keyhandlers Andrew Cooper