From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
Tim Deegan <tim@xen.org>
Subject: [Patch v4 1/5] x86/hpet: Pre cleanup
Date: Wed, 13 Nov 2013 17:59:10 +0000 [thread overview]
Message-ID: <1384365554-11017-2-git-send-email-andrew.cooper3@citrix.com> (raw)
In-Reply-To: <1384365554-11017-1-git-send-email-andrew.cooper3@citrix.com>
These are changes which can be pulled out of the subsequent patch, to make it
easier to understand and review.  They are all miscellaneous fixes with
negligible functional change.
* Rename hpet_next_event -> hpet_set_counter and convert it to take an
hpet_event_channel pointer rather than a timer index.
* Rename reprogram_hpet_evt_channel -> hpet_program_time
* Move the point at which HPET_EVT_LEGACY is set up in hpet_broadcast_init().
  It did not need to be where it was.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
---
xen/arch/x86/hpet.c | 31 +++++++++++++++++++------------
1 file changed, 19 insertions(+), 12 deletions(-)
diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index 3a4f7e8..fd44582 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -94,7 +94,12 @@ static inline unsigned long ns2ticks(unsigned long nsec, int shift,
return (unsigned long) tmp;
}
-static int hpet_next_event(unsigned long delta, int timer)
+/*
+ * Program an HPET channel's counter relative to now. 'delta' is specified in
+ * ticks, and should be calculated with ns2ticks(). The channel lock should
+ * be taken and interrupts must be disabled.
+ */
+static int hpet_set_counter(struct hpet_event_channel *ch, unsigned long delta)
{
uint32_t cnt, cmp;
unsigned long flags;
@@ -102,7 +107,7 @@ static int hpet_next_event(unsigned long delta, int timer)
local_irq_save(flags);
cnt = hpet_read32(HPET_COUNTER);
cmp = cnt + delta;
- hpet_write32(cmp, HPET_Tn_CMP(timer));
+ hpet_write32(cmp, HPET_Tn_CMP(ch->idx));
cmp = hpet_read32(HPET_COUNTER);
local_irq_restore(flags);
@@ -110,9 +115,12 @@ static int hpet_next_event(unsigned long delta, int timer)
return ((cmp + 2 - cnt) > delta) ? -ETIME : 0;
}
-static int reprogram_hpet_evt_channel(
- struct hpet_event_channel *ch,
- s_time_t expire, s_time_t now, int force)
+/*
+ * Set the time at which an HPET channel should fire. The channel lock should
+ * be held.
+ */
+static int hpet_program_time(struct hpet_event_channel *ch,
+ s_time_t expire, s_time_t now, int force)
{
int64_t delta;
int ret;
@@ -143,11 +151,11 @@ static int reprogram_hpet_evt_channel(
delta = max_t(int64_t, delta, MIN_DELTA_NS);
delta = ns2ticks(delta, ch->shift, ch->mult);
- ret = hpet_next_event(delta, ch->idx);
+ ret = hpet_set_counter(ch, delta);
while ( ret && force )
{
delta += delta;
- ret = hpet_next_event(delta, ch->idx);
+ ret = hpet_set_counter(ch, delta);
}
return ret;
@@ -209,7 +217,7 @@ again:
spin_lock_irqsave(&ch->lock, flags);
if ( next_event < ch->next_event &&
- reprogram_hpet_evt_channel(ch, next_event, now, 0) )
+ hpet_program_time(ch, next_event, now, 0) )
goto again;
spin_unlock_irqrestore(&ch->lock, flags);
@@ -583,6 +591,8 @@ void __init hpet_broadcast_init(void)
cfg |= HPET_CFG_LEGACY;
n = 1;
+ hpet_events->flags = HPET_EVT_LEGACY;
+
if ( !force_hpet_broadcast )
pv_rtc_handler = handle_rtc_once;
}
@@ -615,9 +625,6 @@ void __init hpet_broadcast_init(void)
hpet_events[i].msi.msi_attrib.maskbit = 1;
hpet_events[i].msi.msi_attrib.pos = MSI_TYPE_HPET;
}
-
- if ( !num_hpets_used )
- hpet_events->flags = HPET_EVT_LEGACY;
}
void hpet_broadcast_resume(void)
@@ -716,7 +723,7 @@ void hpet_broadcast_enter(void)
spin_lock(&ch->lock);
/* reprogram if current cpu expire time is nearer */
if ( per_cpu(timer_deadline, cpu) < ch->next_event )
- reprogram_hpet_evt_channel(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
+ hpet_program_time(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
spin_unlock(&ch->lock);
}
--
1.7.10.4