qemu-devel.nongnu.org archive mirror
From: Gleb Natapov <gleb@qumranet.com>
To: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] Re: [PATCH 2/3] Fix time drift problem under high load when PIT is in use.
Date: Sun, 29 Jun 2008 18:52:35 +0300	[thread overview]
Message-ID: <20080629155235.GC12972@minantech.com> (raw)
In-Reply-To: <48679EEB.7020500@web.de>

On Sun, Jun 29, 2008 at 04:40:43PM +0200, Jan Kiszka wrote:
> Gleb Natapov wrote:
> > Count the number of interrupts that was lost due to interrupt coalescing
> > and re-inject them back when possible. This fixes time drift problem when
> > pit is used as a time source.
> > 
> > Signed-off-by: Gleb Natapov <gleb@qumranet.com>
> > ---
> > 
> >  hw/i8254.c |   20 +++++++++++++++++++-
> >  1 files changed, 19 insertions(+), 1 deletions(-)
> > 
> > diff --git a/hw/i8254.c b/hw/i8254.c
> > index 4813b03..c4f0f46 100644
> > --- a/hw/i8254.c
> > +++ b/hw/i8254.c
> > @@ -61,6 +61,8 @@ static PITState pit_state;
> >  
> >  static void pit_irq_timer_update(PITChannelState *s, int64_t current_time);
> >  
> > +static uint32_t pit_irq_coalesced;
> > +
> >  static int pit_get_count(PITChannelState *s)
> >  {
> >      uint64_t d;
> > @@ -369,12 +371,28 @@ static void pit_irq_timer_update(PITChannelState *s, int64_t current_time)
> >          return;
> >      expire_time = pit_get_next_transition_time(s, current_time);
> >      irq_level = pit_get_out1(s, current_time);
> > -    qemu_set_irq(s->irq, irq_level);
> > +    if(irq_level) {
> > +        if(!qemu_irq_raise(s->irq))
> > +            pit_irq_coalesced++;
> > +    } else {
> > +        qemu_irq_lower(s->irq);
> > +        if(pit_irq_coalesced > 0) {
> > +            if(qemu_irq_raise(s->irq))
> > +                pit_irq_coalesced--;
> > +            qemu_irq_lower(s->irq);
> > +        }
> > +    }
> 
> That's graspable for my poor brain: reinject one coalesced IRQ right
> after the falling edge of an in-time delivery...
This works because I speed up the timer that calls this function. I
could create a separate timer just for injecting the lost interrupts
instead of doing it here.

> 
> > +
> >  #ifdef DEBUG_PIT
> >      printf("irq_level=%d next_delay=%f\n",
> >             irq_level,
> >             (double)(expire_time - current_time) / ticks_per_sec);
> >  #endif
> > +    if(pit_irq_coalesced && expire_time != -1) {
> > +        uint32_t div = ((pit_irq_coalesced >> 10) & 0x7f) + 2;
> > +        expire_time -= ((expire_time - current_time) / div);
> > +    }
> > +
> 
> ... but could you comment on this algorithm? I think I got what it does:
> splitting up the next regular period in short intervals. But there are a
> bit too many magic numbers involved. Where do they come from? What
> happens if pit_irq_coalesced becomes large (or: how large can it become
> without risking an IRQ storm on the guest)? A few comments would help, I
> guess.
> 

The numbers come from empirical data :) The algorithm divides the next
time interval by a value that depends on the number of coalesced
interrupts. The divider can be any number between 2 and 129. The formula is:
div = (pit_irq_coalesced / A) % B + 2
with A = 1024 and B = 128 in the patch. I chose A and B as powers of two
so that a shift and a mask can be used instead of division and modulo.

--
			Gleb.


Thread overview: 28+ messages
2008-06-29 14:02 [Qemu-devel] [PATCH 0/3] Fix guest time drift under heavy load Gleb Natapov
2008-06-29 14:02 ` [Qemu-devel] [PATCH 1/3] Change qemu_set_irq() to return status information Gleb Natapov
2008-06-29 14:14   ` Avi Kivity
2008-06-29 14:18     ` Gleb Natapov
2008-06-29 14:53       ` Paul Brook
2008-06-29 15:37         ` Gleb Natapov
2008-06-29 14:38   ` Paul Brook
2008-06-29 15:40     ` Gleb Natapov
2008-06-29 18:11       ` Paul Brook
2008-06-29 19:44         ` Gleb Natapov
2008-06-29 20:34           ` Paul Brook
2008-06-29 20:49             ` Gleb Natapov
2008-06-30 13:26               ` Gleb Natapov
2008-06-30 14:00                 ` Paul Brook
2008-06-30 14:28                   ` Gleb Natapov
2008-06-30 14:35                     ` Paul Brook
2008-06-29 20:58             ` [Qemu-devel] " Jan Kiszka
2008-06-29 21:16               ` Paul Brook
2008-06-29 21:42                 ` Dor Laor
2008-06-29 21:47                 ` Jan Kiszka
2008-06-29 21:54                   ` Paul Brook
2008-06-30 13:18               ` Gleb Natapov
2008-06-29 14:02 ` [Qemu-devel] [PATCH 2/3] Fix time drift problem under high load when PIT is in use Gleb Natapov
2008-06-29 14:40   ` [Qemu-devel] " Jan Kiszka
2008-06-29 15:52     ` Gleb Natapov [this message]
2008-06-29 14:02 ` [Qemu-devel] [PATCH 3/3] Fix time drift problem under high load when RTC " Gleb Natapov
2008-06-29 14:37 ` [Qemu-devel] Re: [PATCH 0/3] Fix guest time drift under heavy load Jan Kiszka
     [not found] <20080629135455.5447.90849.stgit@gleb-debian.qumranet.com.qumranet.com>
     [not found] ` <20080629135917.5447.7163.stgit@gleb-debian.qumranet.com.qumranet.com>
2008-06-29 20:56   ` [Qemu-devel] Re: [PATCH 2/3] Fix time drift problem under high load when PIT is in use Dor Laor
