From: Marcelo Tosatti <marcelo.tosatti@cyclades.com>
To: "M. Edward Borasky" <znmeb@cesmail.net>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Jens Axboe <axboe@suse.de>
Subject: Re: Negative "ios_in_flight" in the 2.4 kernel
Date: Wed, 22 Dec 2004 13:58:16 -0200
Message-ID: <20041222155816.GF3088@logos.cnet>
In-Reply-To: <1103728782.26340.34.camel@DreamGate>

On Wed, Dec 22, 2004 at 07:19:42AM -0800, M. Edward Borasky wrote:
> On Wed, 2004-12-22 at 12:16 +0100, Jens Axboe wrote:
> >  
> > > Question: wouldn't a simple refusal to decrement ios_in_flight in
> > > "down_ios" if it's zero fix this, or am I missing something?
> > 
> > That would paper over the real bug, but it will work for you.
> What is the "real bug", then? What will "work for me" is accurate disk
> usage tick counts. The intent of these statistics is something known as
> Operational Analysis of Queueing Networks. 
> 
> The "requirement" is that the operations on each device be accurately
> counted, and the "wall clock" time spent *waiting* for requests and the
> time spent *servicing* requests be accurately accumulated for each
> device. The sector count is a bonus. 
> 
> From these raw counters, one can, and iostat does, compute throughput,
> utilization, average service time, average wait time and average queue
> length. An excellent and highly readable reference for the math involved
> can be found at
> 
> http://www.cs.washington.edu/homes/lazowska/qsp/Images/Chap_03.pdf
> 
> That is the intent behind these counters, and what will "work for me" is
> a kernel that captures the raw counters correctly. If forcing
> ios_in_flight to be non-negative is done at the expense of losing or
> gaining ticks in the wait or service time accumulators, then it will not
> work for me.
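
For reference, the arithmetic iostat applies to those raw counters is just the
standard operational laws. A minimal sketch of the computation, in user-space C
with illustrative field names rather than the real /proc layout:

#include <stdio.h>

/* Raw per-device counters as described above; names are illustrative. */
struct raw_io_counters {
        unsigned long ios;            /* completed I/O operations           */
        unsigned long sectors;        /* sectors transferred                */
        unsigned long wait_ticks;     /* ticks accumulated waiting in queue */
        unsigned long service_ticks;  /* ticks the device spent busy        */
};

/* Derive the usual metrics over an observation interval of T ticks. */
static void derive_metrics(const struct raw_io_counters *c, double T)
{
        double throughput  = c->ios / T;                /* X = C / T */
        double utilization = c->service_ticks / T;      /* U = B / T */
        double avg_service = c->ios ? (double)c->service_ticks / c->ios : 0.0; /* S = B / C */
        double avg_wait    = c->ios ? (double)c->wait_ticks / c->ios : 0.0;    /* wait per op */
        double avg_queue   = c->wait_ticks / T;          /* Little's law: N = X * W */

        printf("X=%.2f U=%.2f S=%.2f W=%.2f N=%.2f\n",
               throughput, utilization, avg_service, avg_wait, avg_queue);
}

int main(void)
{
        struct raw_io_counters c = { 500, 4000, 1500, 300 };
        derive_metrics(&c, 1000.0);   /* counters observed over 1000 ticks */
        return 0;
}

If the wait or service accumulators lose or gain ticks (for instance while
ios_in_flight is negative), every derived number above is off, which is why
simply clamping the counter is not enough by itself.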

Well, something is decrementing the accounting incorrectly (doh); probably the
disk/partition accounting logic goes wrong under some condition. Jens?

/* A request was merged into another: drop the in-flight count on the
 * partition and/or whole-disk entries it was accounted against. */
void req_merged_io(struct request *req)
{
        struct hd_struct *hd1, *hd2;

        locate_hd_struct(req, &hd1, &hd2);
        if (hd1)
                down_ios(hd1);
        if (hd2)
                down_ios(hd2);
}

/* A request completed: close out the I/O accounting on the same
 * partition/whole-disk pair. */
void req_finished_io(struct request *req)
{
        struct hd_struct *hd1, *hd2;

        locate_hd_struct(req, &hd1, &hd2);
        if (hd1)
                account_io_end(hd1, req);
        if (hd2)
                account_io_end(hd2, req);
}

We could eliminate that possibility if you ran your tests with a single,
non-partitioned disk, but that's just a guess.
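
In case it helps narrow it down, a debug guard in the decrement path would at
least log which caller drives the counter negative. This is only a sketch; it
assumes down_ios is the helper that decrements hd->ios_in_flight, the real 2.4
body may differ, and it is a diagnostic, not a fix:

/*
 * Hypothetical guard: report the caller that would underflow
 * ios_in_flight so the unbalanced path can be identified.
 */
static inline void down_ios(struct hd_struct *hd)
{
        /* ... existing per-disk stats bookkeeping stays as-is ... */
        if (hd->ios_in_flight == 0)
                printk(KERN_WARNING
                       "down_ios: ios_in_flight underflow, caller %p\n",
                       __builtin_return_address(0));
        --hd->ios_in_flight;
}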

Jens certainly has more of a clue about this than I do.


Thread overview: 6 messages
2004-12-22  5:05 Negative "ios_in_flight" in the 2.4 kernel M. Edward Borasky
2004-12-22 11:16 ` Jens Axboe
2004-12-22 15:19   ` M. Edward Borasky
2004-12-22 15:58     ` Marcelo Tosatti [this message]
2004-12-23  8:08       ` Jens Axboe
2004-12-23 15:30         ` M. Edward Borasky
