public inbox for linux-kernel@vger.kernel.org
From: Jens Axboe <axboe@suse.de>
To: Nick Piggin <piggin@cyberone.com.au>
Cc: Miquel van Smoorenburg <miquels@cistron.nl>,
	Andrew Morton <akpm@osdl.org>,
	linux-lvm@sistina.com, linux-kernel@vger.kernel.org,
	thornber@redhat.com
Subject: Re: IO scheduler, queue depth, nr_requests
Date: Thu, 19 Feb 2004 11:21:08 +0100	[thread overview]
Message-ID: <20040219102108.GK27190@suse.de> (raw)
In-Reply-To: <403424A4.3090007@cyberone.com.au>

On Thu, Feb 19 2004, Nick Piggin wrote:
> 
> 
> Miquel van Smoorenburg wrote:
> 
> >
> >No, I'm actually referring to a struct request. I'm logging this in the
> >SCSI layer, in scsi_request_fn(), just after elv_next_request(). I have
> >in fact logged all the bios submitted to __make_request, and the output
> >of the elevator from elv_next_request(). The bios are submitted
> >sequentially; the resulting requests aren't. But this is because
> >nr_requests is 128, while the 3ware device has a queue of 254 entries
> >(no tagging though). Upping nr_requests to 512 makes this go away.
> >
> >That shouldn't be necessary, though. I only see this with LVM over
> >3ware RAID 5, not on the 3ware RAID 5 array directly (/dev/sda1). And
> >it gets less troublesome with a lot of debugging (unless I set
> >nr_requests lower again), which points to a timing issue.
> >
> 
> So the problem you are seeing is due to "unlucky" timing between
> two processes submitting IO. And the very efficient mechanisms
> (merging, sorting) we have to improve situations exactly like this
> are effectively disabled. And to make it worse, it appears that your
> controller shits itself on this trivially simple pattern.
> 
> Your hack makes a baby step in the direction of per *process*
> request limits, which I happen to be an advocate of. As it stands
> though, I don't like it.

I'm very much an advocate of per-process request limits as well. It would
be trivial to add... Miquel's patch is horrible, but I appreciate it being
posted as a cry for help.

-- 
Jens Axboe


Thread overview: 29+ messages
     [not found] <20040216131609.GA21974@cistron.nl>
     [not found] ` <20040216133047.GA9330@suse.de>
     [not found]   ` <20040217145716.GE30438@traveler.cistron.net>
2004-02-18 23:52     ` IO scheduler, queue depth, nr_requests Miquel van Smoorenburg
2004-02-19  1:24       ` Nick Piggin
2004-02-19  1:52         ` Miquel van Smoorenburg
2004-02-19  2:01           ` Nick Piggin
2004-02-19  1:26       ` Andrew Morton
2004-02-19  2:11         ` Miquel van Smoorenburg
2004-02-19  2:26           ` Andrew Morton
2004-02-19 10:15             ` Miquel van Smoorenburg
2004-02-19 10:19               ` Jens Axboe
2004-02-19 20:59                 ` Miquel van Smoorenburg
2004-02-19 22:52                   ` Nick Piggin
2004-02-19 23:53                     ` Miquel van Smoorenburg
2004-02-20  0:15                       ` Nick Piggin
2004-02-20  1:12                       ` [PATCH] per process request limits (was Re: IO scheduler, queue depth, nr_requests) Nick Piggin
2004-02-20  1:26                         ` Andrew Morton
2004-02-20  1:40                           ` Nick Piggin
2004-02-20  2:32                             ` Andrew Morton
2004-02-20 14:40                               ` [PATCH] bdi_congestion_funp (was: Re: [PATCH] per process request limits (was Re: IO scheduler, queue depth, nr_requests)) Miquel van Smoorenburg
2004-02-20 14:57                                 ` Jens Axboe
2004-02-20 14:59                                 ` Joe Thornber
2004-02-20 15:00                                   ` Jens Axboe
2004-02-22 14:02                                     ` Miquel van Smoorenburg
2004-02-22 19:55                                       ` Andrew Morton
2004-02-20  1:45                         ` [PATCH] per process request limits (was Re: IO scheduler, queue depth, nr_requests) Nick Piggin
2004-02-19  2:51           ` IO scheduler, queue depth, nr_requests Nick Piggin
2004-02-19 10:21             ` Jens Axboe [this message]
     [not found] <1qJVx-75K-15@gated-at.bofh.it>
     [not found] ` <1qJVx-75K-17@gated-at.bofh.it>
     [not found]   ` <1qJVw-75K-11@gated-at.bofh.it>
     [not found]     ` <1qLb8-6m-27@gated-at.bofh.it>
     [not found]       ` <1qLXl-XV-17@gated-at.bofh.it>
     [not found]         ` <1qMgF-1dA-5@gated-at.bofh.it>
     [not found]           ` <1qTs3-7A2-51@gated-at.bofh.it>
     [not found]             ` <1qTBB-7Hh-7@gated-at.bofh.it>
     [not found]               ` <1r3AS-1hW-5@gated-at.bofh.it>
     [not found]                 ` <1r5jD-2RQ-31@gated-at.bofh.it>
     [not found]                   ` <1r6fH-3L8-11@gated-at.bofh.it>
     [not found]                     ` <1r6S4-6cv-1@gated-at.bofh.it>
2004-02-25 20:17                       ` Bill Davidsen
2004-02-25 21:39                         ` Miquel van Smoorenburg
2004-02-26  0:39                         ` Nick Piggin
