From: Andrew Morton <akpm@osdl.org>
To: Peter Osterlund <petero2@telia.com>
Cc: axboe@suse.de, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] Speed up the cdrw packet writing driver
Date: Tue, 24 Aug 2004 14:47:07 -0700
Message-ID: <20040824144707.100e0cfd.akpm@osdl.org>
In-Reply-To: <m3hdqsckoo.fsf@telia.com>

Peter Osterlund <petero2@telia.com> wrote:
>
> Jens Axboe <axboe@suse.de> writes:
> 
> > On Mon, Aug 23 2004, Peter Osterlund wrote:
> > > Jens Axboe <axboe@suse.de> writes:
> > > 
> > > > On Sat, Aug 14 2004, Peter Osterlund wrote:
> > > > > 
> > > > > This patch replaces the pd->bio_queue linked list with an rbtree.  The
> > > > > list can get very long (>200000 entries on a 1GB machine), so keeping
> > > > > it sorted with a naive algorithm is far too expensive.
> > > > 
> > > > It looks like you are assuming that bio->bi_sector is unique, which isn't
> > > > necessarily true. In that respect, the list -> rbtree conversion isn't
> > > > trivial (or at least it requires extra code to handle this).
> > > 
> > > I don't think that is assumed anywhere.
> > > 
> > > [...]
> > 
> > You are right, the code looks fine indeed. The bigger problem is
> > probably that a faster data structure is needed at all; having hundreds
> > of thousands of bios pending for a packet writing device is not nice
> > at all.
> 
> Why is it not nice? If the VM has decided to create 400MB of dirty
> data on a DVD+RW packet device, I don't see a problem with submitting
> all of those bios to the packet device at the same time.
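
For reference, the list -> rbtree conversion being discussed amounts to
something like the sketch below.  This is a minimal illustration written
against the kernel's generic rbtree API in <linux/rbtree.h>; the
pkt_rb_node layout and the function name are assumptions for the
example, not the exact pktcdvd code.  Note how a tie on bi_sector simply
descends to the right, so duplicate sector numbers need no special
handling:

#include <linux/bio.h>
#include <linux/rbtree.h>

struct pkt_rb_node {
        struct rb_node rb_node;         /* linkage into the tree */
        struct bio *bio;                /* the queued bio itself */
};

/*
 * Insert sorted by bio->bi_sector.  Equal keys are allowed: on a tie
 * we descend to the right, so bios with the same sector simply end up
 * adjacent in the tree in insertion order.
 */
static void pkt_rbtree_insert(struct rb_root *root, struct pkt_rb_node *node)
{
        struct rb_node **p = &root->rb_node;
        struct rb_node *parent = NULL;
        sector_t s = node->bio->bi_sector;

        while (*p) {
                struct pkt_rb_node *tmp;

                parent = *p;
                tmp = rb_entry(parent, struct pkt_rb_node, rb_node);
                if (s < tmp->bio->bi_sector)
                        p = &(*p)->rb_left;
                else
                        p = &(*p)->rb_right;
        }
        rb_link_node(&node->rb_node, parent, p);
        rb_insert_color(&node->rb_node, root);
}

Finding the first bio at or after a given sector is then O(log n)
instead of a linear walk over 200000 list entries.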

We also have a limit on the number of in-flight requests:
/sys/block/sda/queue/nr_requests.  It defaults to 128.
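
The limit is an ordinary sysfs tunable, so it can be inspected and
raised at runtime (sda is just the example device from the path above;
writing the file needs root):

  $ cat /sys/block/sda/queue/nr_requests
  128
  # echo 512 > /sys/block/sda/queue/nr_requests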

Are you saying that your requests are so huge that each one has 1000
BIOs?  That would be odd for an IDE interface.
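
(Back-of-the-envelope from the numbers in this thread: 200000 pending
bios spread over at most 128 requests would be roughly 1500 bios per
request, and at 400MB / 200000 = 2KB per bio that is about 3MB per
request.)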

> The situation happened when I dumped >1GB of data to a DVD+RW disc on
> a 1GB RAM machine. For some reason, the number of pending bios never
> got much larger than 200000 (i.e. 400MB) even though it could probably
> have gone to 800MB without swapping. The machine didn't feel
> unresponsive during this test.
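
(One plausible explanation for the 400MB ceiling, assuming stock 2.6 VM
defaults: vm.dirty_ratio defaults to 40%, and 40% of the machine's 1GB
is exactly 400MB, so beyond that point the VM starts forcing writeback
instead of allowing more dirty memory.)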



Thread overview: 19+ messages
2004-08-14 19:13 [PATCH] Speed up the cdrw packet writing driver Peter Osterlund
2004-08-23 11:43 ` Jens Axboe
2004-08-23 16:07   ` Peter Osterlund
2004-08-24 20:29     ` Jens Axboe
2004-08-24 21:04       ` Peter Osterlund
2004-08-24 21:47         ` Andrew Morton [this message]
2004-08-24 21:57           ` Lee Revell
2004-08-29 13:52             ` Alan Cox
2004-08-24 22:03           ` Peter Osterlund
2004-08-24 22:56             ` Andrew Morton
2004-08-25  5:38               ` Peter Osterlund
2004-08-25  6:50         ` Jens Axboe
2004-08-28  9:59           ` Peter Osterlund
2004-08-28 13:07             ` Jens Axboe
2004-08-28 18:42               ` Peter Osterlund
2004-08-28 19:45                 ` Andrew Morton
2004-08-28 20:57                   ` Peter Osterlund
2004-08-28 21:23                     ` Andrew Morton
2004-08-29 12:17                       ` Peter Osterlund
