public inbox for linux-kernel@vger.kernel.org
* plugging in 2.4. Does it work?
@ 2001-02-20 22:41 Peter T. Breuer
  2001-02-20 22:54 ` Jens Axboe
  2001-02-20 22:58 ` Jens Axboe
  0 siblings, 2 replies; 11+ messages in thread
From: Peter T. Breuer @ 2001-02-20 22:41 UTC (permalink / raw)
  To: linux kernel

More like "how does one get it to work".

Does anyone have a simple recipe for doing plugging right in 2.4?
I'm doing something wrong.

When I disable plugging on my block driver (by registering a no-op
plugging function), the driver works fine.  In particular my end_request
code works fine - it does an "if end_that_request_first, return;
else end_that_request_last" on the request to be killed. Here it is:

 int my_end_request(struct request *req) {
   unsigned long flags; int dequeue = 0;
   spin_lock_irqsave(&io_request_lock, flags);
   if (!req->errors) {
     /* retire one bh per call until none remain */
     while (req->nr_sectors > 0) {
       printk( KERN_DEBUG "running end_first on req with %lu sectors\n",
              req->nr_sectors);
       if (!end_that_request_first (req, !req->errors, DEVICE_NAME))
         break;
     }
   }
   printk( KERN_DEBUG "running end_first on req with %lu sectors\n",
            req->nr_sectors);
   if (!end_that_request_first (req, !req->errors, DEVICE_NAME)) {
     printk( KERN_DEBUG "running end_last on req with %lu sectors\n",
              req->nr_sectors);
     end_that_request_last(req);
     dequeue = 1;
   }
   spin_unlock_irqrestore(&io_request_lock, flags);
   return dequeue;
 }

When I allow the kernel to use its default plugging function and
enable a read-ahead of 20, I see read requests being aggregated
ten-in-one with 1K blocks, but the userspace request hangs, or
the calling process dies! Or worse. Sometimes I get some
recoverable logs ... and they show nothing wrong:

Feb 20 10:46:42 barney kernel: running end_first on req with 20 sectors
Feb 20 10:46:42 barney kernel: running end_first on req with 18 sectors
Feb 20 10:46:42 barney kernel: running end_first on req with 16 sectors
Feb 20 10:46:42 barney kernel: running end_first on req with 14 sectors
Feb 20 10:46:42 barney kernel: running end_first on req with 12 sectors
Feb 20 10:46:42 barney kernel: running end_first on req with 10 sectors
Feb 20 10:46:42 barney kernel: running end_first on req with 8 sectors
Feb 20 10:46:42 barney kernel: running end_first on req with 6 sectors
Feb 20 10:46:42 barney kernel: running end_first on req with 4 sectors
Feb 20 10:46:42 barney kernel: running end_first on req with 2 sectors
Feb 20 10:46:42 barney kernel: running end_first on req with 2 sectors
Feb 20 10:46:42 barney kernel: running end_last on req with 2 sectors
Feb 20 10:52:47 barney kernel: running end_first on req with 2 sectors
Feb 20 10:52:47 barney kernel: running end_first on req with 2 sectors
Feb 20 10:52:47 barney kernel: running end_last on req with 2 sectors

But death follows sooner rather than later.

I've discovered that:

1) setting read-ahead to 0 disables request aggregation by some means
I'm not aware of, and everything goes hunky-dory.

2) setting read-ahead to 4 or 8 seems to be safe. I see 4K requests
being formed and treated OK.

3) disabling plugging stops request aggregation and makes everything
safe.

Any clues? Is the trick just "powers of 2"? How is one supposed to
handle plugging? Where is the canonical example? I can't see any driver
that does it.

Peter


^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: plugging in 2.4. Does it work?
@ 2001-02-21 15:36 Peter T. Breuer
  2001-02-21 17:27 ` Jens Axboe
  0 siblings, 1 reply; 11+ messages in thread
From: Peter T. Breuer @ 2001-02-21 15:36 UTC (permalink / raw)
  To: linux kernel


"Jens Axboe wrote:"
> It will still cluster, the code above checks if the next bh is
> contiguous -- if it isn't, then check if we can grow another segment.
> So you may be lucky that some buffer_heads in the chain are indeed
> contiguous, that's what the segment count is for. This is exactly
> the same in 2.4.

OK ... that fixed it. It turned out that I wasn't walking the request's
bh chain on reads of clustered requests (though I was doing so on writes).

Reads now work fine, but writes show signs of having some problem. I'll
beat on that later.

I'm particularly concerned about the error behaviour. How should I set
up the end_request code in the case when the request is to be errored?
Recall that my end_request code is presently like this:

     io_spin_lock
     while (end_that_request_first(req, !req->errors, DEVICE_NAME))
       ;
     // one more time for luck
     if (!end_that_request_first(req, !req->errors, DEVICE_NAME))
        end_that_request_last(req);
     io_spin_unlock

and I get the impression from other driver code snippets that a single
end_that_request_first call is enough, but looking at the implementation
it can't be. It looks from ll_rw_blk as though I should walk the bh chain
just the same in the case of an error, no?

Peter


end of thread, other threads:[~2001-02-21 17:55 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2001-02-20 22:41 plugging in 2.4. Does it work? Peter T. Breuer
2001-02-20 22:54 ` Jens Axboe
2001-02-20 23:27   ` Peter T. Breuer
2001-02-20 23:37     ` Jens Axboe
2001-02-20 23:48       ` Peter T. Breuer
2001-02-20 23:52         ` Jens Axboe
2001-02-20 22:58 ` Jens Axboe
2001-02-20 23:32   ` Peter T. Breuer
  -- strict thread matches above, loose matches on Subject: below --
2001-02-21 15:36 Peter T. Breuer
2001-02-21 17:27 ` Jens Axboe
2001-02-21 17:54   ` Peter T. Breuer

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox