public inbox for linux-kernel@vger.kernel.org
From: "Petr Vandrovec" <VANDROVE@vc.cvut.cz>
To: Jens Axboe <axboe@suse.de>
Cc: Pavel Machek <pavel@ucw.cz>, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] make nbd working in 2.5.x
Date: Wed, 5 Mar 2003 12:37:21 +0100	[thread overview]
Message-ID: <11E385963D76@vcnet.vc.cvut.cz> (raw)

On  5 Mar 03 at 10:21, Jens Axboe wrote:
> On Mon, Mar 03 2003, Petr Vandrovec wrote:
> > On  3 Mar 03 at 19:39, Pavel Machek wrote:
> > 
> > I think that at the beginning of the 2.5.x series there was some thinking
> > about removing end_that_request* completely from the API. As that never
> > happened, and __end_that_request_first()/end_that_request_last() are
> > definitely of better quality than the open-coded nbd loop (for one, they
> > do not ignore req->waiting...), I prefer using end_that_request* over
> > open-coding the bio traversal.
> > 
> > If you want, just replace blk_put_request() with __blk_put_request()
> > instead of the first change. But I personally will not trust such code,
> > as the next time something in the bio layer changes, nbd will miss that
> > change again.
> 
> I agree with the change; there's no reason for nbd to implement its own
> end_request handling. I was the one who did the bio conversion, doing XXX
> drivers at one time...
> 
> A small correction to your patch: you need not hold queue_lock when
> calling end_that_request_first() (which is the costly part of ending a
> request), so
> 
>     if (!end_that_request_first(req, uptodate, req->nr_sectors)) {
>         unsigned long flags;
> 
>         spin_lock_irqsave(q->queue_lock, flags);
>         end_that_request_last(req);
>         spin_unlock_irqrestore(q->queue_lock, flags);
>     }
> 
> would be enough. That depends on the driver having pulled the request
> off the list in the first place, which nbd has done.

But it also finishes the whole request at once, so probably with:

if (!end_that_request_first(...)) {
	...
} else {
	BUG();
}

I had a patch for 2.5.3 which finished the request partially after each
chunk (usually 1500 bytes) received from the server, but it made no
difference in performance at that time (probably because of the way the
nbd server works and the speed of the network between server and client).
I'll try it again now...
                                                    Petr Vandrovec
                                                    vandrove@vc.cvut.cz
                                                    


Thread overview: 6+ messages
2003-03-05 11:37 Petr Vandrovec [this message]
2003-03-05 11:38 ` [PATCH] make nbd working in 2.5.x Jens Axboe
2003-03-03 18:52 Petr Vandrovec
2003-03-05  9:21 ` Jens Axboe
2003-03-03 17:29 Petr Vandrovec
2003-03-03 18:39 ` Pavel Machek
