From: Mike Christie <michaelc@cs.wisc.edu>
To: Jens Axboe <axboe@suse.de>
Cc: Jeff Garzik <jgarzik@pobox.com>,
dougg@torque.net, linux-scsi@vger.kernel.org
Subject: Re: [PATCH RFC] yet more struct scsi_lun
Date: Fri, 04 Nov 2005 11:27:45 -0600
Message-ID: <436B9A11.3040704@cs.wisc.edu>
In-Reply-To: <20051104073718.GY26049@suse.de>
Jens Axboe wrote:
> On Thu, Nov 03 2005, Mike Christie wrote:
>
>>Jens Axboe wrote:
>>
>>>On the SCSI side, I would suggest just making shost->max_sectors set
>>>q->max_hw_sectors and leave q->max_sectors to some generic kernel-wide
>>>block layer define (of course making sure that ->max_sectors <=
>>
>>If the value for this block layer define was around 16,000 sectors,
>>would that be ok? The reason I ask is because when I get .....
>
>
> Nope :)
Didn't think so :)
>
>
>>>->max_hw_sectors). That's the easy part.
>>>
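To make sure I am reading this right, here is a minimal sketch of that
split at queue setup time; scsi_init_queue_limits and BLK_DEF_MAX_SECTORS
are made-up names for illustration, not existing code:

/*
 * Sketch only: the HBA limit becomes the hard ceiling and a
 * kernel-wide default (BLK_DEF_MAX_SECTORS is a made-up name here)
 * becomes the soft cap.
 */
#define BLK_DEF_MAX_SECTORS	1024

static void scsi_init_queue_limits(struct request_queue *q,
				   struct Scsi_Host *shost)
{
	q->max_hw_sectors = shost->max_sectors;

	if (shost->max_sectors > BLK_DEF_MAX_SECTORS)
		q->max_sectors = BLK_DEF_MAX_SECTORS;
	else
		q->max_sectors = shost->max_sectors;
}
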
>>>The bio_add_page() stuff is a little trickier, since it wants to know if
>>>this is fs or 'generic' io. For fs io, we would like to cap the building
>>>of the bio to ->max_sectors, but for eg SG_IO issued io it should go as
>>>high as ->max_hw_sectors. Perhaps the easiest is just to have
>>>bio_fs_add_page() and bio_pc_add_page(), each just passing in the max
>>>value as an integer to bio_add_page(). But it's not exactly pretty.
>>>
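Something like the following is how I picture those two wrappers; a
__bio_add_page() variant that takes the limit as an argument is an
assumption here, not the current API:

static inline int bio_fs_add_page(struct request_queue *q, struct bio *bio,
				  struct page *page, unsigned int len,
				  unsigned int offset)
{
	/* fs io: cap the bio at the soft limit */
	return __bio_add_page(q, bio, page, len, offset, q->max_sectors);
}

static inline int bio_pc_add_page(struct request_queue *q, struct bio *bio,
				  struct page *page, unsigned int len,
				  unsigned int offset)
{
	/* SG_IO and friends: allow building up to the hard limit */
	return __bio_add_page(q, bio, page, len, offset, q->max_hw_sectors);
}
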
>>>The ll_rw_blk.c merging is easy, since you don't need to do anything
>>>there. It should test against ->max_sectors as it already does, since
>>>this (sadly) is still the primary way we build large ios.
>>>
>>
>>.... here, I am running into a problem. Basically, as you know the
>>largest BIO we can make is 1 MB due to BIO_MAX_PAGES, and for st and sg
>>we need to support commands around 6 MB, so we would have a request
>>with 6 BIOs. To make this monster request I wanted to use the block
>>layer functions and do something like this:
>>
>>+	for (i = 0; i < nsegs; i++) {
>>+		bio = bio_map_pages(q, sg[i].page, sg[i].length,
>>+				    sg[i].offset, gfp);
>>+		if (IS_ERR(bio)) {
>>+			err = PTR_ERR(bio);
>>+			goto free_bios;
>>+		}
>>+		len += sg[i].length;
>>+
>>+		bio->bi_flags &= ~(1 << BIO_SEG_VALID);
>>+		if (rq_data_dir(rq) == WRITE)
>>+			bio->bi_rw |= (1 << BIO_RW);
>>+		blk_queue_bounce(q, &bio);
>>+
>>+		if (i == 0)
>>+			blk_rq_bio_prep(q, rq, bio);
>>
>>	/* hope to carve out the __make_request code that does the below
>>	 * operations and make a function that can be shared */
>>
>>+		else if (!q->back_merge_fn(q, rq, bio)) {
>>+			err = -EINVAL;
>>+			bio_endio(bio, bio->bi_size, 0);
>>+			goto free_bios;
>>+		} else {
>>+			rq->biotail->bi_next = bio;
>>+			rq->biotail = bio;
>>+			rq->hard_nr_sectors += bio_sectors(bio);
>>+			rq->nr_sectors = rq->hard_nr_sectors;
>>
>>........
>>
>>But since q->back_merge_fn() tests against q->max_sectors, it must be a
>>high value so that we can merge in those BIOs. I mean, if q->max_sectors
>>is some reasonable number like only 1024 sectors, q->back_merge_fn will
>>return a failure. Should I instead separate ll_back_merge_fn into two
>>functions, one that checks the sectors and one that checks the segments,
>>or would it be enough for ll_back_merge_fn to test against max_hw_sectors?
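
The check I am running into is roughly this (simplified from
ll_rw_blk.c, not a verbatim copy):

/* Simplified: the back merge is refused purely on the soft
 * q->max_sectors limit, the hard limit never enters the picture. */
static int ll_back_merge_fn(struct request_queue *q, struct request *req,
			    struct bio *bio)
{
	if (req->nr_sectors + bio_sectors(bio) > q->max_sectors) {
		req->flags |= REQ_NOMERGE;
		return 0;
	}

	/* ... segment checks and accounting follow ... */
	return 1;
}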
>
>
> It will take a whole lot of factoring out to make this happen, I fear,
> and the results will not be easy on the eyes.
>
> How about just keeping it simple - add a bio and request flag that
> basically says "don't honor soft max size" and, whenever that is
> set, check against ->max_hw_sectors instead of ->max_sectors? It might
> even be enough to do this for the request alone and just require
> stuffing more bios into the request. But the bio has plenty of flag
> room left, so...
>
OK, this works, thanks.
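
For the record, here is how I read the flag idea, as a minimal sketch
only; REQ_NO_SOFT_LIMIT is a made-up name for illustration, not a
proposal for the final bit:

/* Made-up flag bit, for illustration only. */
#define REQ_NO_SOFT_LIMIT	(1 << 30)

/* The merge paths would pick their limit through something like this
 * instead of reading q->max_sectors directly. */
static inline unsigned int rq_max_sectors(struct request_queue *q,
					   struct request *rq)
{
	if (rq->flags & REQ_NO_SOFT_LIMIT)
		return q->max_hw_sectors;

	return q->max_sectors;
}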
Thread overview: 32+ messages
2005-10-23 4:33 [PATCH RFC] use struct scsi_lun in generic code Jeff Garzik
2005-10-23 4:49 ` [PATCH RFC] more struct scsi_lun Jeff Garzik
2005-10-23 9:47 ` Stefan Richter
2005-10-23 13:14 ` Stefan Richter
2005-10-23 16:49 ` Jeff Garzik
2005-10-24 16:27 ` Luben Tuikov
2005-10-24 20:03 ` Stefan Richter
2005-10-24 20:10 ` Luben Tuikov
2005-10-24 20:28 ` Mark Rustad
2005-10-24 22:27 ` Douglas Gilbert
2005-10-23 5:20 ` [PATCH RFC] use struct scsi_lun in generic code Jeff Garzik
2005-10-23 5:22 ` Douglas Gilbert
2005-10-23 7:01 ` Jeff Garzik
2005-10-24 14:55 ` Luben Tuikov
2005-10-23 7:00 ` [PATCH RFC] yet more struct scsi_lun Jeff Garzik
2005-10-23 10:48 ` Douglas Gilbert
2005-10-23 11:53 ` Arjan van de Ven
2005-10-23 14:27 ` max_sectors [was Re: [PATCH RFC] yet more struct scsi_lun] Douglas Gilbert
2005-10-23 14:42 ` Arjan van de Ven
2005-10-23 16:44 ` [PATCH RFC] yet more struct scsi_lun Jeff Garzik
2005-10-23 16:43 ` Jeff Garzik
2005-10-23 18:53 ` Kai Makisara
2005-10-24 7:59 ` Jens Axboe
2005-10-28 17:24 ` Mike Christie
2005-10-31 10:24 ` Jens Axboe
2005-11-04 2:23 ` Mike Christie
2005-11-04 2:25 ` Mike Christie
2005-11-04 7:37 ` Jens Axboe
2005-11-04 17:27 ` Mike Christie [this message]
2005-10-23 7:16 ` [PATCH RFC] even " Jeff Garzik
2005-10-24 15:27 ` [PATCH RFC] use struct scsi_lun in generic code Patrick Mansfield
2005-10-24 22:40 ` Douglas Gilbert