public inbox for linux-kernel@vger.kernel.org
From: Pierre Ossman <drzeus-list@drzeus.cx>
To: Tejun Heo <htejun@gmail.com>
Cc: Jens Axboe <axboe@suse.de>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: IOMMU and scatterlist limits
Date: Tue, 20 Dec 2005 12:36:43 +0100	[thread overview]
Message-ID: <43A7ECCB.2060108@drzeus.cx> (raw)
In-Reply-To: <43A7E692.90103@gmail.com>

Tejun Heo wrote:
> Pierre Ossman wrote:
>> Revisiting a dear old thread. :)
>>
>> After some initial tests, some more questions popped up. See below.
>>
>> Jens Axboe wrote:
>>
>>> On Thu, Nov 17 2005, Pierre Ossman wrote:
>>>  
>>>
>>>> Since there is no guarantee this will be mapped down to one segment
>>>> (that the hardware can accept), is it expected that the driver
>>>> iterates
>>>> over the entire list or can I mark only the first segment as completed
>>>> and wait for the request to be reissued? (this is a MMC driver, which
>>>> behaves like the block layer)
>>>>    
>>>
>>> Ah MMC, that explains a few things :-)
>>>
>>> It's quite legal (and possible) to partially handle a given request;
>>> you are not obliged to handle a request as a single unit. See how
>>> other block drivers have an end-request handling function, a la:
>>>
>>>  
>>
>>
>> After testing this it seems the block layer never gives me more than
>> max_hw_segs segments. Is it being clever because I'm compiling for a
>> system without an IOMMU?
>>
>> The hardware should (I haven't properly tested this) be able to take
>> new DMA addresses during a transfer; in essence, scatter-gather with
>> some CPU support. Since it avoids MMC overhead, this should give a nice
>> performance boost. But it relies on the block layer giving me more than
>> one segment. Do I need to lie in max_hw_segs to achieve this?
>>
>
> Hi, Pierre.
>
> max_phys_segments: the maximum number of segments in a request
>            *before* DMA mapping
>
> max_hw_segments: the maximum number of segments in a request
>          *after* DMA mapping (ie. after IOMMU merging)
>
> Those maximum numbers are for the block layer.  The block layer must
> not exceed the above limits when it passes a request downward.  As long
> as all entries in the sg are processed, the block layer doesn't care
> whether sg iteration is performed by the driver or by the hardware.
>
> So, if you're going to perform sg by iterating in the driver, the
> numbers to report for max_phys_segments and max_hw_segments are
> entirely up to how many entries the driver can handle.
>
> Just report some nice number (64 or 128?) for both.  Don't forget that
> the number of sg entries can be decreased after DMA-mapping on
> machines with IOMMU.
>
> IOW, the part which performs sg iteration gets to determine the above
> limits.  In your case, the driver is responsible for both iterations
> (pre and post DMA mapping), so all the limits are up to the driver.
>
>

I'm still a bit confused about why the block layer needs to know the
maximum number of hw segments. Different hardware might be connected to
different IOMMUs, so only the driver will know how much the number can
be reduced. So the block layer should only care about not going above
max_phys_segments, since that's what the driver has room for.

What is the scenario that requires both?

Rgds
Pierre


Thread overview: 13+ messages
2005-11-17  8:34 IOMMU and scatterlist limits Pierre Ossman
2005-11-17  8:54 ` Jens Axboe
2005-11-17  9:02   ` Pierre Ossman
2005-11-17  9:13     ` Jens Axboe
2005-11-17  9:27       ` Pierre Ossman
2005-11-17  9:38         ` Jens Axboe
2005-11-17  9:49           ` Pierre Ossman
2005-11-17 12:02             ` Jens Axboe
2005-12-18 22:41           ` Pierre Ossman
2005-12-20 11:10             ` Tejun Heo
2005-12-20 11:36               ` Pierre Ossman [this message]
2005-12-20 12:04                 ` Tejun Heo
2005-12-20 12:28                   ` Pierre Ossman
