linux-scsi.vger.kernel.org archive mirror
* [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
@ 2010-09-21 22:22 Malahal Naineni
  2010-09-22 23:06 ` Malahal Naineni
  2010-09-24 13:58 ` Jens Axboe
  0 siblings, 2 replies; 16+ messages in thread
From: Malahal Naineni @ 2010-09-21 22:22 UTC (permalink / raw)
  To: jaxboe, dm-devel, linux-scsi

The bounce_pfn of the request queue on 64-bit systems is set to the
current max_low_pfn. Adding more memory later makes this incorrect.
Memory allocated beyond this boot-time max_low_pfn appears to require
bounce buffers (bounce buffers are not actually allocated, but they
are assumed when calculating segments, which may result in "over max
segments limit" errors).

Signed-off-by: Malahal Naineni <malahal@us.ibm.com>

diff -r 09daf852c1c5 -r c9516154fabc block/blk-settings.c
--- a/block/blk-settings.c	Thu Sep 09 12:10:43 2010 -0700
+++ b/block/blk-settings.c	Mon Sep 13 10:15:24 2010 -0700
@@ -214,16 +214,14 @@ void blk_queue_bounce_limit(struct reque
 	 */
 	if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
 		dma = 1;
-	q->limits.bounce_pfn = max_low_pfn;
 #else
 	if (b_pfn < blk_max_low_pfn)
 		dma = 1;
+#endif
 	q->limits.bounce_pfn = b_pfn;
-#endif
 	if (dma) {
 		init_emergency_isa_pool();
 		q->limits.bounce_gfp = GFP_NOIO | GFP_DMA;
-		q->limits.bounce_pfn = b_pfn;
 	}
 }
 EXPORT_SYMBOL(blk_queue_bounce_limit);
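
For reference, this is roughly how blk_queue_bounce_limit() reads with
the hunk applied (the unchanged lines are reconstructed from the
2.6.36-era source, so treat them as approximate):

void blk_queue_bounce_limit(struct request_queue *q, u64 dma_mask)
{
	unsigned long b_pfn = dma_mask >> PAGE_SHIFT;
	int dma = 0;

	q->limits.bounce_gfp = GFP_NOIO;
#if BITS_PER_LONG == 64
	/*
	 * Assume anything <= 4GB can be handled by an IOMMU.
	 */
	if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
		dma = 1;
#else
	if (b_pfn < blk_max_low_pfn)
		dma = 1;
#endif
	q->limits.bounce_pfn = b_pfn;	/* always the device's DMA limit now */
	if (dma) {
		init_emergency_isa_pool();
		q->limits.bounce_gfp = GFP_NOIO | GFP_DMA;
	}
}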


* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-21 22:22 Malahal Naineni
@ 2010-09-22 23:06 ` Malahal Naineni
  2010-09-24 13:58 ` Jens Axboe
  1 sibling, 0 replies; 16+ messages in thread
From: Malahal Naineni @ 2010-09-22 23:06 UTC (permalink / raw)
  To: jaxboe, dm-devel, linux-scsi

Jens, any comments on this patch?

Thanks, Malahal.

Malahal Naineni [malahal@us.ibm.com] wrote:
> The bounce_pfn of the request queue on 64-bit systems is set to the
> current max_low_pfn. Adding more memory later makes this incorrect.
> Memory allocated beyond this boot-time max_low_pfn appears to require
> bounce buffers (bounce buffers are not actually allocated, but they
> are assumed when calculating segments, which may result in "over max
> segments limit" errors).
> 
> Signed-off-by: Malahal Naineni <malahal@us.ibm.com>
> 
> diff -r 09daf852c1c5 -r c9516154fabc block/blk-settings.c
> --- a/block/blk-settings.c	Thu Sep 09 12:10:43 2010 -0700
> +++ b/block/blk-settings.c	Mon Sep 13 10:15:24 2010 -0700
> @@ -214,16 +214,14 @@ void blk_queue_bounce_limit(struct reque
>  	 */
>  	if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
>  		dma = 1;
> -	q->limits.bounce_pfn = max_low_pfn;
>  #else
>  	if (b_pfn < blk_max_low_pfn)
>  		dma = 1;
> +#endif
>  	q->limits.bounce_pfn = b_pfn;
> -#endif
>  	if (dma) {
>  		init_emergency_isa_pool();
>  		q->limits.bounce_gfp = GFP_NOIO | GFP_DMA;
> -		q->limits.bounce_pfn = b_pfn;
>  	}
>  }
>  EXPORT_SYMBOL(blk_queue_bounce_limit);


* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-21 22:22 Malahal Naineni
  2010-09-22 23:06 ` Malahal Naineni
@ 2010-09-24 13:58 ` Jens Axboe
  2010-09-24 17:05   ` Malahal Naineni
  1 sibling, 1 reply; 16+ messages in thread
From: Jens Axboe @ 2010-09-24 13:58 UTC (permalink / raw)
  To: Malahal Naineni; +Cc: dm-devel@redhat.com, linux-scsi@vger.kernel.org

On 2010-09-22 00:22, Malahal Naineni wrote:
> The bounce_pfn of the request queue on 64-bit systems is set to the
> current max_low_pfn. Adding more memory later makes this incorrect.

Clearly correct.

> Memory allocated beyond this boot-time max_low_pfn appears to require
> bounce buffers (bounce buffers are not actually allocated, but they
> are assumed when calculating segments, which may result in "over max
> segments limit" errors).

But I can't quite convince myself that the change is fully correct. You
don't really explain in your own words what the patch does, just what it
fixes.


-- 
Jens Axboe



* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-24 13:58 ` Jens Axboe
@ 2010-09-24 17:05   ` Malahal Naineni
  2010-09-24 18:28     ` Jens Axboe
  0 siblings, 1 reply; 16+ messages in thread
From: Malahal Naineni @ 2010-09-24 17:05 UTC (permalink / raw)
  To: Jens Axboe; +Cc: dm-devel@redhat.com, linux-scsi@vger.kernel.org

Jens Axboe [jaxboe@fusionio.com] wrote:
> On 2010-09-22 00:22, Malahal Naineni wrote:
> > The bounce_pfn of the request queue on 64-bit systems is set to the
> > current max_low_pfn. Adding more memory later makes this incorrect.
> 
> Clearly correct.
> 
> > Memory allocated beyond this boot-time max_low_pfn appears to require
> > bounce buffers (bounce buffers are not actually allocated, but they
> > are assumed when calculating segments, which may result in "over max
> > segments limit" errors).
> 
> But I can't quite convince myself that the change is fully correct. You
> don't really explain in your own words what the patch does, just what it
> fixes.

OK, the problem is that we get "over max segments limit" errors from
blk_rq_check_limits() after adding memory. The actual bug is in
blk_phys_contig_segment(), which doesn't check for the possibility of
bounce buffers. This results in merging more requests in
ll_merge_requests_fn(). Later, the blk_recalc_rq_segments() call from
blk_rq_check_limits() does account for possible bounce buffers, so the
calculated number of segments exceeds the queue's max_segments,
resulting in the above error.
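
To make the mismatch concrete, here is a toy userspace model (hypothetical
names, not the kernel code) of the two ways of counting segments:

#include <stdio.h>

#define MAX_SEGMENTS	4	/* stands in for q->limits.max_segments */

struct vec { unsigned long pfn; };	/* stand-in for a one-page bio_vec */

/*
 * Count segments.  A page whose pfn lies above bounce_pfn would be
 * bounced to a freshly allocated (non-adjacent) page, so it can never
 * merge into the previous segment.
 */
static int count_segments(const struct vec *v, int n, unsigned long bounce_pfn)
{
	int i, segs = 0;

	for (i = 0; i < n; i++) {
		if (i > 0 && v[i].pfn == v[i - 1].pfn + 1 &&
		    v[i].pfn <= bounce_pfn)
			continue;	/* merges into the previous segment */
		segs++;
	}
	return segs;
}

int main(void)
{
	/* Five physically contiguous pages, all above the DMA limit. */
	struct vec v[] = { { 0x200000 }, { 0x200001 }, { 0x200002 },
			   { 0x200003 }, { 0x200004 } };

	/* Merge-time view, bounce limit ignored: 1 segment. */
	printf("ignoring bounce limit: %d segment(s)\n",
	       count_segments(v, 5, ~0UL));

	/* Recount honouring a low bounce_pfn: 5 segments > MAX_SEGMENTS. */
	printf("honouring bounce limit: %d segment(s), limit %d\n",
	       count_segments(v, 5, 0x100000), MAX_SEGMENTS);
	return 0;
}

The merge path effectively takes the first view, the recount takes the
second, and the difference is what trips the limit.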

Fix for the actual problem is posted here:
http://permalink.gmane.org/gmane.linux.kernel.device-mapper.devel/12426

So clearly the bug should manifest only when 'bounce buffers' are
involved!  Applying the above patch indeed _fixed_ the problem, but I
know there shouldn't be any need for bounce buffers on our system, and
further investigation revealed that the DMA limit is not set correctly.

This patch also _fixed_ our problem. So we are fine with either patch,
but this patch is preferred as it enables more request merges. Also,
both patches may be needed for some configurations.

Thanks, Malahal.


* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-24 17:05   ` Malahal Naineni
@ 2010-09-24 18:28     ` Jens Axboe
  2010-09-24 19:20       ` Malahal Naineni
  0 siblings, 1 reply; 16+ messages in thread
From: Jens Axboe @ 2010-09-24 18:28 UTC (permalink / raw)
  To: Malahal Naineni; +Cc: dm-devel@redhat.com, linux-scsi@vger.kernel.org

On 2010-09-24 19:05, Malahal Naineni wrote:
> Jens Axboe [jaxboe@fusionio.com] wrote:
>> On 2010-09-22 00:22, Malahal Naineni wrote:
>>> The bounce_pfn of the request queue on 64-bit systems is set to the
>>> current max_low_pfn. Adding more memory later makes this incorrect.
>>
>> Clearly correct.
>>
>>> Memory allocated beyond this boot-time max_low_pfn appears to require
>>> bounce buffers (bounce buffers are not actually allocated, but they
>>> are assumed when calculating segments, which may result in "over max
>>> segments limit" errors).
>>
>> But I can't quite convince myself that the change is fully correct. You
>> don't really explain in your own words what the patch does, just what it
>> fixes.
> 
> OK, the problem is that we get "over max segments limit" errors from
> blk_rq_check_limits() after adding memory. The actual bug is in
> blk_phys_contig_segment(), which doesn't check for the possibility of
> bounce buffers. This results in merging more requests in
> ll_merge_requests_fn(). Later, the blk_recalc_rq_segments() call from
> blk_rq_check_limits() does account for possible bounce buffers, so the
> calculated number of segments exceeds the queue's max_segments,
> resulting in the above error.
> 
> Fix for the actual problem is posted here:
> http://permalink.gmane.org/gmane.linux.kernel.device-mapper.devel/12426
> 
> So clearly the bug should manifest only when 'bounce buffers' are
> involved!  Applying the above patch indeed _fixed_ the problem, but I
> know there shouldn't be any need for bounce buffers on our system, and
> further investigation revealed that the DMA limit is not set correctly.
> 
> This patch also _fixed_ our problem. So we are fine with either patch,
> but this patch is preferred as it enables more request merges. Also,
> both patches may be needed for some configurations.

Plus it doesn't needlessly bounce; that's the real problem you want to
fix. I have applied the patch from this thread to for-2.6.37/core, thanks.

-- 
Jens Axboe


* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-24 18:28     ` Jens Axboe
@ 2010-09-24 19:20       ` Malahal Naineni
  2010-09-24 19:26         ` Jens Axboe
  0 siblings, 1 reply; 16+ messages in thread
From: Malahal Naineni @ 2010-09-24 19:20 UTC (permalink / raw)
  To: Jens Axboe; +Cc: dm-devel@redhat.com, linux-scsi@vger.kernel.org

Jens Axboe [jaxboe@fusionio.com] wrote:
> > This patch also _fixed_ our problem. So we are fine with either patch,
> > but this patch is preferred as it enables more request merges. Also,
> > both patches may be needed for some configurations.
> 
> Plus it doesn't needlessly bounce; that's the real problem you want to
> fix. I have applied the patch from this thread to for-2.6.37/core, thanks.

There is a shortcut check in blk_queue_bounce() that uses blk_max_pfn to
return without doing anything.  blk_max_pfn is not updated when we
hot-add memory, so bounce buffers are NOT really used in our case
(thankfully)!

Here is the code that may need a fix in the future:

	if (!(q->limits.bounce_gfp & GFP_DMA)) {
		if (queue_bounce_pfn(q) >= blk_max_pfn)
			return;


Thank you so much for applying this patch.

Thanks, Malahal.


* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-24 19:20       ` Malahal Naineni
@ 2010-09-24 19:26         ` Jens Axboe
  2010-10-01  2:30           ` Malahal Naineni
  0 siblings, 1 reply; 16+ messages in thread
From: Jens Axboe @ 2010-09-24 19:26 UTC (permalink / raw)
  To: Malahal Naineni; +Cc: dm-devel@redhat.com, linux-scsi@vger.kernel.org

On 2010-09-24 21:20, Malahal Naineni wrote:
> Jens Axboe [jaxboe@fusionio.com] wrote:
>>> This patch also _fixed_ our problem. So we are fine with either patch,
>>> but this patch is preferred as it enables more request merges. Also,
>>> both patches may be needed for some configurations.
>>
>> Plus it doesn't needlessly bounce; that's the real problem you want to
>> fix. I have applied the patch from this thread to for-2.6.37/core, thanks.
> 
> There is a shortcut check in blk_queue_bounce() that uses blk_max_pfn to
> return without doing anything.  blk_max_pfn is not updated when we
> hot-add memory, so bounce buffers are NOT really used in our case
> (thankfully)!

Any reason we can't just add a memory hot-add notifier and update the
block layer's copies when we need to?
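
Something like the below, perhaps (an untested sketch; it assumes the
arch actually updates max_pfn/max_low_pfn on hot-add, which may not
hold everywhere):

#include <linux/blkdev.h>
#include <linux/bootmem.h>
#include <linux/memory.h>
#include <linux/notifier.h>

/* Sketch: refresh the block layer's cached pfn copies (set once in
 * blk_settings_init()) when memory comes online. */
static int blk_mem_online(struct notifier_block *nb, unsigned long action,
			  void *arg)
{
	if (action == MEM_ONLINE) {
		blk_max_low_pfn = max_low_pfn;
		blk_max_pfn = max_pfn;
	}
	return NOTIFY_OK;
}

static struct notifier_block blk_mem_nb = {
	.notifier_call = blk_mem_online,
};

static int __init blk_mem_notifier_init(void)
{
	return register_memory_notifier(&blk_mem_nb);
}
subsys_initcall(blk_mem_notifier_init);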

-- 
Jens Axboe



* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
@ 2010-09-28 22:13 Luck, Tony
  2010-09-28 22:59 ` Malahal Naineni
  0 siblings, 1 reply; 16+ messages in thread
From: Luck, Tony @ 2010-09-28 22:13 UTC (permalink / raw)
  To: Malahal Naineni; +Cc: Jens Axboe, linux-scsi, linux-next

Starting with the next-20100927 tag of linux-next I saw this error
while booting ia64:

  Unable to handle kernel NULL pointer dereference (address 0000000000000020)
  usb-stor-scan[5915]: Oops 8813272891392 [1]
  Modules linked in: dm_mod usb_storage sg container button usbhid uhci_hcd ehci_hcd usbcore fan processor thermal thermal_sys

  Pid: 5915, CPU 0, comm:        usb-stor-scan
  psr : 00001010085a6010 ifs : 8000000000000894 ip  : [<a00000010012a630>]    Not tainted (2.6.36-rc5-generic-smp-next-20100927)
  ip is at mempool_alloc+0x70/0x200

The problem was that "page_pool" was NULL, but blk_queue_bounce()
had decided to use it!  The code in mm/bounce.c looks quite
fragile here as there are several places where page_pool is used, but
it is only initialized inside #ifdef CONFIG_HIGHMEM (which is not
set on ia64).
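
The shape of the code (paraphrasing from memory, so details are
approximate) is:

#ifdef CONFIG_HIGHMEM
static __init int init_emergency_pool(void)
{
	/* the only place page_pool is ever created */
	page_pool = mempool_create_page_pool(POOL_SIZE, 0);
	BUG_ON(!page_pool);
	return 0;
}
__initcall(init_emergency_pool);
#endif

	/* ... yet the non-ISA path in blk_queue_bounce() picks it up
	   whenever the fast exit is not taken: */
	if (!(q->limits.bounce_gfp & GFP_DMA)) {
		if (queue_bounce_pfn(q) >= blk_max_pfn)
			return;
		pool = page_pool;	/* NULL without CONFIG_HIGHMEM */
	} else {
		BUG_ON(!isa_page_pool);
		pool = isa_page_pool;
	}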

Reverting this patch so that the old rules for setting
q->limits.bounce_pfn are used cures the immediate problem.
But I think there must be some deeper issues involved.

I think that reverting means that I take the fast exit
from blk_queue_bounce():

	if (queue_bounce_pfn(q) >= blk_max_pfn)
		return;

which is probably just avoiding the problem, rather than
doing the right thing.

-Tony


* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-28 22:13 [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory Luck, Tony
@ 2010-09-28 22:59 ` Malahal Naineni
  2010-09-28 23:40   ` Luck, Tony
  0 siblings, 1 reply; 16+ messages in thread
From: Malahal Naineni @ 2010-09-28 22:59 UTC (permalink / raw)
  To: Luck, Tony; +Cc: Jens Axboe, linux-scsi, linux-next

Luck, Tony [tony.luck@intel.com] wrote:
> Starting with the next-20100927 tag of linux-next I saw this error
> while booting ia64:
> 
>   Unable to handle kernel NULL pointer dereference (address 0000000000000020)
>   usb-stor-scan[5915]: Oops 8813272891392 [1]
>   Modules linked in: dm_mod usb_storage sg container button usbhid uhci_hcd ehci_hcd usbcore fan processor thermal thermal_sys
> 
>   Pid: 5915, CPU 0, comm:        usb-stor-scan
>   psr : 00001010085a6010 ifs : 8000000000000894 ip  : [<a00000010012a630>]    Not tainted (2.6.36-rc5-generic-smp-next-20100927)
>   ip is at mempool_alloc+0x70/0x200
> 
> The problem was that "page_pool" was NULL, but blk_queue_bounce()
> had decided to use it!  The code in mm/bounce.c looks quite
> fragile here as there are several places where page_pool is used, but
> it is only initialized inside #ifdef CONFIG_HIGHMEM (which is not
> set on ia64).
> 
> Reverting this patch so that the old rules for setting
> q->limits.bounce_pfn are used cures the immediate problem.
> But I think there must be some deeper issues involved.
> 
> I think that reverting means that I take the fast exit
> from blk_queue_bounce():
> 
> 	if (queue_bounce_pfn(q) >= blk_max_pfn)
> 		return;

Let me know if this fixes the problem. Thank you very much.


diff -r 1a48e21f1e50 drivers/scsi/scsi_lib.c
--- a/drivers/scsi/scsi_lib.c	Fri Sep 24 09:44:52 2010 -0700
+++ b/drivers/scsi/scsi_lib.c	Tue Sep 28 15:55:10 2010 -0700
@@ -1590,7 +1590,7 @@ static void scsi_request_fn(struct reque
 u64 scsi_calculate_bounce_limit(struct Scsi_Host *shost)
 {
 	struct device *host_dev;
-	u64 bounce_limit = 0xffffffff;
+	u64 bounce_limit = BLK_BOUNCE_HIGH;
 
 	if (shost->unchecked_isa_dma)
 		return BLK_BOUNCE_ISA;
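
For context, the remainder of the function (roughly, from the current
drivers/scsi/scsi_lib.c) is where the mask gets consulted:

	/*
	 * Platforms with virtual-DMA translation hardware have no
	 * practical limit.
	 */
	if (!PCI_DMA_BUS_IS_PHYS)
		return BLK_BOUNCE_ANY;

	host_dev = scsi_get_device(shost);
	if (host_dev && host_dev->dma_mask)
		bounce_limit = *host_dev->dma_mask;

	return bounce_limit;
}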


* RE: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-28 22:59 ` Malahal Naineni
@ 2010-09-28 23:40   ` Luck, Tony
  2010-09-29  0:42     ` Malahal Naineni
  0 siblings, 1 reply; 16+ messages in thread
From: Luck, Tony @ 2010-09-28 23:40 UTC (permalink / raw)
  To: Malahal Naineni
  Cc: Jens Axboe, linux-scsi@vger.kernel.org,
	linux-next@vger.kernel.org

> Let me know if this fixes the problem. Thank you very much.

No it doesn't :-(

I still end up with 0xffffffff from the *host_dev->dma_mask
later in that function.

Just for two devices though ... stack traces lead back to
usb_stor_scan_thread() for these two.

-Tony


* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-28 23:40   ` Luck, Tony
@ 2010-09-29  0:42     ` Malahal Naineni
  2010-09-29  4:47       ` Luck, Tony
  0 siblings, 1 reply; 16+ messages in thread
From: Malahal Naineni @ 2010-09-29  0:42 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Jens Axboe, linux-scsi@vger.kernel.org,
	linux-next@vger.kernel.org

Luck, Tony [tony.luck@intel.com] wrote:
> > Let me know if this fixes the problem. Thank you very much.
> 
> No it doesn't :-(
> 
> I still end up with 0xffffffff from the *host_dev->dma_mask
> later in that function.
> 
> Just for two devices though ... stack traces lead back to
> usb_stor_scan_thread() for these two.

What is the USB host controller's PCI device and vendor IDs? I see
ehci-ps3.c and ohci-ps3.c setting it to DMA_BIT_MASK(32). That may
present another bug???

I don't know USB much, but someone seems to be falsely claiming that
they can't do DMA beyond 4GB!

How much memory do you have on the system? This probably doesn't matter
but good to know.

Thanks, Malahal.


* RE: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-29  0:42     ` Malahal Naineni
@ 2010-09-29  4:47       ` Luck, Tony
  2010-09-29  5:55         ` Malahal Naineni
  0 siblings, 1 reply; 16+ messages in thread
From: Luck, Tony @ 2010-09-29  4:47 UTC (permalink / raw)
  To: Malahal Naineni
  Cc: Jens Axboe, linux-scsi@vger.kernel.org,
	linux-next@vger.kernel.org

>What is the USB host controller's PCI device and vendor IDs? I see
>ehci-ps3.c and ohci-ps3.c setting it to DMA_BIT_MASK(32). That may
>present another bug???

I have a pair of ehci devices:

00:1a.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2 (prog-if 20 [EHCI])
        Subsystem: Platform Technologies, Inc. Device 6249

00:1d.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1 (prog-if 20 [EHCI])
        Subsystem: Device 2920:0034

>I don't know USB much, but someone seems to be falsely claiming that
>they can't do DMA beyond 4GB!

The drivers may be confused about this.  ia64 systems support DMA to
all addresses - either because they have an IOMMU that makes DMA
possible to any address, or because they use swiotlb to pretend
that they can.

You are probably correct that this is the real source of the bug.
Your patch just stopped the block layer from hiding the problem.

>How much memory do you have on the system? This probably doesn't
>matter but good to know.

This is a little machine - just 16GB.

-Tony


* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-29  4:47       ` Luck, Tony
@ 2010-09-29  5:55         ` Malahal Naineni
  2010-09-29 16:00           ` Luck, Tony
  0 siblings, 1 reply; 16+ messages in thread
From: Malahal Naineni @ 2010-09-29  5:55 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Jens Axboe, linux-scsi@vger.kernel.org,
	linux-next@vger.kernel.org

Luck, Tony [tony.luck@intel.com] wrote:
> >What is the USB host controller's PCI device and vendor IDs? I see
> >ehci-ps3.c and ohci-ps3.c setting it to DMA_BIT_MASK(32). That may
> >present another bug???
> 
> I have a pair of ehci devices:
> 
> 00:1a.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2 (prog-if 20 [EHCI])
>         Subsystem: Platform Technologies, Inc. Device 6249
> 
> 00:1d.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1 (prog-if 20 [EHCI])
>         Subsystem: Device 2920:0034
> 
> >I don't know USB much, but someone seems to be falsely claiming that
> >they can't do DMA beyond 4GB!
> 
> The drivers may be confused about this.  ia64 systems support DMA to
> all addresses - either because they have an IOMMU that makes DMA
> possible to any address, or because they use swiotlb to pretend
> that they can.

That makes sense! This patch should work: like the old code, it never
sets bounce_pfn below max_low_pfn, so a device with an understated DMA
mask still takes the fast exit in blk_queue_bounce(); and bounce_pfn is
now raised to b_pfn when the device's real limit lies beyond the
boot-time max_low_pfn. Please revert the commit and apply this patch
instead. Please let me know if this works.


block: set the bounce_pfn to the actual DMA limit rather than to max memory

The bounce_pfn of the request queue on 64-bit systems is set to the
current max_low_pfn. Adding more memory later makes this incorrect.
Memory allocated beyond this boot-time max_low_pfn appears to require
bounce buffers (bounce buffers are not actually allocated, but they
are assumed when calculating segments, which may result in "over max
segments limit" errors).

Signed-off-by: Malahal Naineni <malahal@us.ibm.com>

diff -r 09daf852c1c5 block/blk-settings.c
--- a/block/blk-settings.c	Thu Sep 09 12:10:43 2010 -0700
+++ b/block/blk-settings.c	Tue Sep 28 22:43:40 2010 -0700
@@ -214,7 +214,7 @@ void blk_queue_bounce_limit(struct reque
 	 */
 	if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
 		dma = 1;
-	q->limits.bounce_pfn = max_low_pfn;
+	q->limits.bounce_pfn = max(max_low_pfn, b_pfn);
 #else
 	if (b_pfn < blk_max_low_pfn)
 		dma = 1;


* RE: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-29  5:55         ` Malahal Naineni
@ 2010-09-29 16:00           ` Luck, Tony
  0 siblings, 0 replies; 16+ messages in thread
From: Luck, Tony @ 2010-09-29 16:00 UTC (permalink / raw)
  To: Malahal Naineni
  Cc: Jens Axboe, linux-scsi@vger.kernel.org,
	linux-next@vger.kernel.org

> That makes sense! This patch should work: like the old code, it never
> sets bounce_pfn below max_low_pfn, so a device with an understated DMA
> mask still takes the fast exit in blk_queue_bounce(); and bounce_pfn
> is now raised to b_pfn when the device's real limit lies beyond the
> boot-time max_low_pfn. Please revert the commit and apply this patch
> instead. Please let me know if this works.

Yes, this new version works.

Tested-by: Tony Luck <tony.luck@intel.com>

Thanks

-Tony


* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-09-24 19:26         ` Jens Axboe
@ 2010-10-01  2:30           ` Malahal Naineni
  2010-10-01 12:46             ` Jens Axboe
  0 siblings, 1 reply; 16+ messages in thread
From: Malahal Naineni @ 2010-10-01  2:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: dm-devel@redhat.com, linux-scsi@vger.kernel.org

Jens Axboe [jaxboe@fusionio.com] wrote:
> On 2010-09-24 21:20, Malahal Naineni wrote:
> > Jens Axboe [jaxboe@fusionio.com] wrote:
> >>> This patch also _fixed_ our problem. So we are fine with either patch,
> >>> but this patch is preferred as it enables more request merges. Also,
> >>> both patches may be needed for some configurations.
> >>
> >> Plus it doesn't needlessly bounce; that's the real problem you want to
> >> fix. I have applied the patch from this thread to for-2.6.37/core, thanks.
> > 
> > There is a shortcut check in blk_queue_bounce() that uses blk_max_pfn to
> > return without doing anything.  blk_max_pfn is not updated when we
> > hot-add memory, so bounce buffers are NOT really used in our case
> > (thankfully)!
> 
> Any reason we can't just add a memory hot-add notifier and update the
> block layer's copies when we need to?

Actually it is quite simple, but the power arch folks decided not to
update max_pfn and max_low_pfn when someone adds or removes memory.
They do seem to indicate that it is not an oversight! On the other
hand, the x86 arch does update those.

Another approach would be to define max_possible_pfn (the maximum
allowable memory), which would be a per-architecture constant;
hopefully, the adapters we care about would still take the short-cut.
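
Roughly (a hypothetical sketch; MAX_PHYSMEM_BITS is the per-arch
sparsemem constant, and max_possible_pfn is just a suggested name):

/* an upper bound on any pfn the architecture could ever have */
#define MAX_POSSIBLE_PFN	(1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT))

	/* the fast exit in blk_queue_bounce() would then compare against
	   a constant instead of the boot-time blk_max_pfn: */
	if (queue_bounce_pfn(q) >= MAX_POSSIBLE_PFN)
		return;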

Also, this patch broke some USB hosts. Can you please back out this
patch and apply the one below instead:

http://thread.gmane.org/gmane.linux.kernel.next/14158

Thank you very much,
Malahal.


* Re: [PATCH] block: set the bounce_pfn to the actual DMA limit rather than to max memory
  2010-10-01  2:30           ` Malahal Naineni
@ 2010-10-01 12:46             ` Jens Axboe
  0 siblings, 0 replies; 16+ messages in thread
From: Jens Axboe @ 2010-10-01 12:46 UTC (permalink / raw)
  To: Malahal Naineni; +Cc: dm-devel@redhat.com, linux-scsi@vger.kernel.org

On 2010-10-01 04:30, Malahal Naineni wrote:
> Jens Axboe [jaxboe@fusionio.com] wrote:
>> On 2010-09-24 21:20, Malahal Naineni wrote:
>>> Jens Axboe [jaxboe@fusionio.com] wrote:
>>>>> This patch also _fixed_ our problem. So we are fine with either patch,
>>>>> but this patch is preferred as it enables more request merges. Also,
>>>>> both patches may be needed for some configurations.
>>>>
>>>> Plus it doesn't needlessly bounce; that's the real problem you want to
>>>> fix. I have applied the patch from this thread to for-2.6.37/core, thanks.
>>>
>>> There is a shortcut check in blk_queue_bounce() that uses blk_max_pfn to
>>> return without doing anything.  blk_max_pfn is not updated when we
>>> hot-add memory, so bounce buffers are NOT really used in our case
>>> (thankfully)!
>>
>> Any reason we can't just add a memory hot-add notifier and update the
>> block layer's copies when we need to?
> 
> Actually it is quite simple, but the power arch folks decided not to
> update max_pfn and max_low_pfn when someone adds or removes memory.
> They do seem to indicate that it is not an oversight! On the other
> hand, the x86 arch does update those.
> 
> Another approach would be to define max_possible_pfn (the maximum
> allowable memory), which would be a per-architecture constant;
> hopefully, the adapters we care about would still take the short-cut.
> 
> Also, this patch broke some USB hosts. Can you please back out this
> patch and apply the one below instead:
> 
> http://thread.gmane.org/gmane.linux.kernel.next/14158

Done, thanks.

-- 
Jens Axboe

