* [PATCH 6.6.y v2 1/1] block: Fix bounce check logic in blk_queue_may_bounce()
@ 2025-07-25 11:27 Hardeep Sharma
From: Hardeep Sharma @ 2025-07-25 11:27 UTC (permalink / raw)
To: Jens Axboe, Hannes Reinecke, Martin K. Petersen
Cc: linux-block, linux-kernel, stable, Hardeep Sharma
Buffer bouncing is needed only when memory exists above the lowmem region,
i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >=
max_pfn) was inverted and prevented bouncing when it could actually be
required.
Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled
on 32-bit ARM where not all memory is permanently mapped into the kernel’s
lowmem region.
Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
Cc: stable@vger.kernel.org
Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
---
Changelog v1..v2:
* Updated subject line
block/blk.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk.h b/block/blk.h
index 67915b04b3c1..f8a1d64be5a2 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -383,7 +383,7 @@ static inline bool blk_queue_may_bounce(struct request_queue *q)
 {
 	return IS_ENABLED(CONFIG_BOUNCE) &&
 		q->limits.bounce == BLK_BOUNCE_HIGH &&
-		max_low_pfn >= max_pfn;
+		max_low_pfn < max_pfn;
 }
 
 static inline struct bio *blk_queue_bounce(struct bio *bio,
--
2.25.1
* Re: [PATCH 6.6.y v2 1/1] block: Fix bounce check logic in blk_queue_may_bounce()
From: Greg KH @ 2025-07-25 12:04 UTC (permalink / raw)
To: Hardeep Sharma
Cc: Jens Axboe, Hannes Reinecke, Martin K. Petersen, linux-block, linux-kernel, stable

On Fri, Jul 25, 2025 at 04:57:10PM +0530, Hardeep Sharma wrote:
> Buffer bouncing is needed only when memory exists above the lowmem region,
> i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >=
> max_pfn) was inverted and prevented bouncing when it could actually be
> required.
>
> Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled
> on 32-bit ARM where not all memory is permanently mapped into the kernel’s
> lowmem region.
>
> Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
> Cc: stable@vger.kernel.org
> Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
> ---
> Changelog v1..v2:
>
> * Updated subject line
>
> block/blk.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/block/blk.h b/block/blk.h
> index 67915b04b3c1..f8a1d64be5a2 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -383,7 +383,7 @@ static inline bool blk_queue_may_bounce(struct request_queue *q)
>  {
>  	return IS_ENABLED(CONFIG_BOUNCE) &&
>  		q->limits.bounce == BLK_BOUNCE_HIGH &&
> -		max_low_pfn >= max_pfn;
> +		max_low_pfn < max_pfn;
>  }
>
>  static inline struct bio *blk_queue_bounce(struct bio *bio,
> --
> 2.25.1

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>
* [PATCH 6.6.y v2 1/1] block: Fix bounce check logic in blk_queue_may_bounce()
@ 2025-08-14 6:36 Hardeep Sharma
From: Hardeep Sharma @ 2025-08-14 6:36 UTC (permalink / raw)
To: Jens Axboe, Hannes Reinecke, Martin K. Petersen
Cc: linux-block, linux-kernel, stable, Hardeep Sharma
Buffer bouncing is needed only when memory exists above the lowmem region,
i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >=
max_pfn) was inverted and prevented bouncing when it could actually be
required.
Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled
on 32-bit ARM where not all memory is permanently mapped into the kernel’s
lowmem region.
Branch-Specific Note:
This fix is specific to this branch (6.6.y) only.
In the upstream “tip” kernel, bounce buffer support for highmem pages
was completely removed after kernel version 6.12. Therefore, this
modification is not possible or relevant in the tip branch.
Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
Cc: stable@vger.kernel.org
Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
---
block/blk.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk.h b/block/blk.h
index 67915b04b3c1..f8a1d64be5a2 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -383,7 +383,7 @@ static inline bool blk_queue_may_bounce(struct request_queue *q)
 {
 	return IS_ENABLED(CONFIG_BOUNCE) &&
 		q->limits.bounce == BLK_BOUNCE_HIGH &&
-		max_low_pfn >= max_pfn;
+		max_low_pfn < max_pfn;
 }
 
 static inline struct bio *blk_queue_bounce(struct bio *bio,
--
2.25.1
* Re: [PATCH 6.6.y v2 1/1] block: Fix bounce check logic in blk_queue_may_bounce()
From: Greg KH @ 2025-08-14 9:03 UTC (permalink / raw)
To: Hardeep Sharma
Cc: Jens Axboe, Hannes Reinecke, Martin K. Petersen, linux-block, linux-kernel, stable

On Thu, Aug 14, 2025 at 12:06:55PM +0530, Hardeep Sharma wrote:
> Buffer bouncing is needed only when memory exists above the lowmem region,
> i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >=
> max_pfn) was inverted and prevented bouncing when it could actually be
> required.
>
> Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled
> on 32-bit ARM where not all memory is permanently mapped into the kernel’s
> lowmem region.
>
> Branch-Specific Note:
>
> This fix is specific to this branch (6.6.y) only.
> In the upstream “tip” kernel, bounce buffer support for highmem pages
> was completely removed after kernel version 6.12. Therefore, this
> modification is not possible or relevant in the tip branch.
>
> Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
> Cc: stable@vger.kernel.org
> Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>

Why do you say this is only for 6.6.y, yet your Fixes: line is older
than that?

And why wasn't this ever found or noticed before?

Also, why can't we just remove all of the bounce buffering code in this
older kernel tree?  What is wrong with doing that instead?

And finally, how was this tested?

thanks,

greg k-h
* Re: [PATCH 6.6.y v2 1/1] block: Fix bounce check logic in blk_queue_may_bounce()
From: Hardeep Sharma @ 2025-08-14 10:54 UTC (permalink / raw)
To: Greg KH
Cc: Jens Axboe, Hannes Reinecke, Martin K. Petersen, linux-block, linux-kernel, stable

On 8/14/2025 2:33 PM, Greg KH wrote:
> On Thu, Aug 14, 2025 at 12:06:55PM +0530, Hardeep Sharma wrote:
>> Buffer bouncing is needed only when memory exists above the lowmem region,
>> i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >=
>> max_pfn) was inverted and prevented bouncing when it could actually be
>> required.
>>
>> Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled
>> on 32-bit ARM where not all memory is permanently mapped into the kernel’s
>> lowmem region.
>>
>> Branch-Specific Note:
>>
>> This fix is specific to this branch (6.6.y) only.
>> In the upstream “tip” kernel, bounce buffer support for highmem pages
>> was completely removed after kernel version 6.12. Therefore, this
>> modification is not possible or relevant in the tip branch.
>>
>> Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
>
> Why do you say this is only for 6.6.y, yet your Fixes: line is older
> than that?

[Hardeep Sharma]:
Yes, the original commit was merged in kernel 5.13-rc1, as indicated by
the Fixes: line. However, we are currently working with kernel 6.6, where
we encountered the issue. While it could be merged into 6.12 and then
backported to earlier versions, our focus is on addressing it in 6.6.y,
where the problem was observed.

> And why wasn't this ever found or noticed before?

[Hardeep Sharma]:
This issue remained unnoticed likely because the bounce buffering logic
is only triggered under specific hardware and configuration
conditions—primarily on 32-bit ARM systems with CONFIG_HIGHMEM enabled
and devices requiring DMA from lowmem. Many platforms either do not use
highmem or have hardware that does not require bounce buffering, so the
bug did not manifest widely.

> Also, why can't we just remove all of the bounce buffering code in this
> older kernel tree?  What is wrong with doing that instead?

[Hardeep Sharma]:
It's too intrusive: I'd need to backport 40+ dependency patches, and I'm
unsure about the instability this might introduce in the block layer on
kernel 6.6. Plus, we don't know if it'll work reliably on 32-bit with
1GB+ DDR and highmem enabled. So I'd prefer to push just this single
tested patch on kernel 6.6 and older affected versions.

Removing the bounce buffering code from older kernel trees is not
feasible for all use cases. Some legacy platforms and drivers still rely
on bounce buffering to support DMA operations with highmem pages,
especially on 32-bit systems.

> And finally, how was this tested?

[Hardeep Sharma]:
The patch was tested on a 32-bit ARM platform with CONFIG_HIGHMEM enabled
and a storage device requiring DMA from lowmem.

> thanks,
>
> greg k-h
* Re: [PATCH 6.6.y v2 1/1] block: Fix bounce check logic in blk_queue_may_bounce()
From: Greg KH @ 2025-08-14 11:36 UTC (permalink / raw)
To: Hardeep Sharma
Cc: Jens Axboe, Hannes Reinecke, Martin K. Petersen, linux-block, linux-kernel, stable

On Thu, Aug 14, 2025 at 04:24:25PM +0530, Hardeep Sharma wrote:
> On 8/14/2025 2:33 PM, Greg KH wrote:
> > On Thu, Aug 14, 2025 at 12:06:55PM +0530, Hardeep Sharma wrote:
> > > Buffer bouncing is needed only when memory exists above the lowmem region,
> > > i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >=
> > > max_pfn) was inverted and prevented bouncing when it could actually be
> > > required.
> > >
> > > Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled
> > > on 32-bit ARM where not all memory is permanently mapped into the kernel’s
> > > lowmem region.
> > >
> > > Branch-Specific Note:
> > >
> > > This fix is specific to this branch (6.6.y) only.
> > > In the upstream “tip” kernel, bounce buffer support for highmem pages
> > > was completely removed after kernel version 6.12. Therefore, this
> > > modification is not possible or relevant in the tip branch.
> > >
> > > Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
> > > Cc: stable@vger.kernel.org
> > > Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
> >
> > Why do you say this is only for 6.6.y, yet your Fixes: line is older
> > than that?
>
> [Hardeep Sharma]:
> Yes, the original commit was merged in kernel 5.13-rc1, as indicated by the
> Fixes: line. However, we are currently working with kernel 6.6, where we
> encountered the issue. While it could be merged into 6.12 and then
> backported to earlier versions, our focus is on addressing it in 6.6.y,
> where the problem was observed.

For obvious reasons, we can not take a patch only for one older kernel
and not a newer (or the older ones if possible), otherwise you will have
a regression when you move forward to the new version as you will be
doing eventually.

So for that reason alone, we can not take this patch, NOR should you
want us to.

> > And why wasn't this ever found or noticed before?
> [Hardeep Sharma]:

Odd quoting, please fix your email client :)

> This issue remained unnoticed likely because the bounce buffering logic is
> only triggered under specific hardware and configuration
> conditions—primarily on 32-bit ARM systems with CONFIG_HIGHMEM enabled and
> devices requiring DMA from lowmem. Many platforms either do not use highmem
> or have hardware that does not require bounce buffering, so the bug did not
> manifest widely.

So no one has hit this on any 5.15 or newer devices?  I find that really
hard to believe given the number of those devices in the world.  So what
is unique about your platform that you are hitting this and no one else
is?

> > Also, why can't we just remove all of the bounce buffering code in this
> > older kernel tree?  What is wrong with doing that instead?
>
> [Hardeep Sharma]:
>
> it's too intrusive — I'd need to backport 40+ dependency patches, and I'm
> unsure about the instability this might introduce in the block layer on
> kernel 6.6. Plus, we don't know if it'll work reliably on 32-bit with 1GB+
> DDR and highmem enabled. So I'd prefer to push just this single tested
> patch on kernel 6.6 and older affected versions.

Whenever we take one-off patches, 90% of the time it causes problems,
both with the fact that the patch is usually buggy, AND the fact that it
now will cause merge conflicts going forward.  40+ patches is nothing in
stable patch acceptance, please try that first as you want us to be able
to maintain these kernels well for your devices over time, right?

So please do that first.  Only after proof that that would not work
should you even consider a one-off patch.

> Removing bounce buffering code from older kernel trees is not feasible for
> all use cases. Some legacy platforms and drivers still rely on bounce
> buffering to support DMA operations with highmem pages, especially on
> 32-bit systems.

Then how was it removed in newer kernels at all?  Did we just drop
support for that hardware?  What happens when you move to a newer kernel
on your hardware, does it stop working?  Based on what I have seen with
some Android devices, they seem to work just fine on Linus's tree today,
so what is unique about your platform that is going to break and not
work anymore?

> > And finally, how was this tested?
>
> [Hardeep Sharma]:
>
> The patch was tested on a 32-bit ARM platform with CONFIG_HIGHMEM enabled
> and a storage device requiring DMA from lowmem.

So this is for a 32bit ARM system only?  Not 64bit?  If so, why is this
also being submitted to the Android kernel tree which does not support
32bit ARM at all?

And again, does your system not work properly on 6.16?  If not, why not
fix that first?

thanks,

greg k-h
* Re: [PATCH 6.6.y v2 1/1] block: Fix bounce check logic in blk_queue_may_bounce()
From: Hardeep Sharma @ 2025-08-14 13:06 UTC (permalink / raw)
To: Greg KH
Cc: Jens Axboe, Hannes Reinecke, Martin K. Petersen, linux-block, linux-kernel, stable

This change to blk_queue_may_bounce() in block/blk.h will only affect
systems with the following configuration:

1. 32-bit ARM architecture
2. Physical DDR memory greater than 1GB
3. CONFIG_HIGHMEM enabled
4. Virtual memory split of 1GB for kernel and 3GB for userspace

Under these conditions, the logic for buffer bouncing is relevant because
the kernel may need to handle memory above the low memory threshold,
which is typical for highmem-enabled 32-bit systems with large RAM. On
other architectures or configurations, this code path will not be
exercised.

On 8/14/2025 5:06 PM, Greg KH wrote:
> On Thu, Aug 14, 2025 at 04:24:25PM +0530, Hardeep Sharma wrote:
>> On 8/14/2025 2:33 PM, Greg KH wrote:
>>> On Thu, Aug 14, 2025 at 12:06:55PM +0530, Hardeep Sharma wrote:
>>>> Buffer bouncing is needed only when memory exists above the lowmem region,
>>>> i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >=
>>>> max_pfn) was inverted and prevented bouncing when it could actually be
>>>> required.
>>>>
>>>> Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled
>>>> on 32-bit ARM where not all memory is permanently mapped into the kernel’s
>>>> lowmem region.
>>>>
>>>> Branch-Specific Note:
>>>>
>>>> This fix is specific to this branch (6.6.y) only.
>>>> In the upstream “tip” kernel, bounce buffer support for highmem pages
>>>> was completely removed after kernel version 6.12. Therefore, this
>>>> modification is not possible or relevant in the tip branch.
>>>>
>>>> Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
>>>> Cc: stable@vger.kernel.org
>>>> Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
>>>
>>> Why do you say this is only for 6.6.y, yet your Fixes: line is older
>>> than that?
>> [Hardeep Sharma]:
>>
>> Yes, the original commit was merged in kernel 5.13-rc1, as indicated by the
>> Fixes: line. However, we are currently working with kernel 6.6, where we
>> encountered the issue. While it could be merged into 6.12 and then
>> backported to earlier versions, our focus is on addressing it in 6.6.y,
>> where the problem was observed.
>
> For obvious reasons, we can not take a patch only for one older kernel
> and not a newer (or the older ones if possible), otherwise you will have
> a regression when you move forward to the new version as you will be
> doing eventually.
>
> So for that reason alone, we can not take this patch, NOR should you
> want us to.
>
>>> And why wasn't this ever found or noticed before?
>> [Hardeep Sharma]:
>
> Odd quoting, please fix your email client :)
>
>> This issue remained unnoticed likely because the bounce buffering logic is
>> only triggered under specific hardware and configuration
>> conditions—primarily on 32-bit ARM systems with CONFIG_HIGHMEM enabled and
>> devices requiring DMA from lowmem. Many platforms either do not use highmem
>> or have hardware that does not require bounce buffering, so the bug did not
>> manifest widely.
>
> So no one has hit this on any 5.15 or newer devices?  I find that really
> hard to believe given the number of those devices in the world.  So what
> is unique about your platform that you are hitting this and no one else
> is?
>
>>> Also, why can't we just remove all of the bounce buffering code in this
>>> older kernel tree?  What is wrong with doing that instead?
>>
>> [Hardeep Sharma]:
>>
>> it's too intrusive — I'd need to backport 40+ dependency patches, and I'm
>> unsure about the instability this might introduce in the block layer on
>> kernel 6.6. Plus, we don't know if it'll work reliably on 32-bit with 1GB+
>> DDR and highmem enabled. So I'd prefer to push just this single tested
>> patch on kernel 6.6 and older affected versions.
>
> Whenever we take one-off patches, 90% of the time it causes problems,
> both with the fact that the patch is usually buggy, AND the fact that it
> now will cause merge conflicts going forward.  40+ patches is nothing in
> stable patch acceptance, please try that first as you want us to be able
> to maintain these kernels well for your devices over time, right?
>
> So please do that first.  Only after proof that that would not work
> should you even consider a one-off patch.
>
>> Removing bounce buffering code from older kernel trees is not feasible for
>> all use cases. Some legacy platforms and drivers still rely on bounce
>> buffering to support DMA operations with highmem pages, especially on
>> 32-bit systems.
>
> Then how was it removed in newer kernels at all?  Did we just drop
> support for that hardware?  What happens when you move to a newer kernel
> on your hardware, does it stop working?  Based on what I have seen with
> some Android devices, they seem to work just fine on Linus's tree today,
> so what is unique about your platform that is going to break and not
> work anymore?
>
>>> And finally, how was this tested?
>>
>> [Hardeep Sharma]:
>>
>> The patch was tested on a 32-bit ARM platform with CONFIG_HIGHMEM enabled
>> and a storage device requiring DMA from lowmem.
>
> So this is for a 32bit ARM system only?  Not 64bit?  If so, why is this
> also being submitted to the Android kernel tree which does not support
> 32bit ARM at all?
>
> And again, does your system not work properly on 6.16?  If not, why not
> fix that first?
>
> thanks,
>
> greg k-h
* Re: [PATCH 6.6.y v2 1/1] block: Fix bounce check logic in blk_queue_may_bounce()
From: Greg KH @ 2025-08-14 13:54 UTC (permalink / raw)
To: Hardeep Sharma
Cc: Jens Axboe, Hannes Reinecke, Martin K. Petersen, linux-block, linux-kernel, stable

A: http://en.wikipedia.org/wiki/Top_post
Q: Were do I find info about this thing called top-posting?
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

A: No.
Q: Should I include quotations after my reply?

http://daringfireball.net/2007/07/on_top

On Thu, Aug 14, 2025 at 06:36:29PM +0530, Hardeep Sharma wrote:
> This change to blk_queue_may_bounce() in block/blk.h will only affect
> systems with the following configuration:
>
> 1. 32-bit ARM architecture
> 2. Physical DDR memory greater than 1GB
> 3. CONFIG_HIGHMEM enabled
> 4. Virtual memory split of 1GB for kernel and 3GB for userspace
>
> Under these conditions, the logic for buffer bouncing is relevant because
> the kernel may need to handle memory above the low memory threshold, which
> is typical for highmem-enabled 32-bit systems with large RAM. On other
> architectures or configurations, this code path will not be exercised.

You did not answer most of the questions I asked for some reason :(
* Re: [PATCH 6.6.y v2 1/1] block: Fix bounce check logic in blk_queue_may_bounce()
From: Hardeep Sharma @ 2025-08-14 13:11 UTC (permalink / raw)
To: Greg KH
Cc: Jens Axboe, Hannes Reinecke, Martin K. Petersen, linux-block, linux-kernel, stable

This change to blk_queue_may_bounce() in block/blk.h will only affect
systems with the following configuration:

1. 32-bit ARM architecture
2. Physical DDR memory greater than or equal to 1GB (larger than the
   lowmem region)
3. CONFIG_HIGHMEM enabled
4. Virtual memory split of 1GB for kernel and 3GB for userspace, or any
   configuration where not all physical addresses can be mapped in the
   lowmem region

Under these conditions, the logic for buffer bouncing is relevant because
the kernel may need to handle memory above the low memory threshold,
which is typical for highmem-enabled 32-bit systems with large RAM. On
other architectures or configurations, this code path will not be
exercised.

On 8/14/2025 5:06 PM, Greg KH wrote:
> On Thu, Aug 14, 2025 at 04:24:25PM +0530, Hardeep Sharma wrote:
>> On 8/14/2025 2:33 PM, Greg KH wrote:
>>> On Thu, Aug 14, 2025 at 12:06:55PM +0530, Hardeep Sharma wrote:
>>>> Buffer bouncing is needed only when memory exists above the lowmem region,
>>>> i.e., when max_low_pfn < max_pfn. The previous check (max_low_pfn >=
>>>> max_pfn) was inverted and prevented bouncing when it could actually be
>>>> required.
>>>>
>>>> Note that bouncing depends on CONFIG_HIGHMEM, which is typically enabled
>>>> on 32-bit ARM where not all memory is permanently mapped into the kernel’s
>>>> lowmem region.
>>>>
>>>> Branch-Specific Note:
>>>>
>>>> This fix is specific to this branch (6.6.y) only.
>>>> In the upstream “tip” kernel, bounce buffer support for highmem pages
>>>> was completely removed after kernel version 6.12. Therefore, this
>>>> modification is not possible or relevant in the tip branch.
>>>>
>>>> Fixes: 9bb33f24abbd0 ("block: refactor the bounce buffering code")
>>>> Cc: stable@vger.kernel.org
>>>> Signed-off-by: Hardeep Sharma <quic_hardshar@quicinc.com>
>>>
>>> Why do you say this is only for 6.6.y, yet your Fixes: line is older
>>> than that?
>> [Hardeep Sharma]:
>>
>> Yes, the original commit was merged in kernel 5.13-rc1, as indicated by the
>> Fixes: line. However, we are currently working with kernel 6.6, where we
>> encountered the issue. While it could be merged into 6.12 and then
>> backported to earlier versions, our focus is on addressing it in 6.6.y,
>> where the problem was observed.
>
> For obvious reasons, we can not take a patch only for one older kernel
> and not a newer (or the older ones if possible), otherwise you will have
> a regression when you move forward to the new version as you will be
> doing eventually.
>
> So for that reason alone, we can not take this patch, NOR should you
> want us to.
>
>>> And why wasn't this ever found or noticed before?
>> [Hardeep Sharma]:
>
> Odd quoting, please fix your email client :)
>
>> This issue remained unnoticed likely because the bounce buffering logic is
>> only triggered under specific hardware and configuration
>> conditions—primarily on 32-bit ARM systems with CONFIG_HIGHMEM enabled and
>> devices requiring DMA from lowmem. Many platforms either do not use highmem
>> or have hardware that does not require bounce buffering, so the bug did not
>> manifest widely.
>
> So no one has hit this on any 5.15 or newer devices?  I find that really
> hard to believe given the number of those devices in the world.  So what
> is unique about your platform that you are hitting this and no one else
> is?
>
>>> Also, why can't we just remove all of the bounce buffering code in this
>>> older kernel tree?  What is wrong with doing that instead?
>>
>> [Hardeep Sharma]:
>>
>> it's too intrusive — I'd need to backport 40+ dependency patches, and I'm
>> unsure about the instability this might introduce in the block layer on
>> kernel 6.6. Plus, we don't know if it'll work reliably on 32-bit with 1GB+
>> DDR and highmem enabled. So I'd prefer to push just this single tested
>> patch on kernel 6.6 and older affected versions.
>
> Whenever we take one-off patches, 90% of the time it causes problems,
> both with the fact that the patch is usually buggy, AND the fact that it
> now will cause merge conflicts going forward.  40+ patches is nothing in
> stable patch acceptance, please try that first as you want us to be able
> to maintain these kernels well for your devices over time, right?
>
> So please do that first.  Only after proof that that would not work
> should you even consider a one-off patch.
>
>> Removing bounce buffering code from older kernel trees is not feasible for
>> all use cases. Some legacy platforms and drivers still rely on bounce
>> buffering to support DMA operations with highmem pages, especially on
>> 32-bit systems.
>
> Then how was it removed in newer kernels at all?  Did we just drop
> support for that hardware?  What happens when you move to a newer kernel
> on your hardware, does it stop working?  Based on what I have seen with
> some Android devices, they seem to work just fine on Linus's tree today,
> so what is unique about your platform that is going to break and not
> work anymore?
>
>>> And finally, how was this tested?
>>
>> [Hardeep Sharma]:
>>
>> The patch was tested on a 32-bit ARM platform with CONFIG_HIGHMEM enabled
>> and a storage device requiring DMA from lowmem.
>
> So this is for a 32bit ARM system only?  Not 64bit?  If so, why is this
> also being submitted to the Android kernel tree which does not support
> 32bit ARM at all?
>
> And again, does your system not work properly on 6.16?  If not, why not
> fix that first?
>
> thanks,
>
> greg k-h