* [PATCH] btrfs: fix an out-of-bounds access in copy_compressed_data_to_page()
@ 2021-11-12 2:22 Qu Wenruo
2021-11-12 4:17 ` Josef Bacik
2021-11-12 4:26 ` Omar Sandoval
0 siblings, 2 replies; 5+ messages in thread
From: Qu Wenruo @ 2021-11-12 2:22 UTC
To: linux-btrfs; +Cc: Omar Sandoval
[BUG]
The following script can cause btrfs to crash:
mount -o compress-force=lzo $DEV /mnt
dd if=/dev/urandom of=/mnt/foo bs=4k count=1
sync
The calltrace looks like this:
general protection fault, probably for non-canonical address 0xe04b37fccce3b000: 0000 [#1] PREEMPT SMP NOPTI
CPU: 5 PID: 164 Comm: kworker/u20:3 Not tainted 5.15.0-rc7-custom+ #4
Workqueue: btrfs-delalloc btrfs_work_helper [btrfs]
RIP: 0010:__memcpy+0x12/0x20
Call Trace:
lzo_compress_pages+0x236/0x540 [btrfs]
btrfs_compress_pages+0xaa/0xf0 [btrfs]
compress_file_range+0x431/0x8e0 [btrfs]
async_cow_start+0x12/0x30 [btrfs]
btrfs_work_helper+0xf6/0x3e0 [btrfs]
process_one_work+0x294/0x5d0
worker_thread+0x55/0x3c0
kthread+0x140/0x170
ret_from_fork+0x22/0x30
---[ end trace 63c3c0f131e61982 ]---
[CAUSE]
In lzo_compress_pages(), the parameter @out_pages is not only an output
parameter (for the number of compressed pages) but also an input
parameter, giving the maximum number of pages the caller allows us to
utilize.

In commit d4088803f511 ("btrfs: subpage: make lzo_compress_pages()
compatible"), the refactoring stopped taking @out_pages as an input,
thus completely ignoring the limit.

In the compress-force case we can hit incompressible data whose
compressed size goes beyond the page limit, causing the above crash.
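To make the overflow concrete, here is a small userspace sketch (hypothetical helper names, not the kernel code) of the index arithmetic involved: the output array is addressed as out_pages[*cur_out / PAGE_SIZE], so once the running offset crosses max_nr_page * PAGE_SIZE the derived index points past the caller's last slot.

```c
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Page index derived from a running output byte offset, as in lzo.c. */
static unsigned long page_index(unsigned int cur_out)
{
        return cur_out / PAGE_SIZE;
}

/*
 * The guard this kind of fix introduces: with max_nr_page slots
 * provided by the caller, any offset whose page index is >= max_nr_page
 * must be rejected (the kernel returns -E2BIG; -1 stands in here).
 */
static int check_capacity(unsigned int cur_out, unsigned long max_nr_page)
{
        if (page_index(cur_out) >= max_nr_page)
                return -1;
        return 0;
}
```

With max_nr_page == 2, offsets 0 through 2 * PAGE_SIZE - 1 pass the check, while offset 2 * PAGE_SIZE maps to index 2 and is rejected, which is exactly the access the unguarded code performed.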
[FIX]
Save the initial value of @out_pages as @max_nr_page, pass it to
copy_compressed_data_to_page(), and check that we are within the limit
before accessing the pages.
Reported-by: Omar Sandoval <osandov@fb.com>
Fixes: d4088803f511 ("btrfs: subpage: make lzo_compress_pages() compatible")
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/lzo.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index 00cffc183ec0..f410ceabcdbd 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -125,6 +125,7 @@ static inline size_t read_compress_length(const char *buf)
static int copy_compressed_data_to_page(char *compressed_data,
size_t compressed_size,
struct page **out_pages,
+ unsigned long max_nr_page,
u32 *cur_out,
const u32 sectorsize)
{
@@ -132,6 +133,9 @@ static int copy_compressed_data_to_page(char *compressed_data,
u32 orig_out;
struct page *cur_page;
+ if ((*cur_out / PAGE_SIZE) >= max_nr_page)
+ return -E2BIG;
+
/*
* We never allow a segment header crossing sector boundary, previous
* run should ensure we have enough space left inside the sector.
@@ -158,6 +162,9 @@ static int copy_compressed_data_to_page(char *compressed_data,
u32 copy_len = min_t(u32, sectorsize - *cur_out % sectorsize,
orig_out + compressed_size - *cur_out);
+ if ((*cur_out / PAGE_SIZE) >= max_nr_page)
+ return -E2BIG;
+
cur_page = out_pages[*cur_out / PAGE_SIZE];
/* Allocate a new page */
if (!cur_page) {
@@ -195,6 +202,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
struct workspace *workspace = list_entry(ws, struct workspace, list);
const u32 sectorsize = btrfs_sb(mapping->host->i_sb)->sectorsize;
struct page *page_in = NULL;
+ const unsigned long max_nr_page = *out_pages;
int ret = 0;
/* Points to the file offset of input data */
u64 cur_in = start;
@@ -202,6 +210,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
u32 cur_out = 0;
u32 len = *total_out;
+ ASSERT(max_nr_page > 0);
*out_pages = 0;
*total_out = 0;
*total_in = 0;
@@ -237,7 +246,8 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
}
ret = copy_compressed_data_to_page(workspace->cbuf, out_len,
- pages, &cur_out, sectorsize);
+ pages, max_nr_page,
+ &cur_out, sectorsize);
if (ret < 0)
goto out;
--
2.33.1
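The bounded-copy pattern the patch establishes can be sketched in userspace (hypothetical and simplified; the real function also deals with segment headers and sector boundaries): every page access is preceded by a capacity check, so an oversized, incompressible payload fails with -E2BIG instead of indexing past the array.

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096u

/*
 * Simplified model of copy_compressed_data_to_page(): out_pages[] has
 * max_nr_page slots (allocated lazily, NULL when unused), and *cur_out
 * is the running output offset.  The capacity check runs before each
 * page access, mirroring the checks the patch adds.
 */
static int copy_data_bounded(const char *data, size_t len,
                             char **out_pages, unsigned long max_nr_page,
                             unsigned int *cur_out)
{
        while (len > 0) {
                unsigned long idx = *cur_out / PAGE_SIZE;
                unsigned int off = *cur_out % PAGE_SIZE;
                size_t copy = len < PAGE_SIZE - off ? len : PAGE_SIZE - off;

                if (idx >= max_nr_page)
                        return -E2BIG;

                if (!out_pages[idx]) {
                        /* Allocate a new page lazily, as the driver does. */
                        out_pages[idx] = calloc(1, PAGE_SIZE);
                        if (!out_pages[idx])
                                return -ENOMEM;
                }
                memcpy(out_pages[idx] + off, data, copy);
                data += copy;
                len -= copy;
                *cur_out += copy;
        }
        return 0;
}
```

With two page slots, copying 2 * PAGE_SIZE bytes succeeds; one more byte makes the next iteration derive index 2 and fail with -E2BIG, matching the patch's behavior (the caller then frees whatever pages were filled).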
* Re: [PATCH] btrfs: fix an out-of-bounds access in copy_compressed_data_to_page()
2021-11-12 2:22 [PATCH] btrfs: fix an out-of-bounds access in copy_compressed_data_to_page() Qu Wenruo
@ 2021-11-12 4:17 ` Josef Bacik
2021-11-12 4:41 ` Qu Wenruo
2021-11-12 4:26 ` Omar Sandoval
1 sibling, 1 reply; 5+ messages in thread
From: Josef Bacik @ 2021-11-12 4:17 UTC
To: Qu Wenruo; +Cc: linux-btrfs, Omar Sandoval
On Fri, Nov 12, 2021 at 10:22:53AM +0800, Qu Wenruo wrote:
> [...]
> @@ -125,6 +125,7 @@ static inline size_t read_compress_length(const char *buf)
> static int copy_compressed_data_to_page(char *compressed_data,
> size_t compressed_size,
> struct page **out_pages,
> + unsigned long max_nr_page,
If you want to do const down below you should use const here probably? Thanks,
Josef
* Re: [PATCH] btrfs: fix an out-of-bounds access in copy_compressed_data_to_page()
2021-11-12 2:22 [PATCH] btrfs: fix an out-of-bounds access in copy_compressed_data_to_page() Qu Wenruo
2021-11-12 4:17 ` Josef Bacik
@ 2021-11-12 4:26 ` Omar Sandoval
1 sibling, 0 replies; 5+ messages in thread
From: Omar Sandoval @ 2021-11-12 4:26 UTC
To: Qu Wenruo; +Cc: linux-btrfs, Omar Sandoval
On Fri, Nov 12, 2021 at 10:22:53AM +0800, Qu Wenruo wrote:
> [...]
This fixed the issue for me, and it looks correct.
Reviewed-by: Omar Sandoval <osandov@fb.com>
* Re: [PATCH] btrfs: fix an out-of-bounds access in copy_compressed_data_to_page()
2021-11-12 4:17 ` Josef Bacik
@ 2021-11-12 4:41 ` Qu Wenruo
2021-11-12 14:35 ` David Sterba
0 siblings, 1 reply; 5+ messages in thread
From: Qu Wenruo @ 2021-11-12 4:41 UTC
To: Josef Bacik, Qu Wenruo; +Cc: linux-btrfs, Omar Sandoval
On 2021/11/12 12:17, Josef Bacik wrote:
> On Fri, Nov 12, 2021 at 10:22:53AM +0800, Qu Wenruo wrote:
>> [...]
>> static int copy_compressed_data_to_page(char *compressed_data,
>> size_t compressed_size,
>> struct page **out_pages,
>> + unsigned long max_nr_page,
>
> If you want to do const down below you should use const here probably? Thanks,
Right, max_nr_page should also be const.
Thanks for catching this,
Qu
* Re: [PATCH] btrfs: fix an out-of-bounds access in copy_compressed_data_to_page()
2021-11-12 4:41 ` Qu Wenruo
@ 2021-11-12 14:35 ` David Sterba
0 siblings, 0 replies; 5+ messages in thread
From: David Sterba @ 2021-11-12 14:35 UTC
To: Qu Wenruo; +Cc: Josef Bacik, Qu Wenruo, linux-btrfs, Omar Sandoval
On Fri, Nov 12, 2021 at 12:41:37PM +0800, Qu Wenruo wrote:
>
>
> On 2021/11/12 12:17, Josef Bacik wrote:
> > On Fri, Nov 12, 2021 at 10:22:53AM +0800, Qu Wenruo wrote:
> >> [...]
> >> static int copy_compressed_data_to_page(char *compressed_data,
> >> size_t compressed_size,
> >> struct page **out_pages,
> >> + unsigned long max_nr_page,
> >
> > If you want to do const down below you should use const here probably? Thanks,
>
> Right, max_nr_page should also be const.
const for non-pointer parameters does not make much sense; it only
prevents reuse of the variable inside the function.
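The point can be shown with two hypothetical functions: C passes scalars by value, so const on such a parameter is purely local. It only stops the function body from reassigning its own copy, and makes no difference to the caller.

```c
/*
 * Without const, the function is free to clobber its local copy of n,
 * here reusing the parameter itself as the loop counter.
 */
static unsigned long sum_upto(unsigned long n)
{
        unsigned long total = 0;

        while (n > 0)
                total += n--;
        return total;
}

/*
 * With const, that reuse would not compile, so a separate local is
 * needed; the caller's argument is untouched in both versions.
 */
static unsigned long sum_upto_const(const unsigned long n)
{
        unsigned long total = 0;
        unsigned long i;

        for (i = 1; i <= n; i++)
                total += i;
        return total;
}
```

Both return the same result, so const here documents intent at best, which is why it is usually reserved for pointed-to data (e.g. const char *), where it actually constrains the caller-visible contract.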
end of thread, newest: 2021-11-12 14:35 UTC

Thread overview: 5+ messages
2021-11-12 2:22 [PATCH] btrfs: fix an out-of-bounds access in copy_compressed_data_to_page() Qu Wenruo
2021-11-12 4:17 ` Josef Bacik
2021-11-12 4:41 ` Qu Wenruo
2021-11-12 14:35 ` David Sterba
2021-11-12 4:26 ` Omar Sandoval