* [f2fs-dev] [PATCH v2 1/3] f2fs: fix to avoid overflow while left shift operation
@ 2025-08-04  1:43 Chao Yu via Linux-f2fs-devel
  2025-08-04  1:43 ` [f2fs-dev] [PATCH v2 2/3] f2fs: clean up f2fs_truncate_partial_cluster() Chao Yu via Linux-f2fs-devel
  2025-08-04  1:43 ` [f2fs-dev] [PATCH v2 3/3] f2fs: fix to zero data after EOF for compressed file correctly Chao Yu via Linux-f2fs-devel
  0 siblings, 2 replies; 5+ messages in thread

From: Chao Yu via Linux-f2fs-devel @ 2025-08-04  1:43 UTC (permalink / raw)
  To: jaegeuk; +Cc: linux-kernel, linux-f2fs-devel

Cast the type of folio->index from pgoff_t to loff_t to avoid overflow
during the left shift operation.

Fixes: 3265d3db1f16 ("f2fs: support partial truncation on compressed inode")
Signed-off-by: Chao Yu <chao@kernel.org>
---
 fs/f2fs/compress.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 5c1f47e45dab..6cd8902849cf 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1245,7 +1245,7 @@ int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock)
 	for (i = cluster_size - 1; i >= 0; i--) {
 		struct folio *folio = page_folio(rpages[i]);
-		loff_t start = folio->index << PAGE_SHIFT;
+		loff_t start = (loff_t)folio->index << PAGE_SHIFT;

 		if (from <= start) {
 			folio_zero_segment(folio, 0, folio_size(folio));
--
2.49.0

_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 5+ messages in thread
* [f2fs-dev] [PATCH v2 2/3] f2fs: clean up f2fs_truncate_partial_cluster()
  2025-08-04  1:43 [f2fs-dev] [PATCH v2 1/3] f2fs: fix to avoid overflow while left shift operation Chao Yu via Linux-f2fs-devel
@ 2025-08-04  1:43 ` Chao Yu via Linux-f2fs-devel
  2025-08-04  1:43 ` [f2fs-dev] [PATCH v2 3/3] f2fs: fix to zero data after EOF for compressed file correctly Chao Yu via Linux-f2fs-devel
  1 sibling, 0 replies; 5+ messages in thread

From: Chao Yu via Linux-f2fs-devel @ 2025-08-04  1:43 UTC (permalink / raw)
  To: jaegeuk; +Cc: linux-kernel, linux-f2fs-devel

Clean up the code as below:
- avoid the unnecessary "err > 0" check condition
- simplify the if-else condition in the loop
- use "1 << log_cluster_size" instead of F2FS_I(inode)->i_cluster_size

No logic changes.

Signed-off-by: Chao Yu <chao@kernel.org>
---
 fs/f2fs/compress.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 6cd8902849cf..e37a7ed801e5 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1215,9 +1215,11 @@ int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock)
 {
 	void *fsdata = NULL;
 	struct page *pagep;
+	struct page **rpages;
 	int log_cluster_size = F2FS_I(inode)->i_log_cluster_size;
 	pgoff_t start_idx = from >> (PAGE_SHIFT + log_cluster_size) << log_cluster_size;
+	int i;
 	int err;

 	err = f2fs_is_compressed_cluster(inode, start_idx);
@@ -1238,26 +1240,21 @@ int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock)
 	if (err <= 0)
 		return err;

-	if (err > 0) {
-		struct page **rpages = fsdata;
-		int cluster_size = F2FS_I(inode)->i_cluster_size;
-		int i;
-
-		for (i = cluster_size - 1; i >= 0; i--) {
-			struct folio *folio = page_folio(rpages[i]);
-			loff_t start = (loff_t)folio->index << PAGE_SHIFT;
-
-			if (from <= start) {
-				folio_zero_segment(folio, 0, folio_size(folio));
-			} else {
-				folio_zero_segment(folio, from - start,
-						folio_size(folio));
-				break;
-			}
-		}
+	rpages = fsdata;
+
+	for (i = (1 << log_cluster_size) - 1; i >= 0; i--) {
+		struct folio *folio = page_folio(rpages[i]);
+		loff_t start = (loff_t)folio->index << PAGE_SHIFT;

-		f2fs_compress_write_end(inode, fsdata, start_idx, true);
+		if (from > start) {
+			folio_zero_segment(folio, from - start,
+					folio_size(folio));
+			break;
+		}
+		folio_zero_segment(folio, 0, folio_size(folio));
 	}
+
+	f2fs_compress_write_end(inode, fsdata, start_idx, true);
 	return 0;
 }
--
2.49.0
* [f2fs-dev] [PATCH v2 3/3] f2fs: fix to zero data after EOF for compressed file correctly
  2025-08-04  1:43 [f2fs-dev] [PATCH v2 1/3] f2fs: fix to avoid overflow while left shift operation Chao Yu via Linux-f2fs-devel
  2025-08-04  1:43 ` [f2fs-dev] [PATCH v2 2/3] f2fs: clean up f2fs_truncate_partial_cluster() Chao Yu via Linux-f2fs-devel
@ 2025-08-04  1:43 ` Chao Yu via Linux-f2fs-devel
  2025-08-05  2:42 ` Jaegeuk Kim via Linux-f2fs-devel
  1 sibling, 1 reply; 5+ messages in thread

From: Chao Yu via Linux-f2fs-devel @ 2025-08-04  1:43 UTC (permalink / raw)
  To: jaegeuk; +Cc: linux-kernel, linux-f2fs-devel

generic/091 may fail, and it bisects to the bad commit ba8dac350faf
("f2fs: fix to zero post-eof page").

What causes generic/091 to fail is something like Testcase #1 below:
1. write 16k as compressed blocks
2. truncate to 12k
3. truncate to 20k
4. verify data in range of [12k, 16k]; however, data is not zero as
   expected

Script of Testcase #1:
mkfs.f2fs -f -O extra_attr,compression /dev/vdb
mount -t f2fs -o compress_extension=* /dev/vdb /mnt/f2fs
dd if=/dev/zero of=/mnt/f2fs/file bs=12k count=1
dd if=/dev/random of=/mnt/f2fs/file bs=4k count=1 seek=3 conv=notrunc
sync
truncate -s $((12*1024)) /mnt/f2fs/file
truncate -s $((20*1024)) /mnt/f2fs/file
dd if=/mnt/f2fs/file of=/mnt/f2fs/data bs=4k count=1 skip=3
od /mnt/f2fs/data
umount /mnt/f2fs

Analysis:
in step 2), we redirty all data pages from #0 to #3 in the compressed
cluster, and zero page #3,
in step 3), f2fs_setattr() calls f2fs_zero_post_eof_page() to drop all
page cache past EOF, including the dirtied page #3,
in step 4), when we read data from page #3, it decompresses the cluster
and fills page #3 with stale data; finally, we hit the non-zeroed data
past EOF.

However, the commit ba8dac350faf ("f2fs: fix to zero post-eof page") just
makes the issue easy to reproduce; w/o the commit, the bug can still be
reproduced w/ Testcase #2 below:
1. write 16k as compressed blocks
2. truncate to 8k
3. truncate to 12k
4. truncate to 20k
5. verify data in range of [12k, 16k]; however, data is not zero as
   expected

Script of Testcase #2:
mkfs.f2fs -f -O extra_attr,compression /dev/vdb
mount -t f2fs -o compress_extension=* /dev/vdb /mnt/f2fs
dd if=/dev/zero of=/mnt/f2fs/file bs=12k count=1
dd if=/dev/random of=/mnt/f2fs/file bs=4k count=1 seek=3 conv=notrunc
sync
truncate -s $((8*1024)) /mnt/f2fs/file
truncate -s $((12*1024)) /mnt/f2fs/file
truncate -s $((20*1024)) /mnt/f2fs/file
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/f2fs/file of=/mnt/f2fs/data bs=4k count=1 skip=3
od /mnt/f2fs/data
umount /mnt/f2fs

Analysis:
in step 2), we redirty all data pages from #0 to #3 in the compressed
cluster, and zero pages #2 and #3,
in step 3), we truncate page #3 in the page cache,
in step 4), we expand the file size,
in step 5), we hit random data past EOF for the same reason as in
Testcase #1.

Root Cause:
In f2fs_truncate_partial_cluster(), after we truncate a partial data
block in a compressed cluster, all pages in the cluster, including the
one past EOF, will be dirtied; after another truncation, the dirty page
past EOF will be dropped. However, the on-disk compressed cluster is
still valid and includes invalid data past EOF, resulting in exposing
previous data past EOF while reading.

Fix:
In f2fs_truncate_partial_cluster(), make the changes below to fix it:
- call filemap_write_and_wait_range() to flush dirty pages
- call truncate_pagecache() to drop pages or zero the partial page past
  EOF
- call f2fs_do_truncate_blocks() to truncate the non-compressed cluster
  to the last valid block

Fixes: 3265d3db1f16 ("f2fs: support partial truncation on compressed inode")
Reported-by: Jan Prusakowski <jprusakowski@google.com>
Signed-off-by: Chao Yu <chao@kernel.org>
---
v2:
- should dirty & flush all pages in cluster and truncate blocks post eof
  later
 fs/f2fs/compress.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index e37a7ed801e5..6ad8d3bc6df7 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1245,17 +1245,25 @@ int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock)
 	for (i = (1 << log_cluster_size) - 1; i >= 0; i--) {
 		struct folio *folio = page_folio(rpages[i]);
 		loff_t start = (loff_t)folio->index << PAGE_SHIFT;
+		loff_t offset = from > start ? from - start : 0;

-		if (from > start) {
-			folio_zero_segment(folio, from - start,
-					folio_size(folio));
+		folio_zero_segment(folio, offset, folio_size(folio));
+
+		if (from >= start)
 			break;
-		}
-		folio_zero_segment(folio, 0, folio_size(folio));
 	}

 	f2fs_compress_write_end(inode, fsdata, start_idx, true);
-	return 0;
+
+	err = filemap_write_and_wait_range(inode->i_mapping,
+			round_down(from, 1 << log_cluster_size << PAGE_SHIFT),
+			LLONG_MAX);
+	if (err)
+		return err;
+
+	truncate_pagecache(inode, from);
+
+	return f2fs_do_truncate_blocks(inode, round_up(from, PAGE_SIZE), lock);
 }

 static int f2fs_write_compressed_pages(struct compress_ctx *cc,
--
2.49.0
* Re: [f2fs-dev] [PATCH v2 3/3] f2fs: fix to zero data after EOF for compressed file correctly
  2025-08-04  1:43 ` [f2fs-dev] [PATCH v2 3/3] f2fs: fix to zero data after EOF for compressed file correctly Chao Yu via Linux-f2fs-devel
@ 2025-08-05  2:42 ` Jaegeuk Kim via Linux-f2fs-devel
  2025-08-05  2:51   ` Chao Yu via Linux-f2fs-devel
  0 siblings, 1 reply; 5+ messages in thread

From: Jaegeuk Kim via Linux-f2fs-devel @ 2025-08-05  2:42 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-kernel, linux-f2fs-devel

Chao, it seems you made a clean up before this? Can you post it first?

On 08/04, Chao Yu wrote:
> generic/091 may fail, then it bisects to the bad commit ba8dac350faf
> ("f2fs: fix to zero post-eof page").
>
> What will cause generic/091 to fail is something like below Testcase #1:
> 1. write 16k as compressed blocks
> 2. truncate to 12k
> 3. truncate to 20k
> 4. verify data in range of [12k, 16k], however data is not zero as
> expected
>
> Script of Testcase #1
> mkfs.f2fs -f -O extra_attr,compression /dev/vdb
> mount -t f2fs -o compress_extension=* /dev/vdb /mnt/f2fs
> dd if=/dev/zero of=/mnt/f2fs/file bs=12k count=1
> dd if=/dev/random of=/mnt/f2fs/file bs=4k count=1 seek=3 conv=notrunc
> sync
> truncate -s $((12*1024)) /mnt/f2fs/file
> truncate -s $((20*1024)) /mnt/f2fs/file
> dd if=/mnt/f2fs/file of=/mnt/f2fs/data bs=4k count=1 skip=3
> od /mnt/f2fs/data
> umount /mnt/f2fs
>
> Analisys:
> in step 2), we will redirty all data pages from #0 to #3 in compressed
> cluster, and zero page #3,
> in step 3), f2fs_setattr() will call f2fs_zero_post_eof_page() to drop
> all page cache post eof, includeing dirtied page #3,
> in step 4) when we read data from page #3, it will decompressed cluster
> and extra random data to page #3, finally, we hit the non-zeroed data
> post eof.
>
> However, the commit ba8dac350faf ("f2fs: fix to zero post-eof page") just
> let the issue be reproduced easily, w/o the commit, it can reproduce this
> bug w/ below Testcase #2:
> 1. write 16k as compressed blocks
> 2. truncate to 8k
> 3. truncate to 12k
> 4. truncate to 20k
> 5. verify data in range of [12k, 16k], however data is not zero as
> expected
>
> Script of Testcase #2
> mkfs.f2fs -f -O extra_attr,compression /dev/vdb
> mount -t f2fs -o compress_extension=* /dev/vdb /mnt/f2fs
> dd if=/dev/zero of=/mnt/f2fs/file bs=12k count=1
> dd if=/dev/random of=/mnt/f2fs/file bs=4k count=1 seek=3 conv=notrunc
> sync
> truncate -s $((8*1024)) /mnt/f2fs/file
> truncate -s $((12*1024)) /mnt/f2fs/file
> truncate -s $((20*1024)) /mnt/f2fs/file
> echo 3 > /proc/sys/vm/drop_caches
> dd if=/mnt/f2fs/file of=/mnt/f2fs/data bs=4k count=1 skip=3
> od /mnt/f2fs/data
> umount /mnt/f2fs
>
> Anlysis:
> in step 2), we will redirty all data pages from #0 to #3 in compressed
> cluster, and zero page #2 and #3,
> in step 3), we will truncate page #3 in page cache,
> in step 4), expand file size,
> in step 5), hit random data post eof w/ the same reason in Testcase #1.
>
> Root Cause:
> In f2fs_truncate_partial_cluster(), after we truncate partial data block
> on compressed cluster, all pages in cluster including the one post eof
> will be dirtied, after another tuncation, dirty page post eof will be
> dropped, however on-disk compressed cluster is still valid, it includes
> invalid data post eof, result in exposing previous data post eof while
> reading.
>
> Fix:
> In f2fs_truncate_partial_cluster(), let change as below to fix:
> - call filemap_write_and_wait_range() to flush dirty page
> - call truncate_pagecache() to drop pages or zero partial page post eof
> - call f2fs_do_truncate_blocks() to truncate non-compress cluster to
> last vali block
>
> Fixes: 3265d3db1f16 ("f2fs: support partial truncation on compressed inode")
> Reported-by: Jan Prusakowski <jprusakowski@google.com>
> Signed-off-by: Chao Yu <chao@kernel.org>
> ---
> v2:
> - should dirty & flush all pages in cluster and truncate blocks post eof
> later
> fs/f2fs/compress.c | 20 ++++++++++++++------
> 1 file changed, 14 insertions(+), 6 deletions(-)
>
> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> index e37a7ed801e5..6ad8d3bc6df7 100644
> --- a/fs/f2fs/compress.c
> +++ b/fs/f2fs/compress.c
> @@ -1245,17 +1245,25 @@ int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock)
> 	for (i = (1 << log_cluster_size) - 1; i >= 0; i--) {
> 		struct folio *folio = page_folio(rpages[i]);
> 		loff_t start = (loff_t)folio->index << PAGE_SHIFT;
> +		loff_t offset = from > start ? from - start : 0;
>
> -		if (from > start) {
> -			folio_zero_segment(folio, from - start,
> -					folio_size(folio));
> +		folio_zero_segment(folio, offset, folio_size(folio));
> +
> +		if (from >= start)
> 			break;
> -		}
> -		folio_zero_segment(folio, 0, folio_size(folio));
> 	}
>
> 	f2fs_compress_write_end(inode, fsdata, start_idx, true);
> -	return 0;
> +
> +	err = filemap_write_and_wait_range(inode->i_mapping,
> +			round_down(from, 1 << log_cluster_size << PAGE_SHIFT),
> +			LLONG_MAX);
> +	if (err)
> +		return err;
> +
> +	truncate_pagecache(inode, from);
> +
> +	return f2fs_do_truncate_blocks(inode, round_up(from, PAGE_SIZE), lock);
> }
>
> static int f2fs_write_compressed_pages(struct compress_ctx *cc,
> --
> 2.49.0
* Re: [f2fs-dev] [PATCH v2 3/3] f2fs: fix to zero data after EOF for compressed file correctly
  2025-08-05  2:42 ` Jaegeuk Kim via Linux-f2fs-devel
@ 2025-08-05  2:51 ` Chao Yu via Linux-f2fs-devel
  0 siblings, 0 replies; 5+ messages in thread

From: Chao Yu via Linux-f2fs-devel @ 2025-08-05  2:51 UTC (permalink / raw)
  To: Jaegeuk Kim; +Cc: linux-kernel, linux-f2fs-devel

Jaegeuk, sure, let me change the order.

On 8/5/25 10:42, Jaegeuk Kim wrote:
> Chao, it seems you made a clean up before this? Can you post it first?
>
> On 08/04, Chao Yu wrote:
>> generic/091 may fail, then it bisects to the bad commit ba8dac350faf
>> ("f2fs: fix to zero post-eof page").
>>
>> What will cause generic/091 to fail is something like below Testcase #1:
>> 1. write 16k as compressed blocks
>> 2. truncate to 12k
>> 3. truncate to 20k
>> 4. verify data in range of [12k, 16k], however data is not zero as
>> expected
>>
>> Script of Testcase #1
>> mkfs.f2fs -f -O extra_attr,compression /dev/vdb
>> mount -t f2fs -o compress_extension=* /dev/vdb /mnt/f2fs
>> dd if=/dev/zero of=/mnt/f2fs/file bs=12k count=1
>> dd if=/dev/random of=/mnt/f2fs/file bs=4k count=1 seek=3 conv=notrunc
>> sync
>> truncate -s $((12*1024)) /mnt/f2fs/file
>> truncate -s $((20*1024)) /mnt/f2fs/file
>> dd if=/mnt/f2fs/file of=/mnt/f2fs/data bs=4k count=1 skip=3
>> od /mnt/f2fs/data
>> umount /mnt/f2fs
>>
>> Analisys:
>> in step 2), we will redirty all data pages from #0 to #3 in compressed
>> cluster, and zero page #3,
>> in step 3), f2fs_setattr() will call f2fs_zero_post_eof_page() to drop
>> all page cache post eof, includeing dirtied page #3,
>> in step 4) when we read data from page #3, it will decompressed cluster
>> and extra random data to page #3, finally, we hit the non-zeroed data
>> post eof.
>>
>> However, the commit ba8dac350faf ("f2fs: fix to zero post-eof page") just
>> let the issue be reproduced easily, w/o the commit, it can reproduce this
>> bug w/ below Testcase #2:
>> 1. write 16k as compressed blocks
>> 2. truncate to 8k
>> 3. truncate to 12k
>> 4. truncate to 20k
>> 5. verify data in range of [12k, 16k], however data is not zero as
>> expected
>>
>> Script of Testcase #2
>> mkfs.f2fs -f -O extra_attr,compression /dev/vdb
>> mount -t f2fs -o compress_extension=* /dev/vdb /mnt/f2fs
>> dd if=/dev/zero of=/mnt/f2fs/file bs=12k count=1
>> dd if=/dev/random of=/mnt/f2fs/file bs=4k count=1 seek=3 conv=notrunc
>> sync
>> truncate -s $((8*1024)) /mnt/f2fs/file
>> truncate -s $((12*1024)) /mnt/f2fs/file
>> truncate -s $((20*1024)) /mnt/f2fs/file
>> echo 3 > /proc/sys/vm/drop_caches
>> dd if=/mnt/f2fs/file of=/mnt/f2fs/data bs=4k count=1 skip=3
>> od /mnt/f2fs/data
>> umount /mnt/f2fs
>>
>> Anlysis:
>> in step 2), we will redirty all data pages from #0 to #3 in compressed
>> cluster, and zero page #2 and #3,
>> in step 3), we will truncate page #3 in page cache,
>> in step 4), expand file size,
>> in step 5), hit random data post eof w/ the same reason in Testcase #1.
>>
>> Root Cause:
>> In f2fs_truncate_partial_cluster(), after we truncate partial data block
>> on compressed cluster, all pages in cluster including the one post eof
>> will be dirtied, after another tuncation, dirty page post eof will be
>> dropped, however on-disk compressed cluster is still valid, it includes
>> invalid data post eof, result in exposing previous data post eof while
>> reading.
>>
>> Fix:
>> In f2fs_truncate_partial_cluster(), let change as below to fix:
>> - call filemap_write_and_wait_range() to flush dirty page
>> - call truncate_pagecache() to drop pages or zero partial page post eof
>> - call f2fs_do_truncate_blocks() to truncate non-compress cluster to
>> last vali block
>>
>> Fixes: 3265d3db1f16 ("f2fs: support partial truncation on compressed inode")
>> Reported-by: Jan Prusakowski <jprusakowski@google.com>
>> Signed-off-by: Chao Yu <chao@kernel.org>
>> ---
>> v2:
>> - should dirty & flush all pages in cluster and truncate blocks post eof
>> later
>> fs/f2fs/compress.c | 20 ++++++++++++++------
>> 1 file changed, 14 insertions(+), 6 deletions(-)
>>
>> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
>> index e37a7ed801e5..6ad8d3bc6df7 100644
>> --- a/fs/f2fs/compress.c
>> +++ b/fs/f2fs/compress.c
>> @@ -1245,17 +1245,25 @@ int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock)
>> 	for (i = (1 << log_cluster_size) - 1; i >= 0; i--) {
>> 		struct folio *folio = page_folio(rpages[i]);
>> 		loff_t start = (loff_t)folio->index << PAGE_SHIFT;
>> +		loff_t offset = from > start ? from - start : 0;
>>
>> -		if (from > start) {
>> -			folio_zero_segment(folio, from - start,
>> -					folio_size(folio));
>> +		folio_zero_segment(folio, offset, folio_size(folio));
>> +
>> +		if (from >= start)
>> 			break;
>> -		}
>> -		folio_zero_segment(folio, 0, folio_size(folio));
>> 	}
>>
>> 	f2fs_compress_write_end(inode, fsdata, start_idx, true);
>> -	return 0;
>> +
>> +	err = filemap_write_and_wait_range(inode->i_mapping,
>> +			round_down(from, 1 << log_cluster_size << PAGE_SHIFT),
>> +			LLONG_MAX);
>> +	if (err)
>> +		return err;
>> +
>> +	truncate_pagecache(inode, from);
>> +
>> +	return f2fs_do_truncate_blocks(inode, round_up(from, PAGE_SIZE), lock);
>> }
>>
>> static int f2fs_write_compressed_pages(struct compress_ctx *cc,
>> --
>> 2.49.0