* Issue with JFFS2 and a_ops->dirty_folio
@ 2024-06-13 7:05 Jean-Michel Hautbois
2024-06-13 11:20 ` Christoph Hellwig
0 siblings, 1 reply; 4+ messages in thread
From: Jean-Michel Hautbois @ 2024-06-13 7:05 UTC (permalink / raw)
To: linux-fsdevel, linux-mm, linux-mtd; +Cc: willy, Andrew Morton, linux-m68k
Hi everyone!
I am currently working on a Coldfire (MPC54418) and pretty much everything
goes well, except that I can only execute one command from user space
before getting a segmentation fault in the do_exit() syscall.
I tried to debug it and it appears to be failing in folio_mark_dirty()
on the 'return mapping->a_ops->dirty_folio(mapping, folio);' call.
I added a VM_BUG_ON_FOLIO():
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index c2a48592c258..122ca2253263 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2857,9 +2857,9 @@ bool folio_mark_dirty(struct folio *folio)
 	 */
 	if (folio_test_reclaim(folio))
 		folio_clear_reclaim(folio);
-	if (mapping->a_ops->dirty_folio)
-		return mapping->a_ops->dirty_folio(mapping, folio);
-	return noop_dirty_folio(mapping, folio);
+
+	VM_BUG_ON_FOLIO(!mapping->a_ops->dirty_folio, folio);
+	return mapping->a_ops->dirty_folio(mapping, folio);
 	}
 	return noop_dirty_folio(mapping, folio);
And it appears that this is because it tries unconditionally to call the
a_ops->dirty_folio() function in JFFS2. The bug report is at the bottom
of this mail. We see: aops:0x41340ae0, which, in my build, points to
jffs2_file_address_operations. And indeed, there is no .dirty_folio, nor
anything relating to folios, in there.
I don't really know how to solve this though, as I am no expert in this
specific part at all!
Thanks for your answers,
BR
JM
---
bash-5.2# ls
bin    etc    lib32  mnt    root   sys    usr
data   home
[    9.730000] page: refcount:2 mapcount:1 mapping:42097964 index:0x97 pfn:0x27f01
[    9.740000] aops:0x41340ae0 ino:b6 dentry name:"libc.so.6"
[    9.740000] flags: 0x28(uptodate|lru|zone=0)
[    9.750000] raw: 00000028 4fed39bc 4ffd9d24 42097964 00000097 00000000 00000000 00000002
[    9.760000] raw: 4fe02000
[    9.760000] page dumped because: VM_BUG_ON_FOLIO(!mapping->a_ops->dirty_folio)
[    9.770000] kernel BUG at mm/page-writeback.c:2861!
[ 9.770000] *** TRAP #7 *** FORMAT=4
[ 9.770000] Current process id is 24
[ 9.770000] BAD KERNEL TRAP: 00000000
[ 9.770000] PC: [<41058ff2>] folio_mark_dirty+0x68/0x82
[ 9.770000] SR: 2010 SP: 41dcddb4 a2: 418fb710
[ 9.770000] d0: 00000027 d1: 0000009e d2: 4ffd9c24 d3: 60160000
[ 9.770000] d4: 4fe03419 d5: 4ffd9c24 a0: 41dcdd00 a1: 414491f0
[ 9.770000] Process ls (pid: 24, task=418fb710)
[ 9.770000] Frame format=4 eff addr=413d4c3c pc=413dbd16
[ 9.770000] Stack from 41dcddf0:
[    9.770000] 00000b2d 413dea77 413dbce8 4fe03419 41dcdf1a 410750ee 4ffd9c24 00000000
[    9.770000] ffffffff fffffffe 41dcde9e 60164000 00000001 41317ea4 41074d0c 41078bb0
[    9.770000] 00000001 41d67034 ffffffff 41dd6600 60164000 41dd6600 41d6e3d0 41dcc000
[    9.770000] 41d6e3fc 00000000 00000000 00000000 00000000 41dcdf5c 410753f2 41dcdf1a
[    9.770000] 41d67034 60160000 60164000 41dcde9e 41d6e3fc 41dcdef6 41dcdf1a 4102a940
[    9.770000] 41d6e3d4 41d67344 41d6e3d0 41dc0000 00000100 00000003 4107ad24 41dcdf1a
[ 9.770000] Call Trace: [<410750ee>] unmap_page_range+0x3e2/0x672
[ 9.770000] [<41317ea4>] mas_find+0x0/0xfa
[ 9.770000] [<41074d0c>] unmap_page_range+0x0/0x672
[ 9.770000] [<41078bb0>] vma_next+0x0/0x14
[ 9.770000] [<410753f2>] unmap_vmas+0x74/0x98
[ 9.770000] [<4102a940>] up_read+0x0/0x34
[ 9.770000] [<4107ad24>] exit_mmap+0xd4/0x1c0
[ 9.770000] [<410093f8>] arch_local_irq_enable+0x0/0xc
[ 9.770000] [<410093ec>] arch_local_irq_disable+0x0/0xc
[ 9.770000] [<41006bfa>] __mmput+0x2e/0x86
[ 9.770000] [<4100a168>] do_exit+0x21e/0x6f2
[ 9.770000] [<4100a7ba>] sys_exit_group+0x0/0x14
[ 9.770000] [<4100a778>] do_group_exit+0x22/0x64
[ 9.770000] [<4100a7ce>] pid_child_should_wake+0x0/0x56
[ 9.770000] [<410058c8>] system_call+0x54/0xa8
[ 9.770000]
[    9.770000] Code: bd16 4879 413d 4c3c 4eb9 4132 b30a 4e47 <2f02> 2f0b 4e90 508f 241f 265f 4e75 2f02 42a7 4eb9 4105 8f56 60ec 2f0b 266f 0008
[    9.770000] Disabling lock debugging due to kernel taint
[    9.770000] note: ls[24] exited with irqs disabled
[    9.780000] Fixing recursive fault but reboot is needed!
* Re: Issue with JFFS2 and a_ops->dirty_folio
2024-06-13 7:05 Issue with JFFS2 and a_ops->dirty_folio Jean-Michel Hautbois
@ 2024-06-13 11:20 ` Christoph Hellwig
2024-06-13 12:57 ` Jean-Michel Hautbois
2024-06-14 10:21 ` Jean-Michel Hautbois
0 siblings, 2 replies; 4+ messages in thread
From: Christoph Hellwig @ 2024-06-13 11:20 UTC (permalink / raw)
To: Jean-Michel Hautbois
Cc: linux-fsdevel, linux-mm, linux-mtd, willy, Andrew Morton,
linux-m68k
On Thu, Jun 13, 2024 at 09:05:17AM +0200, Jean-Michel Hautbois wrote:
> Hi everyone!
>
> I am currently working on a Coldfire (MPC54418) and pretty much everything
> goes well, except that I can only execute one command from user space before
> getting a segmentation fault in the do_exit() syscall.
Looks like jffs2 is simply missing a dirty_folio implementation. The
simple filemap_dirty_folio should do the job, please try the patch
below:
diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
index 62ea76da7fdf23..7124cbad6c35ae 100644
--- a/fs/jffs2/file.c
+++ b/fs/jffs2/file.c
@@ -19,6 +19,7 @@
 #include <linux/highmem.h>
 #include <linux/crc32.h>
 #include <linux/jffs2.h>
+#include <linux/writeback.h>
 #include "nodelist.h"
 
 static int jffs2_write_end(struct file *filp, struct address_space *mapping,
@@ -75,6 +76,7 @@ const struct address_space_operations jffs2_file_address_operations =
 	.read_folio = jffs2_read_folio,
 	.write_begin = jffs2_write_begin,
 	.write_end = jffs2_write_end,
+	.dirty_folio = filemap_dirty_folio,
 };
 
 static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg)
* Re: Issue with JFFS2 and a_ops->dirty_folio
2024-06-13 11:20 ` Christoph Hellwig
@ 2024-06-13 12:57 ` Jean-Michel Hautbois
2024-06-14 10:21 ` Jean-Michel Hautbois
1 sibling, 0 replies; 4+ messages in thread
From: Jean-Michel Hautbois @ 2024-06-13 12:57 UTC (permalink / raw)
To: Christoph Hellwig
Cc: linux-fsdevel, linux-mm, linux-mtd, willy, Andrew Morton,
linux-m68k
Hello Christoph,
On 13/06/2024 13:20, Christoph Hellwig wrote:
> On Thu, Jun 13, 2024 at 09:05:17AM +0200, Jean-Michel Hautbois wrote:
>> Hi everyone!
>>
>> I am currently working on a Coldfire (MPC54418) and pretty much everything
>> goes well, except that I can only execute one command from user space before
>> getting a segmentation fault in the do_exit() syscall.
>
> Looks like jffs2 is simply missing a dirty_folio implementation. The
> simple filemap_dirty_folio should do the job, please try the patch
> below:
>
>
> diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
> index 62ea76da7fdf23..7124cbad6c35ae 100644
> --- a/fs/jffs2/file.c
> +++ b/fs/jffs2/file.c
> @@ -19,6 +19,7 @@
> #include <linux/highmem.h>
> #include <linux/crc32.h>
> #include <linux/jffs2.h>
> +#include <linux/writeback.h>
> #include "nodelist.h"
>
> static int jffs2_write_end(struct file *filp, struct address_space *mapping,
> @@ -75,6 +76,7 @@ const struct address_space_operations jffs2_file_address_operations =
> .read_folio = jffs2_read_folio,
> .write_begin = jffs2_write_begin,
> .write_end = jffs2_write_end,
> + .dirty_folio = filemap_dirty_folio,
> };
>
> static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg)
Thanks, I did implement this one, but now I have another weird issue; I
don't know if it is related...
When the bash command is launched (my init command is init=/bin/bash) I
can launch a first command (say, ls for instance) and it works fine. But
a second call to that same command, or to any other one, just returns as
if nothing was done... And I can't even debug it, strace fails too:
execve("/bin/ls", ["/bin/ls"], 0xbfb31ef0 /* 5 vars */) = 0
brk(NULL) = 0x2ab7c000
atomic_barrier() = 0
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or
directory)
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_LARGEFILE|O_CLOEXEC) =
-1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/libresolv.so.2", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
read(3,
"\177ELF\1\2\1\0\0\0\0\0\0\0\0\0\0\3\0\4\0\0\0\1\0\0\0\0\0\0\0004"...,
512) = 512
statx(3, "", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT|AT_EMPTY_PATH,
STATX_BASIC_STATS,
{stx_mask=STATX_TYPE|STATX_MODE|STATX_NLINK|STATX_UID|STATX_GID|STATX_MTIME|STATX_CTIME|STATX_INO|STATX_SIZE|STATX_BLOCKS|STATX_
MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0755, stx_size=43120, ...}) = 0
mmap2(NULL, 59888, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0)
= 0x60022000
[   15.830000] random: crng init done
mmap2(0x6002c000, 16384, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8000) = 0x6002c000
mmap2(0x60030000, 2544, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x60030000
close(3) = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
openat(AT_FDCWD, "/lib/libc.so.6", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
read(3,
"\177ELF\1\2\1\0\0\0\0\0\0\0\0\0\0\3\0\4\0\0\0\1\0\2\324\n\0\0\0004"...,
512) = 512
statx(3, "", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT|AT_EMPTY_PATH,
STATX_BASIC_STATS,
{stx_mask=STATX_TYPE|STATX_MODE|STATX_NLINK|STATX_UID|STATX_GID|STATX_MTIME|STATX_CTIME|STATX_INO|STATX_SIZE|STATX_BLOCKS|STATX_
MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0755, stx_size=1257660, ...}) = 0
mmap2(NULL, 1290920, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3,
0) = 0x60032000
mmap2(0x6015e000, 24576, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x12c000) = 0x6015e000
mmap2(0x60164000, 37544, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x60164000
close(3) = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
atomic_barrier() = 0
mmap2(NULL, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
0) = 0x6016e000
set_thread_area(0x601759c0) = 0
get_thread_area() = 0x601759c0
atomic_barrier() = 0
set_tid_address(0x6016e548) = 28
set_robust_list(0x6016e54c, 12) = 0
mprotect(0x6015e000, 8192, PROT_READ) = 0
mprotect(0x6002c000, 8192, PROT_READ) = 0
mprotect(0x2ab72000, 8192, PROT_READ) = 0
mprotect(0x6001e000, 8192, PROT_READ) = 0
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0xc00815f4} ---
+++ killed by SIGSEGV +++
I suppose this can be related to the ELF_DT_DYN_BASE address, but I
can't see what is going on yet.
Thanks,
JM
* Re: Issue with JFFS2 and a_ops->dirty_folio
2024-06-13 11:20 ` Christoph Hellwig
2024-06-13 12:57 ` Jean-Michel Hautbois
@ 2024-06-14 10:21 ` Jean-Michel Hautbois
1 sibling, 0 replies; 4+ messages in thread
From: Jean-Michel Hautbois @ 2024-06-14 10:21 UTC (permalink / raw)
To: Christoph Hellwig
Cc: linux-fsdevel, linux-mm, linux-mtd, willy, Andrew Morton,
linux-m68k
Hi there,
On 13/06/2024 13:20, Christoph Hellwig wrote:
> On Thu, Jun 13, 2024 at 09:05:17AM +0200, Jean-Michel Hautbois wrote:
>> Hi everyone!
>>
>> I am currently working on a Coldfire (MPC54418) and pretty much everything
>> goes well, except that I can only execute one command from user space before
>> getting a segmentation fault in the do_exit() syscall.
>
> Looks like jffs2 is simply missing a dirty_folio implementation. The
> simple filemap_dirty_folio should do the job, please try the patch
> below:
>
>
> diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
> index 62ea76da7fdf23..7124cbad6c35ae 100644
> --- a/fs/jffs2/file.c
> +++ b/fs/jffs2/file.c
> @@ -19,6 +19,7 @@
> #include <linux/highmem.h>
> #include <linux/crc32.h>
> #include <linux/jffs2.h>
> +#include <linux/writeback.h>
> #include "nodelist.h"
>
> static int jffs2_write_end(struct file *filp, struct address_space *mapping,
> @@ -75,6 +76,7 @@ const struct address_space_operations jffs2_file_address_operations =
> .read_folio = jffs2_read_folio,
> .write_begin = jffs2_write_begin,
> .write_end = jffs2_write_end,
> + .dirty_folio = filemap_dirty_folio,
> };
>
> static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg)
I managed to modify my rootfs and I am now using UBIFS. It is far more
complete, and indeed an error occurs in the ubifs_dirty_folio() call. I
don't know if this is of any interest, but the volume is mounted
read-only on purpose:
[    4.490000] ubi0: attaching mtd5
[    4.500000] ubi0: MTD device 5 is write-protected, attach in read-only mode
[    4.890000] ubi0: scanning is finished
[    4.940000] ubi0 warning: autoresize: skip auto-resize because of R/O mode
[    4.950000] ubi0: attached mtd5 (name "root2", size 64 MiB)
[    4.960000] ubi0: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[    4.960000] ubi0: min./max. I/O unit sizes: 2048/2048, sub-page size 2048
[    4.970000] ubi0: VID header offset: 2048 (aligned 2048), data offset: 4096
[    4.980000] ubi0: good PEBs: 512, bad PEBs: 0, corrupted PEBs: 0
[    4.980000] ubi0: user volume: 1, internal volumes: 1, max. volumes count: 128
[    4.990000] ubi0: max/mean erase counter: 0/0, WL threshold: 4096, image sequence number: 1619801263
[    5.000000] ubi0: available PEBs: 215, total reserved PEBs: 297, PEBs reserved for bad PEB handling: 80
[    5.010000] ubi0: background thread "ubi_bgt0d" started, PID 25
[    5.020000] UBIFS (ubi0:0): read-only UBI device
[    5.030000] UBIFS (ubi0:0): Mounting in unauthenticated mode
[    5.130000] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0, name "root2", R/O mode
[    5.140000] UBIFS (ubi0:0): LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    5.150000] UBIFS (ubi0:0): FS size: 25649152 bytes (24 MiB, 202 LEBs), max 2048 LEBs, journal size 9023488 bytes (8 MiB, 72 LEBs)
[    5.160000] UBIFS (ubi0:0): reserved for root: 0 bytes (0 KiB)
[    5.160000] UBIFS (ubi0:0): media format: w4/r0 (latest is w5/r0), UUID 43E9BD53-C843-4AEE-897E-5684A795D2C8, small LPT model
[    5.180000] VFS: Mounted root (ubifs filesystem) readonly on device 0:13.
And when I execute a command:
bash-5.2# ls
bin    etc    lib32  mnt    root   sys
[   12.560000] UBIFS error (ubi0:0 pid 26): ubifs_assert_failed: UBIFS assert failed: ret == false, in fs/ubifs/file.c:1477
[   12.570000] UBIFS warning (ubi0:0 pid 26): ubifs_ro_mode: switched to read-only mode, error -22
[   12.580000] CPU: 0 PID: 26 Comm: ls Not tainted 6.10.0-rc3stmark2-001-00024-gc73f39277b30-dirty #455
[ 12.580000] Stack from 42937dac:
[   12.580000] 42937dac 414a3f4c 414a3f4c 4fed4a01 41dda000 420885cc 413e2b20 414a3f4c
[   12.580000] 41115112 41dda000 ffffffea 414b3c43 000005c5 4fed4a98 6015e000 4105e058
[   12.580000] 420885cc 4fed4a98 415ed419 42937f12 4107ae8a 4fed4a98 00000000 ffffffff
[   12.580000] fffffffe 42937e96 60162000 00000001 413c03ce 4107aa9e 4107ea74 00000001
[   12.580000] 42909034 ffffffff 42940600 60162000 42940600 428c0450 42936000 428c048c
[   12.580000] 00000000 00000000 00000000 00000000 42937f54 4107b19c 42937f12 42909034
[ 12.580000] Call Trace: [<413e2b20>] dump_stack+0xc/0x10
[ 12.580000] [<41115112>] ubifs_dirty_folio+0x3e/0x4a
[ 12.580000] [<4105e058>] folio_mark_dirty+0x6e/0x82
[ 12.580000] [<4107ae8a>] unmap_page_range+0x3ec/0x68a
[ 12.580000] [<413c03ce>] mas_find+0x0/0xfa
[ 12.580000] [<4107aa9e>] unmap_page_range+0x0/0x68a
[ 12.580000] [<4107ea74>] vma_next+0x0/0x14
[ 12.580000] [<4107b19c>] unmap_vmas+0x74/0x98
[ 12.580000] [<4102cc68>] up_read+0x0/0x2a
[ 12.580000] [<41080c14>] exit_mmap+0xd4/0x1d2
[ 12.580000] [<410098ff>] will_become_orphaned_pgrp+0x27/0x7c
[ 12.580000] [<413e6eb4>] _raw_spin_unlock_irq+0x0/0x14
[ 12.580000] [<41006db0>] __mmput+0x2e/0xa0
[ 12.580000] [<4100a7ae>] do_exit+0x264/0x764
[ 12.580000] [<413e6ec4>] _raw_spin_unlock_irq+0x10/0x14
[ 12.580000] [<4100adfe>] do_group_exit+0x26/0x78
[ 12.580000] [<4100ae50>] sys_exit_group+0x0/0x14
[ 12.580000] [<4100ae64>] pid_child_should_wake+0x0/0x56
[ 12.580000] [<41005918>] system_call+0x54/0xa8
[ 12.580000]
[   17.600000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: folio->private != NULL, in fs/ubifs/file.c:1016
[   17.610000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: !c->ro_media && !c->ro_mount, in fs/ubifs/journal.c:108
[   17.620000] UBIFS error (ubi0:0 pid 24): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[   17.630000] UBIFS error (ubi0:0 pid 24): do_writepage: cannot write folio 150 of inode 156, error -30
[   17.640000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: folio->private != NULL, in fs/ubifs/file.c:944
[   17.650000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: c->bi.dd_growth >= 0, in fs/ubifs/budget.c:550
[   26.440000] UBIFS error (ubi0:0 pid 27): ubifs_assert_failed: UBIFS assert failed: ret == false, in fs/ubifs/file.c:1477
[   26.450000] UBIFS error (ubi0:0 pid 27): ubifs_assert_failed: UBIFS assert failed: ret == false, in fs/ubifs/file.c:1477
[   26.460000] UBIFS error (ubi0:0 pid 27): ubifs_assert_failed: UBIFS assert failed: ret == false, in fs/ubifs/file.c:1477
[   31.520000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: folio->private != NULL, in fs/ubifs/file.c:1016
[   31.530000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: !c->ro_media && !c->ro_mount, in fs/ubifs/journal.c:108
[   31.540000] UBIFS error (ubi0:0 pid 24): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[   31.550000] UBIFS error (ubi0:0 pid 24): do_writepage: cannot write folio 101 of inode 81, error -30
[   31.560000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: folio->private != NULL, in fs/ubifs/file.c:944
[   31.570000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: c->bi.dd_growth >= 0, in fs/ubifs/budget.c:550
[   36.640000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: folio->private != NULL, in fs/ubifs/file.c:1016
[   36.650000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: !c->ro_media && !c->ro_mount, in fs/ubifs/journal.c:108
[   36.660000] UBIFS error (ubi0:0 pid 24): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[   36.670000] UBIFS error (ubi0:0 pid 24): do_writepage: cannot write folio 102 of inode 81, error -30
[   36.680000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: folio->private != NULL, in fs/ubifs/file.c:944
[   36.690000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: c->bi.dd_growth >= 0, in fs/ubifs/budget.c:550
[   36.700000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: folio->private != NULL, in fs/ubifs/file.c:1016
[   36.710000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: !c->ro_media && !c->ro_mount, in fs/ubifs/journal.c:108
[   36.720000] UBIFS error (ubi0:0 pid 24): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[   36.730000] UBIFS error (ubi0:0 pid 24): do_writepage: cannot write folio 150 of inode 156, error -30
[   36.740000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: folio->private != NULL, in fs/ubifs/file.c:944
[   36.750000] UBIFS error (ubi0:0 pid 24): ubifs_assert_failed: UBIFS assert failed: c->bi.dd_growth >= 0, in fs/ubifs/budget.c:550
From now on, nothing happens anymore.
What can cause this? I see in file.c:
"An attempt to dirty a page without budgeting for it - should not happen."
So, why can it happen? :-)
Thanks a lot !
BR,
JM