* Wishlist for Linux from the mold linker's POV
@ 2024-11-28 2:52 Rui Ueyama
2024-11-28 17:41 ` Florian Weimer
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Rui Ueyama @ 2024-11-28 2:52 UTC (permalink / raw)
To: LKML
Hi,
I'm the author of the mold linker. I developed mold for speed, and I
think I achieved that goal. As a ballpark number, mold can create a 1
GiB executable in a second on a recent 32-core x86 machine. While
developing mold, I noticed that the kernel's performance occasionally
became a bottleneck. I’d like to share these observations as a
wishlist so that kernel developers can at least recognize potential
areas for improvement.
mold might be somewhat unique from the kernel's point of view. Speed
is the utmost goal for the program, so we care about every
millisecond. Its performance characteristics are very bursty: as soon
as the linker is invoked, it reads hundreds or thousands of object
files, creates a multi-gibibyte output file, and then exits, while
utilizing all available cores on a machine, all within just a few
seconds.
Here is what I noticed while developing mold:
- exit(2) takes a few hundred milliseconds for a large process
I believe this is because mold mmaps all input files and an output
file, and clearing/flushing memory-mapped data is fairly expensive. I
wondered if this could be improved. If it is unavoidable, could the
cleanup process be made asynchronous so that exit(2) takes effect
immediately?
To avoid this overhead, mold currently forks a child process, lets the
child handle the actual linking task, and then, as soon as the child
closes the output file, the parent exits (which takes no time since
the parent is lightweight). Since the child is not an interactive
process, it can afford to take its time for exit. While this works, I
would prefer to avoid it if possible, as it is a somewhat hacky
workaround.
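For illustration, the fork-and-notify workaround described above can be sketched roughly as follows in Python (the names `run_linker_with_early_exit` and `link_job` are made up for this sketch; mold itself is C++ and uses the raw syscalls directly):

```python
import os

def run_linker_with_early_exit(link_job):
    """Run link_job in a child; return as soon as the output is complete.

    The slow part of exit(2) -- tearing down a huge mmapped address
    space -- happens in the child, after the parent has already returned.
    """
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: do the real work, then signal completion.
        os.close(r)
        link_job()              # writes and closes the output file
        os.write(w, b"done")    # output is now complete and valid
        os.close(w)
        os._exit(0)             # slow address-space teardown happens here
    # Parent: block until the child reports the output file is closed.
    # (The child is deliberately not reaped; it is reparented to init
    # when the parent exits.)
    os.close(w)
    os.read(r, 4)
    os.close(r)
    return 0                    # caller can exit immediately
```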
- Writing to a fresh file is slower than writing to an existing file
mold can link a 4 GiB LLVM/clang executable in ~1.8 seconds on my
machine if the linker reuses an existing file and overwrites it.
However, the speed decreases to ~2.8 seconds if the output file does
not exist and mold needs to create a fresh file. I tried using
fallocate(2) to preallocate disk blocks, but it didn't help. While 4
GiB is not small, should creating a file really take almost a second?
- Lack of a safe system-wide semaphore
mold is multi-threaded itself, so it doesn't make much sense to run
multiple instances of the linker in parallel if the number of cores
is, say, less than 16. In fact, doing so could decrease performance
because the working set increases as the number of linker processes
grows. In the worst case, they may even crash due to OOM. Therefore,
we want mold to wait for other mold processes to terminate if another
instance is already running. However, achieving this appears to be
difficult.
Currently, we are using a lockfile. This approach is simple and
reliable -- a file lock is guaranteed to be released by the kernel if
the process exits, whether gracefully or unexpectedly. However, this
only allows one active process at a time. If your machine has 64
cores, you may want to run a few linker processes simultaneously.
However, allowing up to N processes where N>1 is significantly harder.
POSIX semaphores are not released on process exit, so it may cause
resource leaks. We could run a daemon process to count the number of
active processes, but that feels overkill for achieving this goal.
After all, we just want a system-wide semaphore that is guaranteed to
be released on process exit. But it seems like such a thing doesn't
exist.
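One possible approximation of such a semaphore, sketched in Python for illustration: treat byte i of a shared lock file as slot i of an N-slot semaphore. The kernel drops fcntl byte-range locks on process exit, crashes included, so slots cannot leak. (Caveat: classic POSIX locks are also dropped if the process closes *any* descriptor for the file; open-file-description locks, F_OFD_SETLK, avoid that pitfall.)

```python
import fcntl
import os

def acquire_slot(lockfile, nslots):
    """Grab one of `nslots` byte-range locks; return the fd, or None.

    Keep the returned fd open to hold the slot; the kernel releases the
    lock when the fd is closed or the process exits (even on a crash).
    """
    fd = os.open(lockfile, os.O_RDWR | os.O_CREAT, 0o666)
    for slot in range(nslots):
        try:
            # Non-blocking exclusive lock covering just byte `slot`.
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, 1, slot)
            return fd
        except OSError:
            continue            # slot held by another process; try next
    os.close(fd)
    return None                 # all N slots busy: wait and retry
```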
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Wishlist for Linux from the mold linker's POV
2024-11-28 2:52 Wishlist for Linux from the mold linker's POV Rui Ueyama
@ 2024-11-28 17:41 ` Florian Weimer
2024-11-29 0:44 ` Rui Ueyama
2024-11-29 7:17 ` наб
` (2 subsequent siblings)
3 siblings, 1 reply; 11+ messages in thread
From: Florian Weimer @ 2024-11-28 17:41 UTC (permalink / raw)
To: Rui Ueyama; +Cc: LKML
* Rui Ueyama:
> - exit(2) takes a few hundred milliseconds for a large process
>
> I believe this is because mold mmaps all input files and an output
> file, and clearing/flushing memory-mapped data is fairly expensive. I
> wondered if this could be improved. If it is unavoidable, could the
> cleanup process be made asynchronous so that exit(2) takes effect
> immediately?
It's definitely a two-edged sword. For example, when running parallel
make (or Ninja), it's essential that process exit is only signaled
after all process-related resources have been released. Otherwise,
it's possible to see spurious failures because make respawns processes
so quickly that some resource limit is exceeded. This is already a
problem today, and more lazy resource deallocation on exit would make
it more prevalent.
The situation is already bad enough that many developers have resorted
to retry loops around fork/clone/pthread_create if an EAGAIN error is
encountered, assuming that it's related to this.
Bug 154011 - Task exit is signaled before task resource
deallocation, leading to bogus EAGAIN errors
<https://bugzilla.kernel.org/show_bug.cgi?id=154011>
> - Writing to a fresh file is slower than writing to an existing file
>
> mold can link a 4 GiB LLVM/clang executable in ~1.8 seconds on my
> machine if the linker reuses an existing file and overwrites it.
> However, the speed decreases to ~2.8 seconds if the output file does
> not exist and mold needs to create a fresh file. I tried using
> fallocate(2) to preallocate disk blocks, but it didn't help. While 4
> GiB is not small, should creating a file really take almost a second?
Which file system is that?
> - Lack of a safe system-wide semaphore
Other toolchain components use the make jobserver protocol for that.
* Re: Wishlist for Linux from the mold linker's POV
2024-11-28 17:41 ` Florian Weimer
@ 2024-11-29 0:44 ` Rui Ueyama
2024-11-29 5:38 ` Niklas Hambüchen
0 siblings, 1 reply; 11+ messages in thread
From: Rui Ueyama @ 2024-11-29 0:44 UTC (permalink / raw)
To: Florian Weimer; +Cc: LKML
On Fri, Nov 29, 2024 at 2:41 AM Florian Weimer <fw@deneb.enyo.de> wrote:
>
> * Rui Ueyama:
>
> > - exit(2) takes a few hundred milliseconds for a large process
> >
> > I believe this is because mold mmaps all input files and an output
> > file, and clearing/flushing memory-mapped data is fairly expensive. I
> > wondered if this could be improved. If it is unavoidable, could the
> > cleanup process be made asynchronous so that exit(2) takes effect
> > immediately?
>
> It's definitely a two-edged sword. For example, when running parallel
> make (or Ninja), it's essential that process exit is only signaled
> after all process-related resources have been released. Otherwise,
> it's possible to see spurious failures because make respawns processes
> so quickly that some resource limit is exceeded. This is already a
> problem today, and more lazy resource deallocation on exit would make
> it more prevalent.
>
> The situation is already bad enough that many developers have resorted
> to retry loops around fork/clone/pthread_create if an EAGAIN error is
> encountered, assuming that it's related to this.
I think you are right. Making exit(2) asynchronous may cause that issue.
Can we simply solve the problem by making exit(2) significantly
faster than it is now? That was our way of thinking when we created
the mold linker. I don't know much about what exit(2) actually does in
the kernel, but there might be room for improvement, given that it
currently takes a few hundred milliseconds for us when linking a large
program. I wish it could be an order of magnitude or two faster.
> Bug 154011 - Task exit is signaled before task resource
> deallocation, leading to bogus EAGAIN errors
> <https://bugzilla.kernel.org/show_bug.cgi?id=154011>
>
> > - Writing to a fresh file is slower than writing to an existing file
> >
> > mold can link a 4 GiB LLVM/clang executable in ~1.8 seconds on my
> > machine if the linker reuses an existing file and overwrites it.
> > However, the speed decreases to ~2.8 seconds if the output file does
> > not exist and mold needs to create a fresh file. I tried using
> > fallocate(2) to preallocate disk blocks, but it didn't help. While 4
> > GiB is not small, should creating a file really take almost a second?
>
> Which file system is that?
ext4 on a PCIe Gen.5 SSD, but I guess it probably doesn't matter much
because we observed similar results even on tmpfs (~1.75s vs. 2.45s
when linking clang).
> > - Lack of a safe system-wide semaphore
>
> Other toolchain components use the make jobserver protocol for that.
The make jobserver protocol is designed for single-threaded processes
and doesn't fit well with our program. But yeah, we probably need a
better user-space coordination mechanism that works for both
single-threaded and multi-threaded programs.
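For reference, the classic pipe-based form of the jobserver protocol that Florian mentions is simple enough to sketch (hypothetical helper names; a real client must also handle the newer `--jobserver-auth=fifo:PATH` form used by recent GNU make):

```python
import os
import re

def jobserver_fds():
    # Parse the classic pipe form of the jobserver handshake out of
    # MAKEFLAGS: "--jobserver-auth=R,W" (older make: "--jobserver-fds=R,W").
    m = re.search(r"--jobserver-(?:auth|fds)=(\d+),(\d+)",
                  os.environ.get("MAKEFLAGS", ""))
    return (int(m.group(1)), int(m.group(2))) if m else None

def acquire_token(fds):
    # Reading one byte from the pipe grants permission for one extra job;
    # the job make already started us for needs no token.
    r, _w = fds
    return os.read(r, 1)        # blocks until a token is available

def release_token(fds, token):
    _r, w = fds
    os.write(w, token)          # return the token for other processes
```

A multi-threaded linker could, in principle, read one token per extra worker thread, which is presumably the impedance mismatch Rui alludes to.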
* Re: Wishlist for Linux from the mold linker's POV
2024-11-29 0:44 ` Rui Ueyama
@ 2024-11-29 5:38 ` Niklas Hambüchen
2024-11-29 18:12 ` Theodore Ts'o
0 siblings, 1 reply; 11+ messages in thread
From: Niklas Hambüchen @ 2024-11-29 5:38 UTC (permalink / raw)
To: Rui Ueyama; +Cc: LKML, Florian Weimer
Hi Rui,
On 2024-11-29 01:44, Rui Ueyama wrote:
> ext4 on a PCIe Gen.5 SSD, but I guess it probably doesn't matter much
> because we observed similar results even on tmpfs
When dealing with ext4, there's another behaviour useful to know, which is the opposite of what you observed:
When files are overwritten, it can suddenly be ~10x slower than if they are deleted and written from scratch.
I thought that shouldn't be, because the files are re-written from scratch using `O_WRONLY|O_CREAT|O_TRUNC` in both cases.
Turns out, `ext4` has a built-in feature to work around bad applications forgetting `fsync()`:
`close()`ing new files is fast.
But if you `close()` existing files after writing them from scratch, or atomically rename something over them, ext4 will insert an `fsync()`!
Sources:
* https://superuser.com/questions/865710/write-to-newfile-vs-overwriting-performance-issue/872056
* https://www.kernel.org/doc/html/latest/admin-guide/ext4.html section `auto_da_alloc`
* https://en.wikipedia.org/wiki/Ext4#Delayed_allocation_and_potential_data_loss
* Re: Wishlist for Linux from the mold linker's POV
2024-11-28 2:52 Wishlist for Linux from the mold linker's POV Rui Ueyama
2024-11-28 17:41 ` Florian Weimer
@ 2024-11-29 7:17 ` наб
2024-11-29 7:25 ` Rui Ueyama
2024-12-04 10:42 ` Bernd Petrovitsch
2024-12-04 10:43 ` Bernd Petrovitsch
3 siblings, 1 reply; 11+ messages in thread
From: наб @ 2024-11-29 7:17 UTC (permalink / raw)
To: Rui Ueyama; +Cc: LKML
Hi! one quick q to clarify, if you don't mind.
On Thu, Nov 28, 2024 at 11:52:35AM +0900, Rui Ueyama wrote:
> - exit(2) takes a few hundred milliseconds for a large process
>
> I believe this is because mold mmaps all input files and an output
> file, and clearing/flushing memory-mapped data is fairly expensive.
>
> To avoid this overhead, mold currently forks a child process, lets the
> child handle the actual linking task, and then, as soon as the child
> closes the output file, the parent exits (which takes no time since
> the parent is lightweight). Since the child is not an interactive
> process, it can afford to take its time for exit. While this works, I
> would prefer to avoid it if possible, as it is somewhat a hacky
> workaround.
Sooo am I reading it right that the output file is not valid when mold exits,
since you seem to be exiting /during/ exit->munmap->msync,
while the contents of the file are undefined,
so mold -o whatever && ./whatever is not valid
(while mold -o whatever then ./whatever later is)?
Thanks,
* Re: Wishlist for Linux from the mold linker's POV
2024-11-29 7:17 ` наб
@ 2024-11-29 7:25 ` Rui Ueyama
2024-11-29 7:37 ` наб
0 siblings, 1 reply; 11+ messages in thread
From: Rui Ueyama @ 2024-11-29 7:25 UTC (permalink / raw)
To: наб; +Cc: LKML
On Fri, Nov 29, 2024 at 4:17 PM наб <nabijaczleweli@nabijaczleweli.xyz> wrote:
>
> Hi! one quick q to clarify, if you don't mind.
>
> On Thu, Nov 28, 2024 at 11:52:35AM +0900, Rui Ueyama wrote:
> > - exit(2) takes a few hundred milliseconds for a large process
> >
> > I believe this is because mold mmaps all input files and an output
> > file, and clearing/flushing memory-mapped data is fairly expensive.
> >
> > To avoid this overhead, mold currently forks a child process, lets the
> > child handle the actual linking task, and then, as soon as the child
> > closes the output file, the parent exits (which takes no time since
> > the parent is lightweight). Since the child is not an interactive
> > process, it can afford to take its time for exit. While this works, I
> > would prefer to avoid it if possible, as it is somewhat a hacky
> > workaround.
> Sooo am I reading it right that the output file is not valid when mold exits,
> since you seem to be exiting /during/ exit->munmap->msync,
> while the contents of the file are undefined,
> so mold -o whatever && ./whatever is not valid
> (while mold -o whatever then ./whatever later is)?
No worries. The child mold process unmaps and closes an output file
before notifying the parent mold process of completion. Therefore, the
output file is guaranteed to be complete and valid when the parent
exits.
* Re: Wishlist for Linux from the mold linker's POV
2024-11-29 7:25 ` Rui Ueyama
@ 2024-11-29 7:37 ` наб
0 siblings, 0 replies; 11+ messages in thread
From: наб @ 2024-11-29 7:37 UTC (permalink / raw)
To: Rui Ueyama; +Cc: LKML
On Fri, Nov 29, 2024 at 04:25:09PM +0900, Rui Ueyama wrote:
> On Fri, Nov 29, 2024 at 4:17 PM наб <nabijaczleweli@nabijaczleweli.xyz> wrote:
> > Hi! one quick q to clarify, if you don't mind.
> >
> > On Thu, Nov 28, 2024 at 11:52:35AM +0900, Rui Ueyama wrote:
> > > - exit(2) takes a few hundred milliseconds for a large process
> > >
> > > I believe this is because mold mmaps all input files and an output
> > > file, and clearing/flushing memory-mapped data is fairly expensive.
> > >
> > > To avoid this overhead, mold currently forks a child process, lets the
> > > child handle the actual linking task, and then, as soon as the child
> > > closes the output file, the parent exits (which takes no time since
> > > the parent is lightweight). Since the child is not an interactive
> > > process, it can afford to take its time for exit. While this works, I
> > > would prefer to avoid it if possible, as it is somewhat a hacky
> > > workaround.
> > Sooo am I reading it right that the output file is not valid when mold exits,
> > since you seem to be exiting /during/ exit->munmap->msync,
> > while the contents of the file are undefined,
> > so mold -o whatever && ./whatever is not valid
> > (while mold -o whatever then ./whatever later is)?
> No worries. The child mold process unmaps and closes an output file
> before notifying the parent mold process of completion. Therefore, the
> output file is guaranteed to be complete and valid when the parent
> exits.
Ah, that's alright then, naturally. To me, given the context in the
first para, the original phrasing read a little as-if mold did
parent  exit() -> output undefined ----------------------------> ok
   \                                                           /
child -> open() -> mmap() -> link -> close() -> msync()/munmap()/exit()
                                               (a few hundred milliseconds)
Thanks :)
* Re: Wishlist for Linux from the mold linker's POV
2024-11-29 5:38 ` Niklas Hambüchen
@ 2024-11-29 18:12 ` Theodore Ts'o
2024-11-30 15:36 ` Niklas Hambüchen
0 siblings, 1 reply; 11+ messages in thread
From: Theodore Ts'o @ 2024-11-29 18:12 UTC (permalink / raw)
To: Niklas Hambüchen; +Cc: Rui Ueyama, LKML, Florian Weimer
On Fri, Nov 29, 2024 at 06:38:47AM +0100, Niklas Hambüchen wrote:
> Turns out, `ext4` has built in a feature to work around bad applications forgetting `fsync()`:
>
> `close()`ing new files is fast.
> But if you `close()` existing files after writing them from scratch, or atomic-rename something replacing them, ext4 will insert an `fsync()`!
It's not actually an fsync() in the close() case. We initiate
writeback, but we don't actually wait for the writes to complete on
the close(). In the case of rename(), we do wait for the writes to
complete before the file system transaction which commits the
rename(2) is allowed to complete. But in the case where the
application programmer is too lazy to call fsync(2), the delayed
completion of the transaction is the implicit commit, and nothing is
blocked behind it. (See below for more details.)
But yes, the reason behind this is applications such as tuxracer
writing the top-ten score file, and then shutting down OpenGL, and the
out-of-tree nvidia driver would sometimes^H^H^H^H^H^H^H^H^H always
crash, leaving a corrupted or missing top-ten score file, and this
resulted in a bunch of users whinging.
Also, at one point, both the KDE and Gnome text editors did the
open with O_TRUNC and rewrite, because it was the simplest way to
avoid losing the extended attributes (otherwise the application
programmers would have to actually copy the extended attributes, and
That Was Too Hard). I don't know why programmers would edit precious
source files using something *other* than emacs, or vi, but....
In essence, file system developers are massively outnumbered by
application programmers, and for some reason, as a class, application
programmers don't seem to be very careful about data corruption
compared to file system developers --- and users *always* blame the
file system developers.
As Niklas points out in his reference, this can be disabled by a mount
option, noauto_da_alloc:
auto_da_alloc(*), noauto_da_alloc
Many broken applications don’t use fsync() when replacing
existing files via patterns such as fd =
open(“foo.new”)/write(fd,..)/close(fd)/ rename(“foo.new”,
“foo”), or worse yet, fd = open(“foo”,
O_TRUNC)/write(fd,..)/close(fd). If auto_da_alloc is enabled,
ext4 will detect the replace-via-rename and
replace-via-truncate patterns and force that any delayed
allocation blocks are allocated such that at the next journal
commit, in the default data=ordered mode, the data blocks of
the new file are forced to disk before the rename() operation
is committed. This provides roughly the same level of
guarantees as ext3, and avoids the “zero-length” problem that
can happen when a system crashes before the delayed allocation
blocks are forced to disk.
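For comparison, the careful pattern this heuristic compensates for is write, fsync, rename, then fsync the directory; a generic Python sketch (not code from this thread):

```python
import os

def durable_replace(path, data):
    # Replace `path` atomically and durably: write a temp file, fsync
    # its data, rename over the target, then fsync the directory so the
    # rename itself survives a crash.
    tmp = path + ".new"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)            # data blocks reach disk before the rename
    finally:
        os.close(fd)
    os.rename(tmp, path)        # atomic replacement, old or new, never mixed
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)           # the directory entry change is durable too
    finally:
        os.close(dfd)
```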
So if you care about performance above all else, and you trust all of
the application programmers responsible for programs on your system
being sufficiently careful, feel free to use the noauto_da_alloc
option. :-)
- Ted
* Re: Wishlist for Linux from the mold linker's POV
2024-11-29 18:12 ` Theodore Ts'o
@ 2024-11-30 15:36 ` Niklas Hambüchen
0 siblings, 0 replies; 11+ messages in thread
From: Niklas Hambüchen @ 2024-11-30 15:36 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: Rui Ueyama, LKML, Florian Weimer
Hi Ted,
On 2024-11-29 19:12, Theodore Ts'o wrote:
> It's not actually an fsync() in the close case). We initiate
> writeback, but we don't actually wait for the writes to complete on
> the close(). [..] But in the case where the
> application programmer is too lazy to call fsync(2), the delayed
> completion of the transaction complete is the implicit commit, and
> nothing is bloced behind it. (See below for more details.)
Then I actually have a question for you, as it seems I do have a situation where the close-without-rename blocks the userspace application's `close(2)` on ext4.
I have a program which, when writing files, uses
openat(..., O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC)
In `strace -T`, writing 1 GiB to a file in an empty directory, it shows
close(3<output.bin>) = 0 <0.000005>
but in a directory where `output.bin` already exists, it takes 2.5 seconds:
close(3<output.bin>) = 0 <2.527808>
Is that expected?
Repro:
time python -c 'with open("output.bin", "wb") as f: f.write(b"a" * (1024 * 1024 * 1024))'
The first run is fast, subsequent runs are slow; `rm output.bin` makes it fast again.
Environment: Linux 6.6.33 x86_64, mount with `ext4 (ro,relatime,errors=remount-ro)`
> But yes, the reason behind this is applications such as tuxracer
Ahah glorious, I didn't know that.
"But boss, the new kernel reduces global server throughput by 10x..." -- "Whatever the cost, my tuxracer high score MUST NOT BE LOST."
> In essence, file system developers are massively outnumbered by
> application programs, and for some reason as a class application
> programmers don't seem to be very careful about data corruption
> compared to file system developers --- and users *always* blame the
> file system developers.
Personally (as an application programmer) I would probably prefer the old behaviour, because as a correct application it is now difficult to opt out of the performance penalty, and understanding your own performance and benchmarking becomes ever more complex.
Writing fast apps that do file processing with intermediate files now requires inspecting which FS we're on and what its mount options are, and implementing "quirks"-style workarounds like "rm + rename instead of just rename".
But I equally relate to the frustration of users that lost files, and I can understand why you added this.
One can also blame the POSIX API for this to some extent, as it doesn't make it easy for the application programmer to do the right thing.
AppDev: How do I write a file?
Posix: write() + close().
AppDev: Really?
Posix: Actually, no. You also need to fsync() if you care about the data.
AppDev: OK I added it, good?
Posix: Actually, no. You also need to fsync() the parent dir if you care about the data and the file is new.
AppDev: How many more surprise steps will there be?
Fsyncgate: Hi
I'm wondering if there's a way out of this, to make the trade-offs less global and provide an opt-out.
(As an application programmer I can't ask my users to enable `noauto_da_alloc`, because who knows what other applications they run.)
Maybe an `open(O_I_READ_THE_DOCS)` + fcntl flag, which disables wrong-application heuristics?
It should probably have a more technical name.
I realise this is fighting complexity with somewhat more complexity, but maybe buffering-by-default-and-fsync-for-durability was the wrong default all along, and close-is-durable-by-default-with-an-opt-out would be the better model; not sure.
Niklas
* Re: Wishlist for Linux from the mold linker's POV
2024-11-28 2:52 Wishlist for Linux from the mold linker's POV Rui Ueyama
2024-11-28 17:41 ` Florian Weimer
2024-11-29 7:17 ` наб
@ 2024-12-04 10:42 ` Bernd Petrovitsch
2024-12-04 10:43 ` Bernd Petrovitsch
3 siblings, 0 replies; 11+ messages in thread
From: Bernd Petrovitsch @ 2024-12-04 10:42 UTC (permalink / raw)
To: Rui Ueyama; +Cc: LKML
Hi all!
On 28.11.24 03:52, Rui Ueyama wrote:
[...]
> After all, we just want a system-wide semaphore that is guaranteed to
> be released on process exit. But it seems like such a thing doesn't
> exist.
I use a socket for that purpose, as sockets are closed when the
process ends.
(SysV) semaphores cannot do that, as they are separate entities with a
lifetime independent of any process.
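A Linux-specific variant of this socket trick uses the abstract socket namespace, which leaves nothing on the filesystem to clean up; a sketch (hypothetical helper name):

```python
import socket

def try_acquire(name):
    # Binding an abstract-namespace Unix socket (the name starts with a
    # NUL byte, so no filesystem entry is created) acts as a system-wide
    # mutex on Linux: the kernel frees the name when the process exits,
    # no matter how it exits.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.bind("\0" + name)     # fails with EADDRINUSE if already held
        return s                # keep a reference to hold the lock
    except OSError:
        s.close()
        return None
```

Like the lockfile, this only gives mutual exclusion (one holder), not an N-slot semaphore.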
Kind regards,
Bernd
--
Bernd Petrovitsch Email : bernd@petrovitsch.priv.at
There is NO CLOUD, just other people's computers. - FSFE
LUGA : http://www.luga.at