* Bad vmalloc address during BPF hooks unload
From: Andrey Grodzovsky @ 2025-06-03 20:13 UTC (permalink / raw)
To: bpf; +Cc: joe.kimpel, Mark Fontana
Hi, we observe the below warning occasionally and at random during BPF hooks
unload. We only see it on RHEL 8 kernels ranging from 8.6 to 8.10, so it
might be something RHEL-specific and not an upstream issue. I was still
hoping to get some advice or clues from BPF experts here.
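For reference, the unload path in the trace below is just a BPF tracing link
being torn down on close(). A hypothetical user-space sequence of that shape
would look roughly like the libbpf sketch here; the object file name and
program name are made up for illustration and this is not our actual loader:

/* Hypothetical reproducer sketch (libbpf >= 1.0 error conventions assumed).
 * Attaching an fentry program creates a BPF trampoline on the target
 * function; destroying the link (or just exiting and letting the fds be
 * closed) unlinks the program and tears the trampoline image down, which
 * is the path where the warning fires. */
#include <bpf/libbpf.h>
#include <unistd.h>

int main(void)
{
        struct bpf_object *obj = bpf_object__open_file("hooks.bpf.o", NULL);
        struct bpf_program *prog;
        struct bpf_link *link;

        if (!obj || bpf_object__load(obj))
                return 1;

        /* "handle_fentry" is a made-up SEC("fentry/...") program name */
        prog = bpf_object__find_program_by_name(obj, "handle_fentry");
        if (!prog)
                return 1;

        link = bpf_program__attach_trace(prog);
        if (!link)
                return 1;

        sleep(1);

        /* hook unload: releases the tracing link and the trampoline image */
        bpf_link__destroy(link);
        bpf_object__close(obj);
        return 0;
}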
Thanks, Andrey
[ 5714.071613] WARNING: CPU: 0 PID: 20653 at mm/vmalloc.c:330
vmalloc_to_page+0x21e/0x230
[ 5714.079668] Modules linked in: nft_chain_nat ipt_MASQUERADE nf_nat
nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 veth nft_counter ipt_REJECT
nf_reject_ipv4 nft_compat falcon_lsm_pinned_7413(E) binfmt_misc tcp_diag
udp_diag inet_diag nf_tables nfnetlink overlay intel_rapl_msr
intel_rapl_common amd_energy pcspkr i2c_piix4 xfs libcrc32c nvme_tcp(X)
nvme_fabrics sd_mod sg crct10dif_pclmul crc32_pclmul crc32c_intel
virtio_net ghash_clmulni_intel net_failover virtio_scsi failover
serio_raw nvme nvme_core t10_pi dm_mirror dm_region_hash dm_log dm_mod
fuse [last unloaded: falcon_nf_netcontain]
[ 5714.131372] Red Hat flags: eBPF/event eBPF/rawtrace
[ 5714.136351] CPU: 0 PID: 20653 Comm: falcon-sensor-b Kdump: loaded
Tainted: G E X --------- - - 4.18.0-477.10.1.el8_8.x86_64 #1
[ 5714.148964] Hardware name: Google Google Compute Engine/Google
Compute Engine, BIOS Google 05/07/2025
[ 5714.158283] RIP: 0010:vmalloc_to_page+0x21e/0x230
[ 5714.163086] Code: 28 f1 b0 00 48 81 e7 00 00 00 c0 e9 19 ff ff ff 48
8b 3d 55 5b 0d 01 48 81 e7 00 f0 ff ff 48 89 fa eb 8c 0f 0b e9 10 fe ff
ff <0f> 0b 31 c0 e9 f9 f0 b0 00 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00
[ 5714.181949] RSP: 0018:ffffad99017b3d10 EFLAGS: 00010293
[ 5714.187271] RAX: 0000000000000063 RBX: ffffdead05829d80 RCX:
0000000000000000
[ 5714.194500] RDX: 0000000000000000 RSI: ffffffffc2200049 RDI:
0000000000000000
[ 5714.201749] RBP: ffffffffc21ff049 R08: 0000000000000000 R09:
0000000000000001
[ 5714.208977] R10: ffff8b901bb403c0 R11: 0000000000000001 R12:
0000000000000001
[ 5714.216206] R13: ffffad99017b3d5f R14: ffffffffc2200049 R15:
ffff8b900a0bdd80
[ 5714.223438] FS: 00007ff766da4c00(0000) GS:ffff8b9137c00000(0000)
knlGS:0000000000000000
[ 5714.231623] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 5714.237466] CR2: 0000603000261000 CR3: 00000001b3438004 CR4:
0000000000370ef0
[ 5714.244699] Call Trace:
[ 5714.247245] __text_poke+0x207/0x260
[ 5714.250926] text_poke_bp_batch+0x85/0x270
[ 5714.255122] ? bpf_trampoline_6442489046_0+0x49/0x1000
[ 5714.260361] text_poke_bp+0x44/0x70
[ 5714.263954] __bpf_arch_text_poke+0x1a2/0x1b0
[ 5714.268412] bpf_tramp_image_put+0x2b/0x60
[ 5714.272608] bpf_trampoline_update+0x205/0x440
[ 5714.277151] bpf_trampoline_unlink_prog+0x7a/0xc0
[ 5714.281954] bpf_tracing_link_release+0x16/0x40
[ 5714.286585] bpf_link_free+0x2b/0x50
[ 5714.290263] bpf_link_release+0x11/0x20
[ 5714.294195] __fput+0xbe/0x250
[ 5714.297348] task_work_run+0x8a/0xb0
[ 5714.301021] exit_to_usermode_loop+0xef/0x100
[ 5714.305477] do_syscall_64+0x19c/0x1b0
[ 5714.309324] entry_SYSCALL_64_after_hwframe+0x61/0xc6
[ 5714.314475] RIP: 0033:0x7ff766779b47
[ 5714.318164] Code: 12 b8 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 3b c3
66 90 53 89 fb 48 83 ec 10 e8 e4 fb ff ff 89 df 89 c2 b8 03 00 00 00 0f
05 <48> 3d 00 f0 ff ff 77 2b 89 d7 89 44 24 0c e8 26 fc ff ff 8b 44 24
[ 5714.337029] RSP: 002b:00007fff9058f3d0 EFLAGS: 00000293 ORIG_RAX:
0000000000000003
[ 5714.344695] RAX: 0000000000000000 RBX: 0000000000000275 RCX:
00007ff766779b47
[ 5714.351926] RDX: 0000000000000000 RSI: 00007ff766d870e0 RDI:
0000000000000275
[ 5714.359156] RBP: 00007fff9058f400 R08: 0000000000000020 R09:
0000000000001a8c
[ 5714.366387] R10: 00007fff9058ec48 R11: 0000000000000293 R12:
00007ff765fc4758
[ 5714.373618] R13: 0000000000000029 R14: 00007ff765fc9528 R15:
0000619000000a80
[ 5714.380851] ---[ end trace e6e6066ea7e090fa ]---
* Re: Bad vmalloc address during BPF hooks unload
From: Jiri Olsa @ 2025-06-03 21:54 UTC (permalink / raw)
To: Andrey Grodzovsky; +Cc: bpf, joe.kimpel, Mark Fontana, Viktor Malik
On Tue, Jun 03, 2025 at 04:13:18PM -0400, Andrey Grodzovsky wrote:
> Hi, we observe the below warning occasionally and at random during BPF hooks
> unload. We only see it on RHEL 8 kernels ranging from 8.6 to 8.10, so it might
> be something RHEL-specific and not an upstream issue. I was still hoping to
> get some advice or clues from BPF experts here.
hi,
unless you reproduce on upstream or some stable kernel I'm afraid there's not
much that can be done here
jirka
>
> Thanks, Andrey
>
> [ 5714.071613] WARNING: CPU: 0 PID: 20653 at mm/vmalloc.c:330
> vmalloc_to_page+0x21e/0x230
> [ 5714.079668] Modules linked in: nft_chain_nat ipt_MASQUERADE nf_nat
> nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 veth nft_counter ipt_REJECT
> nf_reject_ipv4 nft_compat falcon_lsm_pinned_7413(E) binfmt_misc tcp_diag
> udp_diag inet_diag nf_tables nfnetlink overlay intel_rapl_msr
> intel_rapl_common amd_energy pcspkr i2c_piix4 xfs libcrc32c nvme_tcp(X)
> nvme_fabrics sd_mod sg crct10dif_pclmul crc32_pclmul crc32c_intel virtio_net
> ghash_clmulni_intel net_failover virtio_scsi failover serio_raw nvme
> nvme_core t10_pi dm_mirror dm_region_hash dm_log dm_mod fuse [last unloaded:
> falcon_nf_netcontain]
> [ 5714.131372] Red Hat flags: eBPF/event eBPF/rawtrace
> [ 5714.136351] CPU: 0 PID: 20653 Comm: falcon-sensor-b Kdump: loaded
> Tainted: G E X --------- - - 4.18.0-477.10.1.el8_8.x86_64 #1
> [ 5714.148964] Hardware name: Google Google Compute Engine/Google Compute
> Engine, BIOS Google 05/07/2025
> [ 5714.158283] RIP: 0010:vmalloc_to_page+0x21e/0x230
> [ 5714.163086] Code: 28 f1 b0 00 48 81 e7 00 00 00 c0 e9 19 ff ff ff 48 8b
> 3d 55 5b 0d 01 48 81 e7 00 f0 ff ff 48 89 fa eb 8c 0f 0b e9 10 fe ff ff <0f>
> 0b 31 c0 e9 f9 f0 b0 00 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00
> [ 5714.181949] RSP: 0018:ffffad99017b3d10 EFLAGS: 00010293
> [ 5714.187271] RAX: 0000000000000063 RBX: ffffdead05829d80 RCX:
> 0000000000000000
> [ 5714.194500] RDX: 0000000000000000 RSI: ffffffffc2200049 RDI:
> 0000000000000000
> [ 5714.201749] RBP: ffffffffc21ff049 R08: 0000000000000000 R09:
> 0000000000000001
> [ 5714.208977] R10: ffff8b901bb403c0 R11: 0000000000000001 R12:
> 0000000000000001
> [ 5714.216206] R13: ffffad99017b3d5f R14: ffffffffc2200049 R15:
> ffff8b900a0bdd80
> [ 5714.223438] FS: 00007ff766da4c00(0000) GS:ffff8b9137c00000(0000)
> knlGS:0000000000000000
> [ 5714.231623] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 5714.237466] CR2: 0000603000261000 CR3: 00000001b3438004 CR4:
> 0000000000370ef0
> [ 5714.244699] Call Trace:
> [ 5714.247245] __text_poke+0x207/0x260
> [ 5714.250926] text_poke_bp_batch+0x85/0x270
> [ 5714.255122] ? bpf_trampoline_6442489046_0+0x49/0x1000
> [ 5714.260361] text_poke_bp+0x44/0x70
> [ 5714.263954] __bpf_arch_text_poke+0x1a2/0x1b0
> [ 5714.268412] bpf_tramp_image_put+0x2b/0x60
> [ 5714.272608] bpf_trampoline_update+0x205/0x440
> [ 5714.277151] bpf_trampoline_unlink_prog+0x7a/0xc0
> [ 5714.281954] bpf_tracing_link_release+0x16/0x40
> [ 5714.286585] bpf_link_free+0x2b/0x50
> [ 5714.290263] bpf_link_release+0x11/0x20
> [ 5714.294195] __fput+0xbe/0x250
> [ 5714.297348] task_work_run+0x8a/0xb0
> [ 5714.301021] exit_to_usermode_loop+0xef/0x100
> [ 5714.305477] do_syscall_64+0x19c/0x1b0
> [ 5714.309324] entry_SYSCALL_64_after_hwframe+0x61/0xc6
> [ 5714.314475] RIP: 0033:0x7ff766779b47
> [ 5714.318164] Code: 12 b8 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 3b c3 66
> 90 53 89 fb 48 83 ec 10 e8 e4 fb ff ff 89 df 89 c2 b8 03 00 00 00 0f 05 <48>
> 3d 00 f0 ff ff 77 2b 89 d7 89 44 24 0c e8 26 fc ff ff 8b 44 24
> [ 5714.337029] RSP: 002b:00007fff9058f3d0 EFLAGS: 00000293 ORIG_RAX:
> 0000000000000003
> [ 5714.344695] RAX: 0000000000000000 RBX: 0000000000000275 RCX:
> 00007ff766779b47
> [ 5714.351926] RDX: 0000000000000000 RSI: 00007ff766d870e0 RDI:
> 0000000000000275
> [ 5714.359156] RBP: 00007fff9058f400 R08: 0000000000000020 R09:
> 0000000000001a8c
> [ 5714.366387] R10: 00007fff9058ec48 R11: 0000000000000293 R12:
> 00007ff765fc4758
> [ 5714.373618] R13: 0000000000000029 R14: 00007ff765fc9528 R15:
> 0000619000000a80
> [ 5714.380851] ---[ end trace e6e6066ea7e090fa ]---
>
* Re: [External] Re: Bad vmalloc address during BPF hooks unload
From: Andrey Grodzovsky @ 2025-06-04 14:47 UTC (permalink / raw)
To: Jiri Olsa; +Cc: bpf, joe.kimpel, Mark Fontana, Viktor Malik
On 6/3/25 17:54, Jiri Olsa wrote:
> On Tue, Jun 03, 2025 at 04:13:18PM -0400, Andrey Grodzovsky wrote:
>> Hi, we observe the below warning occasionally and at random during BPF hooks
>> unload. We only see it on RHEL 8 kernels ranging from 8.6 to 8.10, so it might
>> be something RHEL-specific and not an upstream issue. I was still hoping to
>> get some advice or clues from BPF experts here.
> hi,
> unless you reproduce on upstream or some stable kernel I'm afraid there's not
> much that can be done here
>
> jirka
Thanks Jiri, yes, I understand the limitations, since this might be the
result of some bad cherry-picking/merging of upstream patches into the
RHEL kernel tree. I was just hoping this rings a bell for someone in the
eBPF community, as it is turning out to be really hard to reproduce and
hence also to bisect.
Thanks,
Andrey.
>
>
>> Thanks, Andrey
>>
>> [ 5714.071613] WARNING: CPU: 0 PID: 20653 at mm/vmalloc.c:330
>> vmalloc_to_page+0x21e/0x230
>> [ 5714.079668] Modules linked in: nft_chain_nat ipt_MASQUERADE nf_nat
>> nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 veth nft_counter ipt_REJECT
>> nf_reject_ipv4 nft_compat falcon_lsm_pinned_7413(E) binfmt_misc tcp_diag
>> udp_diag inet_diag nf_tables nfnetlink overlay intel_rapl_msr
>> intel_rapl_common amd_energy pcspkr i2c_piix4 xfs libcrc32c nvme_tcp(X)
>> nvme_fabrics sd_mod sg crct10dif_pclmul crc32_pclmul crc32c_intel virtio_net
>> ghash_clmulni_intel net_failover virtio_scsi failover serio_raw nvme
>> nvme_core t10_pi dm_mirror dm_region_hash dm_log dm_mod fuse [last unloaded:
>> falcon_nf_netcontain]
>> [ 5714.131372] Red Hat flags: eBPF/event eBPF/rawtrace
>> [ 5714.136351] CPU: 0 PID: 20653 Comm: falcon-sensor-b Kdump: loaded
>> Tainted: G E X --------- - - 4.18.0-477.10.1.el8_8.x86_64 #1
>> [ 5714.148964] Hardware name: Google Google Compute Engine/Google Compute
>> Engine, BIOS Google 05/07/2025
>> [ 5714.158283] RIP: 0010:vmalloc_to_page+0x21e/0x230
>> [ 5714.163086] Code: 28 f1 b0 00 48 81 e7 00 00 00 c0 e9 19 ff ff ff 48 8b
>> 3d 55 5b 0d 01 48 81 e7 00 f0 ff ff 48 89 fa eb 8c 0f 0b e9 10 fe ff ff <0f>
>> 0b 31 c0 e9 f9 f0 b0 00 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00
>> [ 5714.181949] RSP: 0018:ffffad99017b3d10 EFLAGS: 00010293
>> [ 5714.187271] RAX: 0000000000000063 RBX: ffffdead05829d80 RCX:
>> 0000000000000000
>> [ 5714.194500] RDX: 0000000000000000 RSI: ffffffffc2200049 RDI:
>> 0000000000000000
>> [ 5714.201749] RBP: ffffffffc21ff049 R08: 0000000000000000 R09:
>> 0000000000000001
>> [ 5714.208977] R10: ffff8b901bb403c0 R11: 0000000000000001 R12:
>> 0000000000000001
>> [ 5714.216206] R13: ffffad99017b3d5f R14: ffffffffc2200049 R15:
>> ffff8b900a0bdd80
>> [ 5714.223438] FS: 00007ff766da4c00(0000) GS:ffff8b9137c00000(0000)
>> knlGS:0000000000000000
>> [ 5714.231623] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 5714.237466] CR2: 0000603000261000 CR3: 00000001b3438004 CR4:
>> 0000000000370ef0
>> [ 5714.244699] Call Trace:
>> [ 5714.247245] __text_poke+0x207/0x260
>> [ 5714.250926] text_poke_bp_batch+0x85/0x270
>> [ 5714.255122] ? bpf_trampoline_6442489046_0+0x49/0x1000
>> [ 5714.260361] text_poke_bp+0x44/0x70
>> [ 5714.263954] __bpf_arch_text_poke+0x1a2/0x1b0
>> [ 5714.268412] bpf_tramp_image_put+0x2b/0x60
>> [ 5714.272608] bpf_trampoline_update+0x205/0x440
>> [ 5714.277151] bpf_trampoline_unlink_prog+0x7a/0xc0
>> [ 5714.281954] bpf_tracing_link_release+0x16/0x40
>> [ 5714.286585] bpf_link_free+0x2b/0x50
>> [ 5714.290263] bpf_link_release+0x11/0x20
>> [ 5714.294195] __fput+0xbe/0x250
>> [ 5714.297348] task_work_run+0x8a/0xb0
>> [ 5714.301021] exit_to_usermode_loop+0xef/0x100
>> [ 5714.305477] do_syscall_64+0x19c/0x1b0
>> [ 5714.309324] entry_SYSCALL_64_after_hwframe+0x61/0xc6
>> [ 5714.314475] RIP: 0033:0x7ff766779b47
>> [ 5714.318164] Code: 12 b8 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 3b c3 66
>> 90 53 89 fb 48 83 ec 10 e8 e4 fb ff ff 89 df 89 c2 b8 03 00 00 00 0f 05 <48>
>> 3d 00 f0 ff ff 77 2b 89 d7 89 44 24 0c e8 26 fc ff ff 8b 44 24
>> [ 5714.337029] RSP: 002b:00007fff9058f3d0 EFLAGS: 00000293 ORIG_RAX:
>> 0000000000000003
>> [ 5714.344695] RAX: 0000000000000000 RBX: 0000000000000275 RCX:
>> 00007ff766779b47
>> [ 5714.351926] RDX: 0000000000000000 RSI: 00007ff766d870e0 RDI:
>> 0000000000000275
>> [ 5714.359156] RBP: 00007fff9058f400 R08: 0000000000000020 R09:
>> 0000000000001a8c
>> [ 5714.366387] R10: 00007fff9058ec48 R11: 0000000000000293 R12:
>> 00007ff765fc4758
>> [ 5714.373618] R13: 0000000000000029 R14: 00007ff765fc9528 R15:
>> 0000619000000a80
>> [ 5714.380851] ---[ end trace e6e6066ea7e090fa ]---
>>
* Re: [External] Re: Bad vmalloc address during BPF hooks unload
From: Alexei Starovoitov @ 2025-06-04 19:37 UTC (permalink / raw)
To: Andrey Grodzovsky; +Cc: Jiri Olsa, bpf, joe.kimpel, Mark Fontana, Viktor Malik
On Wed, Jun 4, 2025 at 7:49 AM Andrey Grodzovsky
<andrey.grodzovsky@crowdstrike.com> wrote:
>
> On 6/3/25 17:54, Jiri Olsa wrote:
> > On Tue, Jun 03, 2025 at 04:13:18PM -0400, Andrey Grodzovsky wrote:
> >> Hi, we observe the below warning occasionally and at random during BPF hooks
> >> unload. We only see it on RHEL 8 kernels ranging from 8.6 to 8.10, so it might
> >> be something RHEL-specific and not an upstream issue. I was still hoping to
> >> get some advice or clues from BPF experts here.
> > hi,
> > unless you reproduce on upstream or some stable kernel I'm afraid there's not
> > much that can be done here
> >
> > jirka
>
>
> Thanks Jiri, yes, I understand the limitations, since this might be the
> result of some bad cherry-picking/merging of upstream patches into the
> RHEL kernel tree. I was just hoping this rings a bell for someone in the
> eBPF community, as it is turning out to be really hard to reproduce and
> hence also to bisect.
I don't remember seeing a splat like this.
Also, mm/vmalloc.c:330 tells us nothing.
It's not clear what vmalloc_to_page() is complaining about.
I'm guessing that it's not a vmalloc address?
Which would mean that im->ip_after_call points somewhere wrong.
And why that would be sporadic is anybody's guess.
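For readers not familiar with the trampoline teardown path, the sketch below
shows roughly where im->ip_after_call comes into play upstream
(kernel/bpf/trampoline.c). It is a simplified excerpt for orientation only;
the RHEL 4.18 backport may differ in detail.

/* Simplified excerpt of the upstream trampoline teardown; not the exact
 * RHEL code, just the shape of it. */
static void bpf_tramp_image_put(struct bpf_tramp_image *im)
{
        /* ip_after_call is the return site inside the vmalloc'ed trampoline
         * image; before the image is released it gets patched to jump
         * straight to the epilogue. */
        if (im->ip_after_call) {
                int err = bpf_arch_text_poke(im->ip_after_call, BPF_MOD_JUMP,
                                             NULL, im->ip_epilogue);
                WARN_ON(err);
                /* ... deferred free of the image follows ... */
                return;
        }
        /* ... */
}

That poke is what goes through text_poke_bp() -> __text_poke() ->
vmalloc_to_page() in the trace above, so if im->ip_after_call (or the image
behind it) is stale or miscomputed by then, this is the kind of warning you
would expect to see.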
* Re: [External] Re: Bad vmalloc address during BPF hooks unload
From: Andrey Grodzovsky @ 2025-06-04 20:28 UTC (permalink / raw)
To: Alexei Starovoitov; +Cc: Jiri Olsa, bpf, joe.kimpel, Mark Fontana, Viktor Malik
On 6/4/25 15:37, Alexei Starovoitov wrote:
> On Wed, Jun 4, 2025 at 7:49 AM Andrey Grodzovsky
> <andrey.grodzovsky@crowdstrike.com> wrote:
>> On 6/3/25 17:54, Jiri Olsa wrote:
>>> On Tue, Jun 03, 2025 at 04:13:18PM -0400, Andrey Grodzovsky wrote:
>>>> Hi, we observe the below warning occasionally and at random during BPF hooks
>>>> unload. We only see it on RHEL 8 kernels ranging from 8.6 to 8.10, so it might
>>>> be something RHEL-specific and not an upstream issue. I was still hoping to
>>>> get some advice or clues from BPF experts here.
>>> hi,
>>> unless you reproduce on upstream or some stable kernel I'm afraid there's not
>>> much that can be done here
>>>
>>> jirka
>>
>> Thanks Jiri, yes, I understand the limitations, since this might be the
>> result of some bad cherry-picking/merging of upstream patches into the
>> RHEL kernel tree. I was just hoping this rings a bell for someone in the
>> eBPF community, as it is turning out to be really hard to reproduce and
>> hence also to bisect.
> I don't remember seeing a splat like this.
>
> Also, mm/vmalloc.c:330 tells us nothing.
> It's not clear what vmalloc_to_page() is complaining about.
> I'm guessing that it's not a vmalloc address?
From looking at the relevant RHEL kernel source tree, I see that
mm/vmalloc.c:330 maps to WARN_ON_ONCE(pmd_bad(*pmd)), so it passed the
pud_bad check right before that but failed on this one. The RHEL function
is identical to the upstream v4.18 mm/vmalloc.c vmalloc_to_page().
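For reference, the relevant part of the walk looks roughly like this in the
v4.18-era upstream vmalloc_to_page() (trimmed excerpt for orientation, not
copied verbatim from the RHEL tree):

struct page *vmalloc_to_page(const void *vmalloc_addr)
{
        unsigned long addr = (unsigned long)vmalloc_addr;
        pgd_t *pgd = pgd_offset_k(addr);
        p4d_t *p4d;
        pud_t *pud;
        pmd_t *pmd;

        if (pgd_none(*pgd))
                return NULL;
        p4d = p4d_offset(pgd, addr);
        if (p4d_none(*p4d))
                return NULL;
        pud = pud_offset(p4d, addr);
        WARN_ON_ONCE(pud_bad(*pud));    /* this one passed for us */
        if (pud_none(*pud) || pud_bad(*pud))
                return NULL;
        pmd = pmd_offset(pud, addr);
        WARN_ON_ONCE(pmd_bad(*pmd));    /* mm/vmalloc.c:330, the warning we hit */
        if (pmd_none(*pmd) || pmd_bad(*pmd))
                return NULL;
        /* ... pte lookup and pte_page() follow ... */
}

So the address handed down from __text_poke() made it past the pgd/p4d/pud
checks but hit a PMD entry that is not a normal page-table pointer, which is
what I would expect if the mapping behind that address had already been torn
down or was never a regular 4K vmalloc mapping in the first place.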
In any case, thanks for your advice and support. I will try to follow up
with the RHEL kernel team; perhaps Viktor here from Red Hat can give me a
pointer on whom to approach about this?
Andrey
> Which would mean that im->ip_after_call points somewhere wrong.
> And why that would be sporadic is anybody's guess.