* [PATCH] Fix: buffer overflow during hvc_alloc().
From: andrew @ 2020-04-05 20:40 UTC
To: virtualization; +Cc: gregkh, jslaby, linuxppc-dev, linux-kernel
From: Andrew Melnychenko <andrew@daynix.com>
If there are many (more than 16) virtio-console devices, or the
virtio_console module is reloaded, the buffers 'vtermnos' and
'cons_ops' overflow. In older kernels the overflowing write corrupts
an adjacent spinlock, which freezes the kernel:
https://bugzilla.redhat.com/show_bug.cgi?id=1786239
Signed-off-by: Andrew Melnychenko <andrew@daynix.com>
---
drivers/tty/hvc/hvc_console.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
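For illustration, here is a minimal userspace model of the index
allocation before and after this patch. This is a simplified sketch,
not the kernel code: the names mirror drivers/tty/hvc/hvc_console.c,
but locking, the hvc_struct lookup and cons_ops handling are omitted,
and the out-of-bounds store itself is elided so the demo is safe to run.

/*
 * Sketch of hvc_console index allocation, pre- and post-patch.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_NR_HVC_CONSOLES 16

static uint32_t vtermnos[MAX_NR_HVC_CONSOLES] = {
	[0 ... MAX_NR_HVC_CONSOLES - 1] = (uint32_t)-1
};
static int last_hvc = -1;

/* Pre-patch: an ever-growing counter indexes a 16-entry array. */
static int alloc_index_prepatch(uint32_t vtermno)
{
	int i;

	for (i = 0; i < MAX_NR_HVC_CONSOLES; i++)
		if (vtermnos[i] == vtermno)
			return i;

	/* No match: take the next counter value, with no upper bound. */
	i = ++last_hvc;
	/*
	 * The kernel then stores vtermnos[i] and cons_ops[i]
	 * unconditionally - an out-of-bounds write once i >= 16,
	 * which is what corrupts the neighbouring spinlock.
	 */
	return i;
}

/* Post-patch: reuse a free slot, and write only within bounds. */
static int alloc_index_patched(uint32_t vtermno)
{
	int i;

	for (i = 0; i < MAX_NR_HVC_CONSOLES; i++)
		if (vtermnos[i] == vtermno)
			break;

	if (i >= MAX_NR_HVC_CONSOLES) {
		/* find an 'empty' slot for the console */
		for (i = 0; i < MAX_NR_HVC_CONSOLES &&
			    vtermnos[i] != (uint32_t)-1; i++)
			;

		/* no free slot: park the index beyond the array */
		if (i == MAX_NR_HVC_CONSOLES)
			i = ++last_hvc + MAX_NR_HVC_CONSOLES;
	}

	if (i < MAX_NR_HVC_CONSOLES)
		vtermnos[i] = vtermno;
	return i;
}

int main(void)
{
	uint32_t v;
	int i;

	/*
	 * More than 16 distinct vterms - or repeated module reloads,
	 * since last_hvc lives in hvc_console and survives a
	 * virtio_console unload - push the pre-patch index past the
	 * array.
	 */
	for (v = 100; v < 120; v++) {
		i = alloc_index_prepatch(v);
		printf("pre-patch: vterm %u -> index %d%s\n", v, i,
		       i >= MAX_NR_HVC_CONSOLES ? " <-- out of bounds" : "");
	}

	for (v = 100; v < 120; v++) {
		i = alloc_index_patched(v);
		printf("patched:   vterm %u -> index %d%s\n", v, i,
		       i >= MAX_NR_HVC_CONSOLES ? " (no slot, array untouched)" : "");
	}
	return 0;
}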
diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
index 27284a2dcd2b..436cc51c92c3 100644
--- a/drivers/tty/hvc/hvc_console.c
+++ b/drivers/tty/hvc/hvc_console.c
@@ -302,10 +302,6 @@ int hvc_instantiate(uint32_t vtermno, int index, const struct hv_ops *ops)
 	vtermnos[index] = vtermno;
 	cons_ops[index] = ops;

-	/* reserve all indices up to and including this index */
-	if (last_hvc < index)
-		last_hvc = index;
-
 	/* check if we need to re-register the kernel console */
 	hvc_check_console(index);

@@ -960,13 +956,22 @@ struct hvc_struct *hvc_alloc(uint32_t vtermno, int data,
 		    cons_ops[i] == hp->ops)
 			break;

-	/* no matching slot, just use a counter */
-	if (i >= MAX_NR_HVC_CONSOLES)
-		i = ++last_hvc;
+	if (i >= MAX_NR_HVC_CONSOLES) {
+
+		/* find 'empty' slot for console */
+		for (i = 0; i < MAX_NR_HVC_CONSOLES && vtermnos[i] != -1; i++) {
+		}
+
+		/* no matching slot, just use a counter */
+		if (i == MAX_NR_HVC_CONSOLES)
+			i = ++last_hvc + MAX_NR_HVC_CONSOLES;
+	}

 	hp->index = i;
-	cons_ops[i] = ops;
-	vtermnos[i] = vtermno;
+	if (i < MAX_NR_HVC_CONSOLES) {
+		cons_ops[i] = ops;
+		vtermnos[i] = vtermno;
+	}

 	list_add_tail(&(hp->next), &hvc_structs);
 	mutex_unlock(&hvc_structs_mutex);
--
2.24.1
* Re: [PATCH] Fix: buffer overflow during hvc_alloc().
From: Andrew Donnellan @ 2020-04-06 0:31 UTC
To: andrew, virtualization; +Cc: gregkh, linuxppc-dev, linux-kernel, jslaby
On 6/4/20 6:40 am, andrew@daynix.com wrote:
> From: Andrew Melnychenko <andrew@daynix.com>
>
> If there are many (more than 16) virtio-console devices, or the
> virtio_console module is reloaded, the buffers 'vtermnos' and
> 'cons_ops' overflow. In older kernels the overflowing write corrupts
> an adjacent spinlock, which freezes the kernel:
> https://bugzilla.redhat.com/show_bug.cgi?id=1786239
>
This Bugzilla report isn't publicly accessible. Can you include a
relevant summary here and/or make the report publicly viewable?
If it does indeed lead to a kernel freeze, this should be tagged with a
Fixes: and a Cc: stable@vger.kernel.org.
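For example (the hash and subject below are placeholders - this thread
doesn't identify the commit that introduced the overflow):

Fixes: 123456789abc ("placeholder: commit that introduced the last_hvc indexing")
Cc: stable@vger.kernel.org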
--
Andrew Donnellan OzLabs, ADL Canberra
ajd@linux.ibm.com IBM Australia Limited
* Re: [PATCH] Fix: buffer overflow during hvc_alloc().
From: Andrew Melnichenko @ 2020-04-06 8:05 UTC
To: Andrew Donnellan; +Cc: gregkh, virtualization, linux-kernel, jslaby
>
> Description of problem:
> The guest hits a 'Call Trace' when the "virtio_console" module is
> loaded and unloaded repeatedly
>
>
> Version-Release number of selected component (if applicable):
> Guest
> kernel-4.18.0-167.el8.x86_64
> seabios-bin-1.11.1-4.module+el8.1.0+4066+0f1aadab.noarch
> # modinfo virtio_console
> filename: /lib/modules/4.18.0-
> 167.el8.x86_64/kernel/drivers/char/virtio_console.ko.xz
> license: GPL
> description: Virtio console driver
> rhelversion: 8.2
> srcversion: 55224090DD07750FAD75C9C
> alias: virtio:d00000003v*
> depends:
> intree: Y
> name: virtio_console
> vermagic: 4.18.0-167.el8.x86_64 SMP mod_unload modversions
> Host:
> qemu-kvm-4.2.0-2.scrmod+el8.2.0+5159+d8aa4d83.x86_64
> kernel-4.18.0-165.el8.x86_64
> seabios-bin-1.12.0-5.scrmod+el8.2.0+5159+d8aa4d83.noarch
>
>
>
> How reproducible: 100%
>
>
> Steps to Reproduce:
>
> 1. boot guest with command [1]
> 2. load and unload virtio_console inside guest with loop.sh
> # cat loop.sh
> while [ 1 ]
> do
> modprobe virtio_console
> lsmod | grep virt
> modprobe -r virtio_console
> lsmod | grep virt
> done
>
>
>
> Actual results:
> The guest reboots; a vmcore-dmesg.txt file can be collected
>
>
> Expected results:
> The guest keeps working without errors
>
>
> Additional info:
> The full log is attached.
>
> Call Trace:
> [ 22.974500] fuse: init (API version 7.31)
> [ 81.498208] ------------[ cut here ]------------
> [ 81.499263] pvqspinlock: lock 0xffffffff92080020 has corrupted value
> 0xc0774ca0!
> [ 81.501000] WARNING: CPU: 0 PID: 785 at
> kernel/locking/qspinlock_paravirt.h:500
> __pv_queued_spin_unlock_slowpath+0xc0/0xd0
> [ 81.503173] Modules linked in: virtio_console fuse xt_CHECKSUM
> ipt_MASQUERADE xt_conntrack ipt_REJECT nft_counter nf_nat_tftp nft_objref
> nf_conntrack_tftp tun bridge stp llc nft_fib_inet nft_fib_ipv4 nft_fib_ipv6
> nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
> nf_tables_set nft_chain_nat_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6
> nf_nat_ipv6 nft_chain_route_ipv6 nft_chain_nat_ipv4 nf_conntrack_ipv4
> nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack nft_chain_route_ipv4
> ip6_tables nft_compat ip_set nf_tables nfnetlink sunrpc bochs_drm
> drm_vram_helper ttm drm_kms_helper syscopyarea sysfillrect sysimgblt
> fb_sys_fops drm i2c_piix4 pcspkr crct10dif_pclmul crc32_pclmul joydev
> ghash_clmulni_intel ip_tables xfs libcrc32c sd_mod sg ata_generic ata_piix
> virtio_net libata crc32c_intel net_failover failover serio_raw virtio_scsi
> dm_mirror dm_region_hash dm_log dm_mod [last unloaded: virtio_console]
> [ 81.517019] CPU: 0 PID: 785 Comm: kworker/0:2 Kdump: loaded Not tainted
> 4.18.0-167.el8.x86_64 #1
> [ 81.518639] Hardware name: Red Hat KVM, BIOS
> 1.12.0-5.scrmod+el8.2.0+5159+d8aa4d83 04/01/2014
> [ 81.520205] Workqueue: events control_work_handler [virtio_console]
> [ 81.521354] RIP: 0010:__pv_queued_spin_unlock_slowpath+0xc0/0xd0
> [ 81.522450] Code: 07 00 48 63 7a 10 e8 bf 64 f5 ff 66 90 c3 8b 05 e6 cf
> d6 01 85 c0 74 01 c3 8b 17 48 89 fe 48 c7 c7 38 4b 29 91 e8 3a 6c fa ff
> <0f> 0b c3 0f 0b 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 48
> [ 81.525830] RSP: 0018:ffffb51a01ffbd70 EFLAGS: 00010282
> [ 81.526798] RAX: 0000000000000000 RBX: 0000000000000010 RCX:
> 0000000000000000
> [ 81.528110] RDX: ffff9e66f1826480 RSI: ffff9e66f1816a08 RDI:
> ffff9e66f1816a08
> [ 81.529437] RBP: ffffffff9153ff10 R08: 000000000000026c R09:
> 0000000000000053
> [ 81.530732] R10: 0000000000000000 R11: ffffb51a01ffbc18 R12:
> ffff9e66cd682200
> [ 81.532133] R13: ffffffff9153ff10 R14: ffff9e6685569500 R15:
> ffff9e66cd682000
> [ 81.533442] FS: 0000000000000000(0000) GS:ffff9e66f1800000(0000)
> knlGS:0000000000000000
> [ 81.534914] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 81.535971] CR2: 00005624c55b14d0 CR3: 00000003a023c000 CR4:
> 00000000003406f0
> [ 81.537283] Call Trace:
> [ 81.537763]
> __raw_callee_save___pv_queued_spin_unlock_slowpath+0x11/0x20
> [ 81.539011] .slowpath+0x9/0xe
> [ 81.539585] hvc_alloc+0x25e/0x300
> [ 81.540237] init_port_console+0x28/0x100 [virtio_console]
> [ 81.541251] handle_control_message.constprop.27+0x1c4/0x310
> [virtio_console]
> [ 81.542546] control_work_handler+0x70/0x10c [virtio_console]
> [ 81.543601] process_one_work+0x1a7/0x3b0
> [ 81.544356] worker_thread+0x30/0x390
> [ 81.545025] ? create_worker+0x1a0/0x1a0
> [ 81.545749] kthread+0x112/0x130
> [ 81.546358] ? kthread_flush_work_fn+0x10/0x10
> [ 81.547183] ret_from_fork+0x22/0x40
> [ 81.547842] ---[ end trace aa97649bd16c8655 ]---
> [ 83.546539] general protection fault: 0000 [#1] SMP NOPTI
> [ 83.547422] CPU: 5 PID: 3225 Comm: modprobe Kdump: loaded Tainted: G
> W --------- - - 4.18.0-167.el8.x86_64 #1
> [ 83.549191] Hardware name: Red Hat KVM, BIOS
> 1.12.0-5.scrmod+el8.2.0+5159+d8aa4d83 04/01/2014
> [ 83.550544] RIP: 0010:__pv_queued_spin_lock_slowpath+0x19a/0x2a0
> [ 83.551504] Code: c4 c1 ea 12 41 be 01 00 00 00 4c 8d 6d 14 41 83 e4 03
> 8d 42 ff 49 c1 e4 05 48 98 49 81 c4 40 a5 02 00 4c 03 24 c5 60 48 34 91
> <49> 89 2c 24 b8 00 80 00 00 eb 15 84 c0 75 0a 41 0f b6 54 24 14 84
> [ 83.554449] RSP: 0018:ffffb51a0323fdb0 EFLAGS: 00010202
> [ 83.555290] RAX: 000000000000301c RBX: ffffffff92080020 RCX:
> 0000000000000001
> [ 83.556426] RDX: 000000000000301d RSI: 0000000000000000 RDI:
> 0000000000000000
> [ 83.557556] RBP: ffff9e66f196a540 R08: 000000000000028a R09:
> ffff9e66d2757788
> [ 83.558688] R10: 0000000000000000 R11: 0000000000000000 R12:
> 646e61725f770b07
> [ 83.559821] R13: ffff9e66f196a554 R14: 0000000000000001 R15:
> 0000000000180000
> [ 83.560958] FS: 00007fd5032e8740(0000) GS:ffff9e66f1940000(0000)
> knlGS:0000000000000000
> [ 83.562233] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 83.563149] CR2: 00007fd5022b0da0 CR3: 000000038c334000 CR4:
> 00000000003406e0
>
>
>
> Command[1]
> /usr/libexec/qemu-kvm \
> -S \
> -name 'avocado-vt-vm1' \
> -sandbox on \
> -machine pc \
> -nodefaults \
> -device VGA,bus=pci.0,addr=0x2 \
> -m 14336 \
> -smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2 \
> -cpu 'EPYC',+kvm_pv_unhalt \
> -chardev
> socket,id=qmp_id_qmpmonitor1,path=/var/tmp/avocado_6ae2tn_f/monitor-qmpmonitor1-20191223-061943-73qpR3NF,server,nowait
> \
> -mon chardev=qmp_id_qmpmonitor1,mode=control \
> -chardev
> socket,id=qmp_id_catch_monitor,path=/var/tmp/avocado_6ae2tn_f/monitor-catch_monitor-20191223-061943-73qpR3NF,server,nowait
> \
> -mon chardev=qmp_id_catch_monitor,mode=control \
> -device pvpanic,ioport=0x505,id=idhlgw9I \
> -chardev
> socket,id=chardev_serial0,path=/var/tmp/avocado_6ae2tn_f/serial-serial0-20191223-061943-73qpR3NF,server,nowait
> \
> -device isa-serial,id=serial0,chardev=chardev_serial0 \
> -chardev
> socket,id=chardev_vc1,path=/var/tmp/avocado_6ae2tn_f/serial-vc1-20191223-061943-73qpR3NF,server,nowait
> \
> -device virtio-serial-pci,id=virtio_serial_pci0,bus=pci.0,addr=0x3 \
> -device
> virtserialport,id=vc1,name=vc1,chardev=chardev_vc1,bus=virtio_serial_pci0.0,nr=1
> \
> -chardev
> socket,id=chardev_vc2,path=/var/tmp/avocado_6ae2tn_f/serial-vc2-20191223-061943-73qpR3NF,server,nowait
> \
> -device virtio-serial-pci,id=virtio_serial_pci1,bus=pci.0,addr=0x4 \
> -device
> virtconsole,id=vc2,name=vc2,chardev=chardev_vc2,bus=virtio_serial_pci1.0,nr=1
> \
> -chardev
> socket,id=seabioslog_id_20191223-061943-73qpR3NF,path=/var/tmp/avocado_6ae2tn_f/seabios-20191223-061943-73qpR3NF,server,nowait
> \
> -device
> isa-debugcon,chardev=seabioslog_id_20191223-061943-73qpR3NF,iobase=0x402 \
> -device qemu-xhci,id=usb1,bus=pci.0,addr=0x5 \
> -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x6 \
> -drive
> id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel820-64-virtio-scsi.qcow2
> \
> -device scsi-hd,id=image1,drive=drive_image1 \
> -device
> virtio-net-pci,mac=9a:43:0d:1d:8d:d1,id=idmHPkFv,netdev=idTPtFdd,bus=pci.0,addr=0x7
> \
> -netdev tap,id=idTPtFdd,vhost=on \
> -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
> -vnc :0 \
> -rtc base=utc,clock=host,driftfix=slew \
> -boot menu=off,order=cdn,once=c,strict=off \
> -enable-kvm \
> -monitor stdio \
> -qmp tcp:0:4444,server,nowait \
* Re: [PATCH] Fix: buffer overflow during hvc_alloc().
From: Andrew Donnellan @ 2020-04-07 6:23 UTC
To: Andrew Melnichenko; +Cc: gregkh, linux-kernel, jslaby, virtualization
On 6/4/20 6:05 pm, Andrew Melnichenko wrote:
>
> Steps to Reproduce:
>
> 1. boot guest with command [1]
> 2. load and unload virtio_console inside guest with loop.sh
> # cat loop.sh
> while [ 1 ]
> do
> modprobe virtio_console
> lsmod | grep virt
> modprobe -r virtio_console
> lsmod | grep virt
> done
>
>
>
> Actual results:
> The guest reboots; a vmcore-dmesg.txt file can be collected
>
>
> Expected results:
> The guest keeps working without errors
>
>
> Additional info:
> The full log is attached.
>
> Call Trace:
> [ 22.974500] fuse: init (API version 7.31)
> [ 81.498208] ------------[ cut here ]------------
> [ 81.499263] pvqspinlock: lock 0xffffffff92080020 has corrupted
> value 0xc0774ca0!
> [ 81.501000] WARNING: CPU: 0 PID: 785 at
> kernel/locking/qspinlock_paravirt.h:500
[snip]
Thanks!
You should include an appropriate excerpt from this - the WARNING
message and stack trace, and the steps to reproduce - in the commit
message of the patch.
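For example, condensed from the log quoted above (illustrative only,
not final wording):

    pvqspinlock: lock 0xffffffff92080020 has corrupted value 0xc0774ca0!
    WARNING: CPU: 0 PID: 785 at kernel/locking/qspinlock_paravirt.h:500
    Call Trace:
     hvc_alloc+0x25e/0x300
     init_port_console+0x28/0x100 [virtio_console]
     handle_control_message.constprop.27+0x1c4/0x310 [virtio_console]
     control_work_handler+0x70/0x10c [virtio_console]

    Reproducer: in a guest with a virtconsole device attached, loop
    'modprobe virtio_console; modprobe -r virtio_console'.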
--
Andrew Donnellan OzLabs, ADL Canberra
ajd@linux.ibm.com IBM Australia Limited