From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rusty Russell
Subject: Re: kernel BUG at drivers/block/virtio_blk.c:172!
Date: Mon, 10 Nov 2014 20:29:50 +1030
Message-ID: <87bnofnzop.fsf@rustcorp.com.au>
References: <20141107080416.0837a88c@tlielax.poochiereds.net>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <20141107080416.0837a88c@tlielax.poochiereds.net>
Sender: virtualization-bounces@lists.linux-foundation.org
Errors-To: virtualization-bounces@lists.linux-foundation.org
To: Jeff Layton, "Michael S. Tsirkin", Dave Chinner, Jens Axboe
Cc: xfs@oss.sgi.com, virtualization@lists.linux-foundation.org
List-Id: virtualization@lists.linuxfoundation.org

Jeff Layton writes:
> In the latest Fedora rawhide kernel in the repos, I'm seeing the
> following oops when mounting xfs. rc2-ish kernels seem to be fine:
>
> [ 64.669633] ------------[ cut here ]------------
> [ 64.670008] kernel BUG at drivers/block/virtio_blk.c:172!

Hmm, that's:

	BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);

But during our probe routine we said:

	/* We can handle whatever the host told us to handle. */
	blk_queue_max_segments(q, vblk->sg_elems-2);

Jens?

Thanks,
Rusty.
> [ 64.670008] invalid opcode: 0000 [#1] SMP
> [ 64.670008] Modules linked in: xfs libcrc32c snd_hda_codec_generic snd_hda_intel snd_hda_controller snd_hda_codec snd_hwdep snd_seq snd_seq_device snd_pcm ppdev snd_timer snd virtio_net virtio_balloon soundcore serio_raw parport_pc virtio_console pvpanic parport i2c_piix4 nfsd auth_rpcgss nfs_acl lockd grace sunrpc qxl virtio_blk drm_kms_helper ttm drm ata_generic virtio_pci virtio_ring virtio pata_acpi
> [ 64.670008] CPU: 1 PID: 705 Comm: mount Not tainted 3.18.0-0.rc3.git2.1.fc22.x86_64 #1
> [ 64.670008] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
> [ 64.670008] task: ffff8800d94a4ec0 ti: ffff8800d9f38000 task.ti: ffff8800d9f38000
> [ 64.670008] RIP: 0010:[] [] virtio_queue_rq+0x290/0x2a0 [virtio_blk]
> [ 64.670008] RSP: 0018:ffff8800d9f3b778 EFLAGS: 00010202
> [ 64.670008] RAX: 0000000000000082 RBX: ffff8800d8375700 RCX: dead000000200200
> [ 64.670008] RDX: 0000000000000001 RSI: ffff8800d8375700 RDI: ffff8800d82c4c00
> [ 64.670008] RBP: ffff8800d9f3b7b8 R08: ffff8800d8375700 R09: 0000000000000001
> [ 64.670008] R10: 0000000000000001 R11: 0000000000000004 R12: ffff8800d9f3b7e0
> [ 64.670008] R13: ffff8800d82c4c00 R14: ffff880118629200 R15: 0000000000000000
> [ 64.670008] FS: 00007f5c64dfd840(0000) GS:ffff88011b000000(0000) knlGS:0000000000000000
> [ 64.670008] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> [ 64.670008] CR2: 00007fffe6458fb8 CR3: 00000000d06d3000 CR4: 00000000000006e0
> [ 64.670008] Stack:
> [ 64.670008] ffff880000000001 ffff8800d8375870 0000000000000001 ffff8800d82c4c00
> [ 64.670008] ffff8800d9f3b7e0 0000000000000000 ffff8800d8375700 ffff8800d82c4c48
> [ 64.670008] ffff8800d9f3b828 ffffffff813ec258 ffff8800d82c8000 0000000000000001
> [ 64.670008] Call Trace:
> [ 64.670008] [] __blk_mq_run_hw_queue+0x1c8/0x330
> [ 64.670008] [] blk_mq_run_hw_queue+0x70/0x90
> [ 64.670008] [] blk_sq_make_request+0x24d/0x5c0
> [ 64.670008] [] generic_make_request+0xf8/0x150
> [ 64.670008] [] submit_bio+0x78/0x190
> [ 64.670008] [] _xfs_buf_ioapply+0x2be/0x5f0 [xfs]
> [ 64.670008] [] ? xlog_bread_noalign+0xa8/0xe0 [xfs]
> [ 64.670008] [] xfs_buf_submit_wait+0x91/0x840 [xfs]
> [ 64.670008] [] xlog_bread_noalign+0xa8/0xe0 [xfs]
> [ 64.670008] [] xlog_bread+0x27/0x60 [xfs]
> [ 64.670008] [] xlog_find_verify_cycle+0xf3/0x1b0 [xfs]
> [ 64.670008] [] xlog_find_head+0x2f5/0x3e0 [xfs]
> [ 64.670008] [] xlog_find_tail+0x3c/0x410 [xfs]
> [ 64.670008] [] xlog_recover+0x2d/0x120 [xfs]
> [ 64.670008] [] ? xfs_trans_ail_init+0xcb/0x100 [xfs]
> [ 64.670008] [] xfs_log_mount+0xdd/0x2c0 [xfs]
> [ 64.670008] [] xfs_mountfs+0x514/0x9c0 [xfs]
> [ 64.670008] [] ? xfs_mru_cache_create+0x18d/0x1f0 [xfs]
> [ 64.670008] [] xfs_fs_fill_super+0x330/0x3b0 [xfs]
> [ 64.670008] [] mount_bdev+0x1bc/0x1f0
> [ 64.670008] [] ? xfs_parseargs+0xbe0/0xbe0 [xfs]
> [ 64.670008] [] xfs_fs_mount+0x15/0x20 [xfs]
> [ 64.670008] [] mount_fs+0x38/0x1c0
> [ 64.670008] [] ? __alloc_percpu+0x15/0x20
> [ 64.670008] [] vfs_kern_mount+0x68/0x160
> [ 64.670008] [] do_mount+0x22c/0xc20
> [ 64.670008] [] ? might_fault+0x5e/0xc0
> [ 64.670008] [] ? memdup_user+0x4b/0x90
> [ 64.670008] [] SyS_mount+0x9e/0x100
> [ 64.670008] [] system_call_fastpath+0x12/0x17
> [ 64.670008] Code: 00 00 c7 86 78 01 00 00 02 00 00 00 48 c7 86 80 01 00 00 00 00 00 00 89 86 7c 01 00 00 e9 02 fe ff ff 66 0f 1f 84 00 00 00 00 00 <0f> 0b 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00
> [ 64.670008] RIP [] virtio_queue_rq+0x290/0x2a0 [virtio_blk]
> [ 64.670008] RSP
> [ 64.715347] ---[ end trace c0ff4a0f2fb21f7f ]---
>
> It's reliably reproducible and I don't see this oops when I convert the
> same block device to ext4 and mount it. In this setup, the KVM guest
> has a virtio block device that has a LVM2 PV on it with an LV on it
> that contains the filesystem.
>
> Let me know if you need any other info to chase this down.
>
> Thanks!
> --
> Jeff Layton