linux-block.vger.kernel.org archive mirror
From: Christoph Hellwig <hch@infradead.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>,
	"Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
	"Juergen Gross" <jgross@suse.com>,
	"Stefano Stabellini" <sstabellini@kernel.org>,
	"Jens Axboe" <axboe@kernel.dk>,
	xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	"Marek Marczykowski-Górecki" <marmarek@invisiblethingslab.com>
Subject: Re: [PATCH] xen-blkfront: Handle NULL gendisk
Date: Wed, 1 Jun 2022 23:02:09 -0700
Message-ID: <YphSYfdzy8kekhTZ@infradead.org>
In-Reply-To: <20220601195341.28581-1-jandryuk@gmail.com>

On Wed, Jun 01, 2022 at 03:53:41PM -0400, Jason Andryuk wrote:
> When a VBD is not fully created and is then closed, the kernel can hit
> a NULL pointer dereference:
> 
> The reproducer is trivial:
> 
> [user@dom0 ~]$ sudo xl block-attach work backend=sys-usb vdev=xvdi target=/dev/sdz
> [user@dom0 ~]$ xl block-list work
> Vdev  BE  handle state evt-ch ring-ref BE-path
> 51712 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51712
> 51728 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51728
> 51744 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51744
> 51760 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51760
> 51840 3   241    3     -1     -1       /local/domain/3/backend/vbd/241/51840
>                  ^ note state, the /dev/sdz doesn't exist in the backend
> 
> [user@dom0 ~]$ sudo xl block-detach work xvdi
> [user@dom0 ~]$ xl block-list work
> Vdev  BE  handle state evt-ch ring-ref BE-path
> work is an invalid domain identifier
> 
> And its console has:
> 
> BUG: kernel NULL pointer dereference, address: 0000000000000050
> PGD 80000000edebb067 P4D 80000000edebb067 PUD edec2067 PMD 0
> Oops: 0000 [#1] PREEMPT SMP PTI
> CPU: 1 PID: 52 Comm: xenwatch Not tainted 5.16.18-2.43.fc32.qubes.x86_64 #1
> RIP: 0010:blk_mq_stop_hw_queues+0x5/0x40
> Code: 00 48 83 e0 fd 83 c3 01 48 89 85 a8 00 00 00 41 39 5c 24 50 77 c0 5b 5d 41 5c 41 5d c3 c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 <8b> 47 50 85 c0 74 32 41 54 49 89 fc 55 53 31 db 49 8b 44 24 48 48
> RSP: 0018:ffffc90000bcfe98 EFLAGS: 00010293
> RAX: ffffffffc0008370 RBX: 0000000000000005 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: 0000000000000005 RDI: 0000000000000000
> RBP: ffff88800775f000 R08: 0000000000000001 R09: ffff888006e620b8
> R10: ffff888006e620b0 R11: f000000000000000 R12: ffff8880bff39000
> R13: ffff8880bff39000 R14: 0000000000000000 R15: ffff88800604be00
> FS:  0000000000000000(0000) GS:ffff8880f3300000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000050 CR3: 00000000e932e002 CR4: 00000000003706e0
> Call Trace:
>  <TASK>
>  blkback_changed+0x95/0x137 [xen_blkfront]
>  ? read_reply+0x160/0x160
>  xenwatch_thread+0xc0/0x1a0
>  ? do_wait_intr_irq+0xa0/0xa0
>  kthread+0x16b/0x190
>  ? set_kthread_struct+0x40/0x40
>  ret_from_fork+0x22/0x30
>  </TASK>
> Modules linked in: snd_seq_dummy snd_hrtimer snd_seq snd_seq_device snd_timer snd soundcore ipt_REJECT nf_reject_ipv4 xt_state xt_conntrack nft_counter nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables nfnetlink intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel xen_netfront pcspkr xen_scsiback target_core_mod xen_netback xen_privcmd xen_gntdev xen_gntalloc xen_blkback xen_evtchn ipmi_devintf ipmi_msghandler fuse bpf_preload ip_tables overlay xen_blkfront
> CR2: 0000000000000050
> ---[ end trace 7bc9597fd06ae89d ]---
> RIP: 0010:blk_mq_stop_hw_queues+0x5/0x40
> Code: 00 48 83 e0 fd 83 c3 01 48 89 85 a8 00 00 00 41 39 5c 24 50 77 c0 5b 5d 41 5c 41 5d c3 c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 <8b> 47 50 85 c0 74 32 41 54 49 89 fc 55 53 31 db 49 8b 44 24 48 48
> RSP: 0018:ffffc90000bcfe98 EFLAGS: 00010293
> RAX: ffffffffc0008370 RBX: 0000000000000005 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: 0000000000000005 RDI: 0000000000000000
> RBP: ffff88800775f000 R08: 0000000000000001 R09: ffff888006e620b8
> R10: ffff888006e620b0 R11: f000000000000000 R12: ffff8880bff39000
> R13: ffff8880bff39000 R14: 0000000000000000 R15: ffff88800604be00
> FS:  0000000000000000(0000) GS:ffff8880f3300000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000050 CR3: 00000000e932e002 CR4: 00000000003706e0
> Kernel panic - not syncing: Fatal exception
> Kernel Offset: disabled
> 
> info->rq and info->gd are only set in blkfront_connect(), which is
> called for state 4 (XenbusStateConnected).  Guard against using these
> NULL pointers in blkfront_closing() to avoid the issue.
> 
> The rest of blkfront_closing looks okay.  If info->nr_rings is 0, then
> for_each_rinfo won't do anything.
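
A minimal sketch of the kind of guard being described, assuming the usual
xen-blkfront layout where struct blkfront_info carries ->rq and ->gd
(illustration only, not the diff posted in this thread):

	/*
	 * Sketch, not the posted patch: info->rq and info->gd are only
	 * assigned in blkfront_connect(), so skip the queue handling when
	 * the device never reached XenbusStateConnected.
	 */
	if (info->rq && info->gd)
		blk_mq_stop_hw_queues(info->rq);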
> 
> blkfront_remove also needs to check for non-NULL pointers before
> cleaning up the gendisk and request queue.
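
A corresponding sketch for blkfront_remove(), again illustrative only and
assuming del_gendisk()/blk_cleanup_queue() are the teardown calls in
question on a kernel of this vintage:

	/*
	 * Sketch, not the posted patch: only tear down the gendisk and
	 * request queue if blkfront_connect() ever created them.
	 */
	if (info->gd)
		del_gendisk(info->gd);
	if (info->rq)
		blk_cleanup_queue(info->rq);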
> 
> Fixes: 05d69d950d9d ("xen-blkfront: sanitize the removal state machine")
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

This looks ok, but do we have anything that prevents races between
blkfront_connect, blkfront_closing and blkfront_remove?

Thread overview: 5+ messages
2022-06-01 19:53 [PATCH] xen-blkfront: Handle NULL gendisk Jason Andryuk
2022-06-02  6:02 ` Christoph Hellwig [this message]
2022-06-02 12:22   ` Jason Andryuk
2022-06-02 12:36 ` Juergen Gross
2022-06-23 13:00 ` Juergen Gross
