* [Qemu-devel] aio context ownership during bdrv_close()
@ 2019-04-26 12:24 Anton Kuchin
From: Anton Kuchin @ 2019-04-26 12:24 UTC (permalink / raw)
  To: qemu-block
  Cc: qemu-devel, Max Reitz, Kevin Wolf,
	yc-core (mailing list)

I can't figure out the ownership of the aio context during bdrv_close().

As far as I understand, bdrv_unref() should be called with the aio 
context acquired, to prevent concurrent operations (at least most 
usages in blockdev.c explicitly acquire and release the context, but 
not all of them do).
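
For reference, the acquire/unref/release pattern I mean is roughly this 
(a sketch of what most blockdev.c callers do, not a quote of any 
particular function; ctx and bs are placeholders):

    AioContext *ctx = bdrv_get_aio_context(bs);

    aio_context_acquire(ctx);
    bdrv_unref(bs);      /* may trigger bdrv_delete()/bdrv_close() */
    aio_context_release(ctx);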

But if the refcount reaches zero and the bs is going to be deleted in 
bdrv_close(), we need to ensure that draining is finished, data is 
flushed, and there are no more pending coroutines and bottom halves. 
The drain and flush functions can enter a coroutine and yield in 
several places, so control returns to the coroutine caller, which then 
releases the aio context; when the completion BH later continues the 
cleanup process, it runs without ownership of the context. Is this a 
valid situation?
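
To make the sequence I'm worried about concrete, this is my reading of 
the code (simplified, not an exact call chain):

    aio_context_acquire(ctx);
    bdrv_unref(bs);                /* refcnt reaches zero */
        bdrv_delete(bs);
            bdrv_close(bs);
                /* drain/flush enter a coroutine and yield */
    aio_context_release(ctx);      /* caller drops the context */
    ...
    /* the completion BH later resumes the cleanup, but nothing
     * re-acquires the aio context for it */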

Moreover, if a yield happens, the bs being deleted has a zero refcount 
but is still present in the graph_bdrv_states and all_bdrv_states 
lists, and can be accessed by accident. Shouldn't we remove it from 
these lists as soon as the deletion process starts, as we already do 
for monitor_bdrv_states?
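
Something along these lines is what I have in mind (only a sketch that 
paraphrases bdrv_delete() in block.c rather than quoting it; the 
node_list/bs_list entry names are from memory):

    static void bdrv_delete(BlockDriverState *bs)
    {
        assert(!bs->refcnt);

        /* Unlink bs before bdrv_close() has a chance to yield, so a
         * node with refcnt == 0 is no longer reachable through the
         * global lists. */
        if (bs->node_name[0] != '\0') {
            QTAILQ_REMOVE(&graph_bdrv_states, bs, node_list);
        }
        QTAILQ_REMOVE(&all_bdrv_states, bs, bs_list);

        bdrv_close(bs);
        g_free(bs);
    }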
