public inbox for linux-kernel@vger.kernel.org
* REQ_FLUSH, REQ_FUA and open/close of block devices
@ 2011-05-19 15:06 Alex Bligh
From: Alex Bligh @ 2011-05-19 15:06 UTC (permalink / raw)
  To: linux-kernel; +Cc: Alex Bligh

I am doing some work on making REQ_FLUSH and REQ_FUA work with block devices
and have some patches that make them perform as expected with nbd if
an nbd device is mounted (e.g. -t ext3 -o data=journal,barriers=1), and
I see the relevant REQ_FLUSH and REQ_FUA appearing much as expected.

However, if I do a straight dd to the device (which generates an open()
and a close()), I see no barrier activity at all (i.e. no REQ_FLUSH and
no REQ_FUA). It is surprising to me that a close() on a raw device does
not generate a REQ_FLUSH. I cannot imagine the performance overhead
would be significant.
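For comparison, the way userspace gets durability today is an explicit fsync() (or fdatasync()) before the close(); on a block device fd it is the fsync(), not the close(), that results in a cache flush reaching the device. A minimal sketch (the path and function name are illustrative):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write one block to `path` and make it durable before closing.
 * Against a block device node (e.g. /dev/nbd0) it is the fsync(),
 * not the close(), that causes a cache flush (REQ_FLUSH) to be
 * sent down to the device. Returns 0 on success, -1 on error. */
int write_and_flush(const char *path)
{
    char buf[4096];
    memset(buf, 0xab, sizeof(buf));

    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
        close(fd);
        return -1;
    }

    /* close() alone carries no durability guarantee. */
    if (fsync(fd) < 0) {
        close(fd);
        return -1;
    }

    return close(fd);
}
```

This is exactly what dd skips when invoked without conv=fsync, which is why the plain dd above shows no flush activity at all.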

I would have thought this would be useful anyway (if I've written
to a raw device, I'd rather expect the data to hit the disk when I do
the close()),
but my specific application is ensuring cache coherency on live migrate
of virtual servers: if migrating from node A to node B, then when the
hypervisor closes the block device on node A, I want to be sure that any
locally cached write data is written to the remote disk before it
unfreezes node B.
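Assuming close() stays flush-free, the hypervisor would have to enforce that ordering itself with an explicit fsync() before the close on node A. A hypothetical sketch of the handover (the node-B signalling step is a stand-in for whatever mechanism is actually used):

```c
#include <fcntl.h>
#include <unistd.h>

/* Stand-in for whatever tells node B it may unfreeze;
 * purely illustrative. */
static void notify_node_b_unfreeze(void)
{
}

/* Hand the device over: push node A's cached writes all the way to
 * the backing store, close the device, and only then let node B
 * resume. Returns 0 on success, -1 on error. */
int migrate_handover(int dev_fd)
{
    /* On a block device fd, fsync() writes out dirty pages and
     * issues a cache flush to the device -- which close() alone
     * does not do. */
    if (fsync(dev_fd) < 0) {
        close(dev_fd);
        return -1;
    }
    if (close(dev_fd) < 0)
        return -1;

    notify_node_b_unfreeze();
    return 0;
}
```

The point of the ordering is that node B must not see the device until every write cached on node A has reached stable storage.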

Should a close() of a dirty block device result in a REQ_FLUSH?

-- 
Alex Bligh



Thread overview: 9+ messages
2011-05-19 15:06 REQ_FLUSH, REQ_FUA and open/close of block devices Alex Bligh
2011-05-20 12:20 ` Christoph Hellwig
2011-05-21  8:42   ` Alex Bligh
2011-05-22 10:44     ` Christoph Hellwig
2011-05-22 11:17       ` Alex Bligh
2011-05-22 11:26         ` Christoph Hellwig
2011-05-22 12:00           ` Alex Bligh
2011-05-22 12:04             ` Christoph Hellwig
2011-05-22 16:56       ` Jeff Garzik
