From: Avi Kivity
Date: Sun, 19 Oct 2008 22:16:43 +0200
Subject: Re: [Qemu-devel] [RFC] Disk integrity in QEMU
To: Jens Axboe
Cc: Chris Wright, Mark McLoughlin, kvm-devel, Laurent Vivier,
    qemu-devel@nongnu.org, Ryan Harper

Jens Axboe wrote:
>> (it seems I can't turn off the write cache even without losing my data:
>>
> Use hdparm, it's an ATA drive even if Linux currently uses the scsi
> layer for it. Or use sysfs, there's a "cache_type" attribute in the scsi
> disk sysfs directory.
>

Ok. It's moot anyway.

>> "Policy" doesn't mean you shouldn't choose good defaults.
>>
>
> Changing the hardware settings for this kind of behaviour IS most
> certainly policy.
>

Leaving bad hardware settings in place is also policy. But in light of
FUA, the SCSI write cache is not a bad thing, so we should definitely
leave it on.

>> I guess this is the crux. According to my understanding, you shouldn't
>> see such a horrible drop unless the application does synchronous writes
>> explicitly, in which case it is probably worried about data safety.
>>
>
> Then you need to adjust your understanding, because you definitely will
> see a big drop in performance.
>

Can you explain why? This is interesting.

>>> O_DIRECT should just use FUA writes, they are safe with write-back
>>> caching. I'm actually testing such a change just to gauge the
>>> performance impact.
>>>
>>
>> You mean, this is not in mainline yet?
>>
>
> It isn't.
>

What is the time frame for this? 2.6.29?

>> Some googling shows that Windows XP introduced FUA for O_DIRECT and
>> metadata writes as well.
>>
>
> There's a lot of other background information to understand to gauge the
> impact of using e.g. FUA for O_DIRECT in Linux as well. MS basically
> wrote the FUA for ATA proposal, and the original usage pattern (as far
> as I remember) was indeed metadata. Hence it also imposes a priority
> boost in most (all?) drive firmwares, since it's deemed important. So
> just using FUA vs non-FUA is likely to impact the performance of other
> workloads in fairly unknown ways. FUA on non-queuing drives will also
> likely suck for performance, since you're basically going to be blowing
> a drive rev for each IO. And that hurts.
>

Let's assume queueing drives, since these are fairly common these days.

So qemu issuing O_DIRECT writes which turn into FUA writes is safe but
suboptimal.

Has there been talk about exposing the difference between FUA writes and
cached writes to userspace? What about barriers? With a rich enough
userspace interface, qemu can communicate the intentions of the guest
instead of forcing the kernel to make a performance/correctness tradeoff.
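For concreteness, a minimal, untested sketch of what userspace can do
today to get a guest write onto stable storage (the "disk.img" name is
just a placeholder): O_DIRECT bypasses the page cache, but the drive's
write cache still needs an explicit fdatasync(), and whether that flush
really reaches the platter depends on barrier support in the host
filesystem, as discussed above.

#define _GNU_SOURCE              /* for O_DIRECT on Linux/glibc */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder image name; O_DIRECT needs sector-aligned buffers,
     * offsets and lengths. */
    int fd = open("disk.img", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    void *buf;

    if (fd < 0 || posix_memalign(&buf, 4096, 4096))
        return 1;
    memset(buf, 0, 4096);

    if (pwrite(fd, buf, 4096, 0) != 4096)   /* skips the page cache...      */
        return 1;
    if (fdatasync(fd) < 0)                  /* ...but not the drive's cache */
        return 1;

    free(buf);
    close(fd);
    return 0;
}

If O_DIRECT grows FUA semantics as described above, the explicit flush
becomes redundant for the data blocks themselves (metadata is another
story).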
>>
>> What about the users who aren't on qemu-devel?
>>
>
> It may be news to you, but it has been debated on lkml in the past as
> well. Not even that long ago, and I'd be surprised if lwn didn't run
> some article on it as well.

Let's postulate the existence of a user who doesn't read lkml or even lwn.

> But I agree it's important information, but
> realize that until just recently most people didn't really consider it a
> likely scenario in practice...
>
> I wrote and committed the original barrier implementation in Linux in
> 2001, and just this year XFS made it a default mount option. After the
> recent debacle on this on lkml, ext4 made it the default as well.
>
> So let me turn it around a bit - if this issue really did hit lots of
> people out there in real life, don't you think there would have been
> more noise about this and we would have made this the default years ago?
> So while we both agree it's a risk, it's not a huuuge risk...
>

I agree, not a huge risk. I guess compared to the rest of the suckiness
involved (it took a long while just to get journalling), this is really a
minor issue.

It's interesting, though, that Windows supported this in 2001, seven
years ago, so at least they considered it important.

I guess I'm sensitive to this because in my filesystemy past, QA would
yank out data and power cables while running various tests and act
surprised whenever data was lost. So I'm allergic to data loss.

With qemu (at least when used with a hypervisor) we have to be extra
safe, since we have no idea what workload is running or how critical data
safety is. Well, we do have hints (whether FUA is set or not) when using
SCSI, but right now we don't have a way of communicating these hints to
the kernel.

One important takeaway is to find out whether virtio-blk supports FUA,
and if not, to add it.

>> However, with your FUA change, they should be safe.
>>
>
> Yes, that would make O_DIRECT safe always. Except when it falls back to
> buffered IO, woops...
>

Woops.

>> Any write latency is buffered by the kernel. Write speed is main memory
>> speed. Disk speed only bubbles up when memory is tight.
>>
>
> That's a nice theory, in practice that is completely wrong. You end up
> waiting on writes for LOTS of other reasons!
>

Journal commits? Can you elaborate?

In the filesystem I worked on, one would never wait on a write to disk
unless memory was full. Even synchronous writes were serviced
immediately, since the system had a battery-backed replicated cache. I
guess the situation with Linux filesystems is different.

-- 
I have a truly marvellous patch that fixes the bug which this signature
is too narrow to contain.
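To put rough numbers behind the write-latency question above, an
untested sketch of the kind of measurement being argued about (the
"latency.dat" scratch file is just a placeholder): the buffered write()s
complete at memory speed, and the disk, plus whatever journal commit the
filesystem issues on our behalf, only shows up at the fdatasync().

#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static double seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    static char buf[1 << 16];            /* 64 KB chunk, zero-initialized */
    int fd = open("latency.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    int i;

    if (fd < 0)
        return 1;

    double t0 = seconds();
    for (i = 0; i < 1024; i++)           /* 64 MB into the page cache */
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
            return 1;
    double t1 = seconds();

    fdatasync(fd);                       /* now actually wait for the disk */
    double t2 = seconds();

    printf("write(): %.3fs   fdatasync(): %.3fs\n", t1 - t0, t2 - t1);
    close(fd);
    return 0;
}

On a host with the write cache enabled and no barriers, even the second
number can be misleadingly small, which is the whole point of this
thread.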