Message-ID: <4C888287.8020209@redhat.com>
Date: Thu, 09 Sep 2010 09:45:27 +0300
From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC] qed: Add QEMU Enhanced Disk format
References: <1283767478-16740-1-git-send-email-stefanha@linux.vnet.ibm.com>
 <4C84E738.3020802@codemonkey.ws> <4C865187.6090508@redhat.com>
 <4C865CFE.7010508@codemonkey.ws> <4C8663C4.1090508@redhat.com>
 <4C866773.2030103@codemonkey.ws> <4C86BC6B.5010809@codemonkey.ws>
 <4C874812.9090807@redhat.com> <4C87860A.3060904@codemonkey.ws>
In-Reply-To: <4C87860A.3060904@codemonkey.ws>
List-Id: qemu-devel.nongnu.org
To: Anthony Liguori
Cc: Kevin Wolf, Stefan Hajnoczi, qemu-devel@nongnu.org

On 09/08/2010 03:48 PM, Anthony Liguori wrote:
> On 09/08/2010 03:23 AM, Avi Kivity wrote:
>> On 09/08/2010 01:27 AM, Anthony Liguori wrote:
>>>> FWIW, L2s are 256K at the moment and with a two level table, it can
>>>> support 5PB of data.
>>>
>>> I clearly suck at basic math today. The image supports 64TB today.
>>> Dropping to 128K tables would reduce it to 16TB and 64k tables would
>>> be 4TB.
>>
>> Maybe we should do three levels then. Some users are bound to
>> complain about 64TB.
>
> That's just the default size. The table size and cluster sizes are
> configurable. Without changing the cluster size, the image can
> support up to 1PB.

Loading very large L2 tables on demand will result in very long
latencies. Increasing the cluster size will result in very long first
write latencies. Adding an extra level results in an extra random
write every 4TB.
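A quick back-of-envelope sketch of where these capacity figures come
from, assuming 8-byte table entries and the 64k default cluster size
quoted above (the qed_capacity helper is purely illustrative, not code
from the qed patches):

# Illustrative only: capacity of a two-level table, assuming 8-byte entries.
def qed_capacity(table_size, cluster_size, entry_size=8):
    entries = table_size // entry_size       # entries per L1 or L2 table
    return entries * entries * cluster_size  # L1 entries x L2 entries x cluster

TB = 1024 ** 4
print(qed_capacity(256 * 1024, 64 * 1024) / TB)    # 256k tables, 64k clusters: 64 TB
print(qed_capacity(128 * 1024, 64 * 1024) / TB)    # 128k tables: 16 TB
print(qed_capacity(64 * 1024, 64 * 1024) / TB)     # 64k tables: 4 TB
print(qed_capacity(1024 * 1024, 64 * 1024) / TB)   # 1M tables: 1024 TB (~1 PB)

# One 256k L2 table maps 32768 clusters of 64k = 2 GB, which is also why a
# metadata sync is only needed once per 2 GB of newly allocated data.
print((256 * 1024 // 8) * 64 * 1024 / 1024 ** 3)   # 2.0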
>>> Today, we only need to sync() when we first allocate an L2 entry
>>> (because their locations never change). From a performance
>>> perspective, it's the difference between an fsync() every 64k vs.
>>> every 2GB.
>>
>> Yup. From a correctness perspective, it's the difference between a
>> corrupted filesystem on almost every crash and a corrupted filesystem
>> in some very rare cases.
>
> I'm not sure I understand your corruption comment. Are you claiming
> that without checksumming you'll often get corruption, or are you
> claiming that without checksums, if you don't sync metadata updates,
> you'll get corruption?

No, I'm claiming that with checksums but without allocate-on-write you
will have frequent (detected) data loss after power failures.
Checksums need to go hand in hand with allocate-on-write (which happens
to be the principle underlying zfs and btrfs).

> qed is very careful about ensuring that we don't need to do syncs and
> we don't get corruption because of data loss. I don't necessarily buy
> your checksumming argument.

The requirement for checksumming comes from a different place. For
decades we've enjoyed very low undetected bit error rates. However,
the amount of data stored is increasing to the point where an
undetected bit error becomes likely, simply because so many bits are
being thrown at storage. Write ordering doesn't address this issue.
Virtualization is one of the uses where you have a huge number of
bits. btrfs addresses this, but if you have (working) btrfs you don't
need qed.

Another problem is nfs: TCP and UDP checksums are incredibly weak, and
it is easy for an error to slip past them. Ethernet CRCs are better,
but they only protect against errors introduced after the CRC is
computed and before it is verified.

>> Well, if we introduce a minimal format, we need to make sure it isn't
>> too minimal.
>>
>> I'm still not sold on the idea. What we're doing now is pushing the
>> qcow2 complexity to users. We don't have to worry about refcounts
>> now, but users have to worry about whether the machine they're
>> copying the image to supports qed or not.
>>
>> The performance problems with qcow2 are solvable. If we preallocate
>> clusters, the performance characteristics become essentially the same
>> as qed.
>
> By creating two code paths within qcow2.

You're creating two code paths for users.

> It's not just the reference counts, it's the lack of guaranteed
> alignment, compression, and some of the other poor decisions in the
> format.
>
> If you have two code paths in qcow2, you have non-deterministic
> performance because users that do reasonable things with their images
> will end up getting catastrophically bad performance.

We can address that in the tools: "By enabling compression, you may
reduce performance for multithreaded workloads. Abort/Retry/Ignore?"

> A new format doesn't introduce much additional complexity. We provide
> an image conversion tool, and we can almost certainly provide an
> in-place conversion tool that makes the process very fast.

It requires users to make a decision. By the time qed is ready for
mass deployment, 1-2 years will have passed. How many qcow2 images
will be in the wild by then? How much scheduled downtime will be
needed? How much user confusion will be caused?

Virtualization is about compatibility. In-guest compatibility comes
first, but keeping the external environment stable is also important.
We really need to exhaust the possibilities with qcow2 before giving
up on it.

--
I have a truly marvellous patch that fixes the bug which this signature
is too narrow to contain.