From: Liu Yuan
Date: Thu, 21 Aug 2014 17:44:06 +0800
Subject: Re: [Qemu-devel] [PATCH v5 2/2] block/quorum: add simple read pattern support
To: Benoît Canet
Cc: Kevin Wolf, qemu-devel@nongnu.org, Stefan Hajnoczi

On Mon, Aug 18, 2014 at 01:59:28PM +0800, Liu Yuan wrote:
> On Fri, Aug 15, 2014 at 03:59:04PM +0200, Benoît Canet wrote:
> > The Friday 15 Aug 2014 à 13:05:17 (+0800), Liu Yuan wrote:
> > > This patch adds a single read pattern to the quorum driver; quorum vote
> > > remains the default pattern.
> > >
> > > For now we do a quorum vote on all reads. That is designed for unreliable
> > > underlying storage, such as non-redundant NFS, to ensure data integrity at
> > > the cost of read performance.
> > >
> > > For a use case such as the following:
> > >
> > >               VM
> > >         --------------
> > >         |            |
> > >         v            v
> > >         A            B
> > >
> > > both A and B have hardware RAID storage that guarantees data integrity on
> > > its own, so it would help performance to do a single read instead of
> > > reading from all the nodes. Further, if we run the VM on either of the
> > > storage nodes, we can issue a local read request for better performance.
> > >
> > > This patch generalizes the above 2-node case to N nodes. That is,
> > >
> > > vm -> write to all N nodes, read from just one of them. If the single
> > > read fails, we try the next node in the FIFO order specified by the
> > > startup command.
> > >
> > > The 2-node case is very similar to DRBD [1], though it lacks auto-sync
> > > functionality on a single device/node failure for now. But compared with
> > > DRBD we still have some advantages:
> > >
> > > - Suppose we have 20 VMs running on one node (say A) of a 2-node
> > >   DRBD-backed storage. If A crashes, we need to restart all the VMs on
> > >   node B. In practice we often can't, because B might not have enough
> > >   resources to set up 20 VMs at once. If we instead run our 20 VMs with
> > >   the quorum driver and scatter the replicated images over the data
> > >   center, we can very likely restart the 20 VMs without any resource
> > >   problem.
> > >
> > > After all, I think we can build a more powerful replicated-image
> > > functionality on top of quorum and block jobs (block mirror) to meet
> > > various High Availability needs.
> > >
> > > E.g., to enable the single read pattern on 2 children:
> > >
> > > -drive driver=quorum,children.0.file.filename=0.qcow2,\
> > > children.1.file.filename=1.qcow2,read-pattern=fifo,vote-threshold=1
> > >
> > > [1] http://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device
> > >
> > > Cc: Benoit Canet
> > > Cc: Eric Blake
> > > Cc: Kevin Wolf
> > > Cc: Stefan Hajnoczi
> > > Signed-off-by: Liu Yuan
> > > ---
> > >  block/quorum.c | 176 ++++++++++++++++++++++++++++++++++++++++++---------------
> > >  1 file changed, 129 insertions(+), 47 deletions(-)
> > >
> > > diff --git a/block/quorum.c b/block/quorum.c
> > > index d5ee9c0..1235d7c 100644
> > > --- a/block/quorum.c
> > > +++ b/block/quorum.c
> > > @@ -24,6 +24,7 @@
> > >  #define QUORUM_OPT_VOTE_THRESHOLD "vote-threshold"
> > >  #define QUORUM_OPT_BLKVERIFY "blkverify"
> > >  #define QUORUM_OPT_REWRITE "rewrite-corrupted"
> > > +#define QUORUM_OPT_READ_PATTERN "read-pattern"
> > >
> > >  /* This union holds a vote hash value */
> > >  typedef union QuorumVoteValue {
> > > @@ -74,6 +75,8 @@ typedef struct BDRVQuorumState {
> > >      bool rewrite_corrupted;/* true if the driver must rewrite-on-read corrupted
> > >                              * block if Quorum is reached.
> > >                              */
> > > +
> > > +    QuorumReadPattern read_pattern;
> > >  } BDRVQuorumState;
> > >
> > >  typedef struct QuorumAIOCB QuorumAIOCB;
> > > @@ -117,6 +120,7 @@ struct QuorumAIOCB {
> > >
> > >      bool is_read;
> > >      int vote_ret;
> > > +    int child_iter;        /* which child to read in fifo pattern */
> >
> > I don't understand what "fifo pattern" could mean for a bunch of disks,
> > as they do not form a queue.
>
> The naming isn't 100% accurate, but as in Eric's comment (see below), both
> FIFO and round-robin can be useful as two different patterns.
>
> > Maybe round-robin is more suitable, but your code does not implement
> > round-robin since it will always start from the first disk.
> >
> > Your code is scanning the disk set; it's a scan pattern.
> >
> > That said, is it a problem that the first disk will be accessed more
> > often than the others?
>
> As my commit log documents, the purpose of the read pattern I added is to
> speed up reads compared with quorum's original read pattern, and the use
> case is clear (I hope so); you can take DRBD as a good example of why we
> need it. Of course we are far away from DRBD, which needs recovery logic
> after all kinds of failures. My patch set can be taken as a preliminary
> step towards implementing a DRBD-like service driver.
>
> Eric previously commented on two read patterns that might be useful:
>
> "Should we offer multiple modes in addition to 'quorum'? For example, I
> could see a difference between 'fifo' (favor read from the first quorum
> member always, unless it fails, good when the first member is local and
> other member is remote) and 'round-robin' (evenly distribute reads; each
> read goes to the next available quorum member, good when all members are
> equally distant)."
>
> > You will have to take care to insert the disks in a different order on
> > each QEMU to spread the load.
>
> This is another use case that my patch set didn't try to solve.
>
> > Shouldn't the code try to spread the load by circling over the disks like
> > a real round-robin pattern?
>
> Probably not in this patch set, but we can add yet another round-robin
> pattern if anyone is interested (toy sketches of both patterns are at the
> end of this mail).

Benoit, ping...
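
P.S. To make the fifo semantics concrete, here is a toy sketch of the
fallback logic. All names here are invented for illustration; this is not
the code from the patch:

/* fifo_read.c - toy model of the "fifo" read pattern: always start from
 * the first child and fall back to the next one only when a read fails. */
#include <stdio.h>

#define NUM_CHILDREN 2

/* Stub for a per-child read; child 0 is made to fail so the fallback
 * path is exercised.  Returns 0 on success, -1 on error. */
static int child_read(int child, char *buf, size_t len)
{
    if (child == 0) {
        return -1;                    /* simulated read failure */
    }
    snprintf(buf, len, "data from child %d", child);
    return 0;
}

/* FIFO pattern: scan the children in startup order; first success wins. */
static int fifo_read(char *buf, size_t len)
{
    for (int i = 0; i < NUM_CHILDREN; i++) {
        if (child_read(i, buf, len) == 0) {
            return i;                 /* index of the child that served us */
        }
    }
    return -1;                        /* all children failed */
}

int main(void)
{
    char buf[64];
    int served = fifo_read(buf, sizeof(buf));
    printf("served by child %d: %s\n", served, buf);
    return 0;
}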
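
And for comparison, a sketch of what a rotating round-robin selection could
look like, should anyone want to add that pattern later (again, invented
names, not part of this patch set):

/* round_robin.c - unlike fifo, the starting child advances on every read,
 * so the load is spread evenly across all children. */
#include <stdio.h>

#define NUM_CHILDREN 3

static int next_child;                /* rotates across reads */

static int round_robin_pick(void)
{
    int child = next_child;

    /* advance the starting point so the next read goes elsewhere */
    next_child = (next_child + 1) % NUM_CHILDREN;
    return child;
}

int main(void)
{
    for (int i = 0; i < 6; i++) {
        /* a real implementation would also retry (child + 1) % NUM_CHILDREN
         * on read failure, like the fifo fallback above */
        printf("read %d goes to child %d\n", i, round_robin_pick());
    }
    return 0;
}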