From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:52680) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1ZLifQ-000518-3O for qemu-devel@nongnu.org; Sat, 01 Aug 2015 22:06:24 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1ZLifM-0001he-U9 for qemu-devel@nongnu.org; Sat, 01 Aug 2015 22:06:24 -0400
Received: from mail-ig0-x236.google.com ([2607:f8b0:4001:c05::236]:36841) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1ZLifM-0001gy-PG for qemu-devel@nongnu.org; Sat, 01 Aug 2015 22:06:20 -0400
Received: by igbij6 with SMTP id ij6so36744810igb.1 for ; Sat, 01 Aug 2015 19:06:18 -0700 (PDT)
Date: Sun, 2 Aug 2015 10:06:11 +0800
From: Liu Yuan
Message-ID: <20150802020611.GA11733@ubuntu-trusty>
References: <1438142555-27011-1-git-send-email-namei.unix@gmail.com> <877fpj4kqw.wl%mitake@mitake-jiseki-PC> <20150729093135.GB22681@ubuntu-trusty> <20150730132744.GA11022@localhost.localdomain>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
Subject: Re: [Qemu-devel] [sheepdog] [PATCH] sheepdog: fix overlapping metadata update
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: Vasiliy Tolstov
Cc: Kevin Wolf , Teruaki Ishizaki , Hitoshi Mitake , Hitoshi Mitake , Jeff Cody , qemu-devel@nongnu.org, sheepdog-ng@googlegroups.com, morita.kazutaka@gmail.com, Stefan Hajnoczi , sheepdog@lists.wpkg.org

On Fri, Jul 31, 2015 at 03:08:09PM +0300, Vasiliy Tolstov wrote:
> 2015-07-31 14:55 GMT+03:00 Vasiliy Tolstov :
> > Liu's patch also works for me. But also like in Hitoshi patch breaks
> > when using discards in qemu =(.
>
> Please wait to performance comparison. As i see Liu's patch may be
> more slow then Hitoshi.
> Thanks for your time!

Well, as far as I know, my patch would be slightly better performance-wise because it preserves the parallelism of requests.
Given the scatter-gather nature of IO requests, we can assume the following IO pattern as an illustration:

req1 is split into 2 sheep reqs: create(2), create(10)
req2 is split into 2 sheep reqs: create(5), create(100)

So there are finally 4 sheep requests. With my patch they will be run in parallel by the sheep cluster, and only 4 unrefs of objects will be executed internally:

update_inode(2), update_inode(10), update_inode(5), update_inode(100)

With Hitoshi's patch, however, req1 and req2 will be serialized: only when one req is finished will the other be sent to sheep, and 9 + 96 = 105 unrefs of objects will be executed internally.

There is still a chance of data corruption, because update_inode(2,10) and update_inode(5,100) will both update the range [5,10], which is a potential problem if the overlapped range has different values when the requests are queued with stale data.

This is really a years-old bug: we should update the inode bits exactly as we create the objects, not update bits we don't touch at all. This bug wasn't revealed for a long time because most of the time min == max in update_inode(min, max), and before the introduction of generational reference counting for the snapshot reference mechanism, updating an inode bit with 0 wouldn't cause a remove request in sheepdog.

I'm also concerned about a completely new mechanism, since the current request handling mechanism has proven solid as time goes by. It has existed for years. Completely new code might need a long time to stabilize, and we would have to fix possible side effects we don't know about yet.

Thanks,
Yuan