Date: Tue, 26 Mar 2019 10:48:38 +1100
From: Dave Chinner
To: Amir Goldstein
Cc: Matthew Wilcox, "Darrick J. Wong", linux-xfs, Christoph Hellwig,
 linux-fsdevel
Subject: Re: [QUESTION] Long read latencies on mixed rw buffered IO
Message-ID: <20190325234838.GC23020@dastard>
References: <20190325001044.GA23020@dastard>
 <20190325154731.GT1183@magnolia>
 <20190325164129.GH10344@bombadil.infradead.org>
 <20190325182239.GI10344@bombadil.infradead.org>
 <20190325194021.GJ10344@bombadil.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.5.21 (2010-09-15)
List-ID: linux-fsdevel@vger.kernel.org

On Mon, Mar 25, 2019 at 09:57:46PM +0200, Amir Goldstein wrote:
> On Mon, Mar 25, 2019 at 9:40 PM Matthew Wilcox wrote:
> >
> > On Mon, Mar 25, 2019 at 09:18:51PM +0200, Amir Goldstein wrote:
> > > On Mon, Mar 25, 2019 at 8:22 PM Matthew Wilcox wrote:
> > > > On Mon, Mar 25, 2019 at 07:30:39PM +0200, Amir Goldstein wrote:
> > > > > On Mon, Mar 25, 2019 at 6:41 PM Matthew Wilcox wrote:
> > > > > > I think it is a bug that we only wake readers at the front of
> > > > > > the queue; I think we would get better performance if we wake
> > > > > > all readers.
> > > > > > ie here:
> > >
> > > So I have no access to the test machine of former tests right now,
> > > but when running the same filebench randomrw workload
> > > (8 writers, 8 readers) on a VM with 2 CPUs and an SSD drive,
> > > results are not looking good for this patch:
> > >
> > > --- v5.1-rc1 / xfs ---
> > > rand-write1   852404ops  14202ops/s  110.9mb/s   0.6ms/op
> > >               [0.01ms - 553.45ms]
> > > rand-read1     26117ops    435ops/s    3.4mb/s  18.4ms/op
> > >               [0.04ms - 632.29ms]
> > > 61.088: IO Summary: 878521 ops 14636.774 ops/s 435/14202 rd/wr
> > > 114.3mb/s 1.1ms/op
> > >
> --- v5.1-rc1 / xfs + patch v2 below ---
> rand-write1   852487ops  14175ops/s  110.7mb/s   0.6ms/op
>               [0.01ms - 755.24ms]
> rand-read1     23194ops    386ops/s    3.0mb/s  20.7ms/op
>               [0.03ms - 755.25ms]
> 61.187: IO Summary: 875681 ops 14560.980 ops/s 386/14175 rd/wr
> 113.8mb/s 1.1ms/op
>
> Not as bad as v1. Only a little bit worse than master...
> The whole deal is the read/write balance, and on SSD I imagine
> the balance really changes. That's why I was skeptical about a
> one-size-fits-all read/write balance.

You're not testing your SSD. You're testing writes into cache vs reads
from disk. There is a massive latency difference between the two
operations, so unless you use O_DSYNC for the writes you are going to
see this cached-vs-uncached performance imbalance. i.e. unless the
rwsem is truly fair, there is always going to be more writer access to
the lock, because writers spend less time holding it and so can put
much more pressure on it.

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com