From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dan Williams
Date: Tue, 2 Jul 2019 17:42:28 -0700
Subject: Re: [PATCH] filesystem-dax: Disable PMD support
To: Boaz Harrosh
Cc: Matthew Wilcox, Seema Pandit, linux-nvdimm,
 Linux Kernel Mailing List, stable, Robert Barror,
 linux-fsdevel, Jan Kara
References: <20190627195948.GB4286@bombadil.infradead.org>
 <20190629160336.GB1180@bombadil.infradead.org>
 <20190630152324.GA15900@bombadil.infradead.org>
 <20190702033410.GB1729@bombadil.infradead.org>
X-Mailing-List: linux-fsdevel@vger.kernel.org

On Tue, Jul 2, 2019 at 5:23 PM Boaz Harrosh wrote:
>
> On 02/07/2019 18:37, Dan Williams wrote:
> <>
> >
> > I'd be inclined to do the brute force fix of not trying to get fancy
> > with separate PTE/PMD waitqueues and then follow on with a more clever
> > performance enhancement later. Thoughts about that?
> >
>
> Sir Dan
>
> I do not understand how separate waitqueues are any performance enhancement?
> The whole point of the waitqueues is that there are enough of them and the
> hash function does a good randomization spread to effectively grab a single
> locker per waitqueue, unless the system is very contended and waitqueues
> are shared.

Right, and the fix in question limits the input to the hash calculation
by masking the input to always be 2MB aligned.

> Which is good because it means you effectively need back pressure to the app.
> (Because pmem IO is mostly CPU bound with no long term sleeps I do not think
> you will ever get to that situation)
>
> So the way I understand it, having twice as many waitqueues serving two
> types will be better performance overall than segregating the types, each
> with half the number of queues.

Yes, but the trick is how to manage cases where someone waiting on one
type needs to be woken up by an event on the other. So all I'm saying is
let's live with more hash collisions until we can figure out a race-free
way to better scale waitqueue usage.

> (Regardless of the above problem of where the segregation is not race clean)
>
> Thanks
> Boaz
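For the archive, here is a minimal userspace sketch of the masking idea
under discussion. It is not the actual fs/dax.c code: the constants, the
toy hash, and the names `wq_hash`/`wq_bucket` are illustrative stand-ins
(the kernel hashes the mapping and index with `hash_long()` into its
waitqueue table). The point it demonstrates is that masking the page index
down to 2MB alignment before hashing makes a PMD entry and every PTE entry
it covers land on the same waitqueue, at the cost of more collisions:

```c
#include <stdint.h>

/* Illustrative table size; the kernel's table bits differ. */
#define WAIT_TABLE_BITS 8
#define NUM_WAITQUEUES  (1U << WAIT_TABLE_BITS)

/* 2MB PMD covers 512 4KB pages, so clear the low 9 index bits. */
#define PMD_NR_PAGES    ((2UL << 20) / 4096)
#define PMD_INDEX_MASK  (~(PMD_NR_PAGES - 1))

/* Toy hash standing in for the kernel's hash_long() over mapping+index. */
static unsigned int wq_hash(uintptr_t mapping, unsigned long index)
{
    unsigned long key = mapping ^ (index * 0x9e3779b97f4a7c15UL);
    return (unsigned int)(key >> (64 - WAIT_TABLE_BITS));
}

/*
 * The "brute force" fix: mask the index to 2MB alignment before hashing,
 * so PTE waiters and the PMD waiter for the same region always share a
 * waitqueue and can wake each other.
 */
static unsigned int wq_bucket(uintptr_t mapping, unsigned long index)
{
    return wq_hash(mapping, index & PMD_INDEX_MASK);
}
```

With this, any two indices inside the same 2MB-aligned region hash to the
same bucket regardless of whether the waiter is on a PTE or a PMD entry,
which is exactly the extra-collision trade-off described above.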