From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH] filesystem-dax: Disable PMD support
From: Boaz Harrosh
Date: Wed, 3 Jul 2019 03:22:49 +0300
To: Dan Williams, Matthew Wilcox
Cc: Seema Pandit, linux-nvdimm, Linux Kernel Mailing List, stable,
 Robert Barror, linux-fsdevel, Jan Kara
References: <20190627195948.GB4286@bombadil.infradead.org>
 <20190629160336.GB1180@bombadil.infradead.org>
 <20190630152324.GA15900@bombadil.infradead.org>
 <20190702033410.GB1729@bombadil.infradead.org>

On 02/07/2019 18:37, Dan Williams wrote:
<>
>
> I'd be inclined to do the brute force fix of not trying to get fancy
> with separate PTE/PMD waitqueues and then follow on with a more clever
> performance enhancement later. Thoughts about that?
>

Sir Dan

I do not understand how separate waitqueues are any performance
enhancement. The whole point of the waitqueues is that there are enough
of them, and that the hash function spreads lockers well enough, that
each waiter effectively gets a waitqueue to itself unless the system is
very contended and waitqueues must be shared. Which is good, because at
that point you effectively want back pressure on the app.
(Because pmem IO is mostly CPU bound, with no long-term sleeps, I do not
think you will ever get to that situation.)

So the way I understand it, having twice as many waitqueues serving
both types will perform better overall than segregating the types, each
with half the number of queues. (Regardless of the problem above, where
the segregation is not race clean.)

Thanks
Boaz
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm