From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 27 Aug 2018 00:39:06 -0700
From: Christoph Hellwig
To: Dave Chinner
Cc: Waiman Long, "Darrick J. Wong", Ingo Molnar, Peter Zijlstra,
	linux-xfs@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/3] xfs: Prevent multiple wakeups of the same log space waiter
Message-ID: <20180827073906.GA24831@infradead.org>
References: <1535316795-21560-1-git-send-email-longman@redhat.com>
	<1535316795-21560-3-git-send-email-longman@redhat.com>
	<20180827002134.GE2234@dastard>
In-Reply-To: <20180827002134.GE2234@dastard>
List-Id: xfs

On Mon, Aug 27, 2018 at 10:21:34AM +1000, Dave Chinner wrote:
> tl;dr: Once you pass a certain point, ramdisks can be *much* slower
> than SSDs on journal-intensive workloads like AIM7.  Hence it would be
> useful to see if you have the same problems on, say, high
> performance nvme SSDs.

Note that all the ramdisk issues you mentioned below will also apply to
using the pmem driver on nvdimms, which might be a more realistic test
setup.  Even worse, except for cases where the nvdimms actually are
powerfail DRAM of some sort with write-through caching and ADR, the
latency is going to be much higher than the ramdisk as well.
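
For anyone wanting to try the pmem-driver variant without real nvdimms, a
rough sketch of one common setup follows.  The memmap= range, device name,
and mount point below are illustrative assumptions, not details from this
thread; the reserved region behaves like legacy pmem backed by ordinary
DRAM, so it models the driver path but not nvdimm latency:

```shell
# Reserve 4G of RAM starting at the 12G physical offset as emulated
# pmem by adding this to the kernel command line, then reboot:
#     memmap=4G!12G
# The kernel pmem driver then exposes the region as /dev/pmem0.

# Put XFS on the emulated device and mount it; -o dax is optional and
# only takes effect if the region supports DAX access.
mkfs.xfs -f /dev/pmem0
mkdir -p /mnt/pmem-test
mount -o dax /dev/pmem0 /mnt/pmem-test
```

Since the backing store is still DRAM, this only exercises the pmem code
path; it won't show the higher media latency of actual nvdimms.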