Subject: Re: [PATCH] dax, pmem: add support for msync
From: Dave Hansen
To: Boaz Harrosh, Dave Chinner, Ross Zwisler, Christoph Hellwig,
	linux-kernel@vger.kernel.org, Alexander Viro, Andrew Morton,
	"H. Peter Anvin", Hugh Dickins, Ingo Molnar, "Kirill A. Shutemov",
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-nvdimm@ml01.01.org, Matthew Wilcox, Peter Zijlstra,
	Thomas Gleixner, x86@kernel.org
Date: Wed, 2 Sep 2015 07:23:15 -0700
Message-ID: <55E70653.4090302@linux.intel.com>
In-Reply-To: <55E6CF15.4070105@plexistor.com>
References: <1441047584-14664-1-git-send-email-ross.zwisler@linux.intel.com>
	<20150831233803.GO3902@dastard> <20150901070608.GA5482@lst.de>
	<20150901222120.GQ3902@dastard> <20150902031945.GA8916@linux.intel.com>
	<20150902051711.GS3902@dastard> <55E6CF15.4070105@plexistor.com>
List-ID: linux-kernel@vger.kernel.org

On 09/02/2015 03:27 AM, Boaz Harrosh wrote:
>> > Yet you're ignoring the fact that flushing the entire range of the
>> > relevant VMAs may not be very efficient. It may be a very
>> > large mapping with only a few pages that need flushing from the
>> > cache, but you still iterate the mappings flushing GB ranges from
>> > the cache at a time.
>
> So actually you are wrong about this.
> We have a working system and as part
> of our testing rig we do performance measurements, constantly. Our random
> mmap 4k writes test performs very well and is on par with the random-direct-write
> implementation even though on every unmap, we do a VMA->start/end cl_flushing.
>
> The cl_flush operation is a no-op if the cacheline is not dirty and is a
> memory bus storm with all the CLs that are dirty. So the only cost
> is the iteration of vma->start-to-vma->end i+=64

I'd be curious what the cost is in practice. Do you have any actual
numbers for the cost of doing it this way?

Even if the instruction is a "noop", I'd really expect the overhead to
add up for a tens-of-gigabytes mapping, no matter how much the CPU
optimizes it.
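For readers following along: the "iteration of vma->start-to-vma->end i+=64"
being debated is the pattern below. This is only an illustrative user-space
sketch for x86 (using the CLFLUSH compiler intrinsic), not the actual
Plexistor or kernel code; the function name is made up.

```c
#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>  /* _mm_clflush */

#define CACHELINE 64

/*
 * Hypothetical sketch: walk a mapping one cacheline at a time and
 * CLFLUSH each line.  CLFLUSH is cheap for clean lines, but it is
 * still one instruction per 64 bytes of the range -- this per-line
 * iteration cost over a multi-GB VMA is exactly what is being
 * questioned above.
 */
static void flush_range(void *start, size_t len)
{
    /* Round down to the start of the first cacheline in the range. */
    char *p = (char *)((uintptr_t)start & ~(uintptr_t)(CACHELINE - 1));
    char *end = (char *)start + len;

    for (; p < end; p += CACHELINE)
        _mm_clflush(p);

    _mm_sfence();  /* order the flushes before later stores */
}
```

At 64 bytes per iteration, a 10 GB mapping is roughly 168 million loop
iterations, which is where the "overhead adds up" concern comes from even
when most lines are clean.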