From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756489AbcECVAn (ORCPT ); Tue, 3 May 2016 17:00:43 -0400
Received: from mga11.intel.com ([192.55.52.93]:63261 "EHLO mga11.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1756112AbcECVAl (ORCPT ); Tue, 3 May 2016 17:00:41 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.24,574,1455004800"; d="scan'208";a="798049880"
Message-ID: <1462309239.21143.6.camel@linux.intel.com>
Subject: [PATCH 0/7] mm: Improve swap path scalability with batched operations
From: Tim Chen
To: Andrew Morton, Vladimir Davydov, Johannes Weiner, Michal Hocko,
	Minchan Kim, Hugh Dickins
Cc: "Kirill A.Shutemov", Andi Kleen, Aaron Lu, Huang Ying,
	linux-mm, linux-kernel@vger.kernel.org
Date: Tue, 03 May 2016 14:00:39 -0700
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.18.5.2 (3.18.5.2-1.fc23)
Mime-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

The page swap out path is not scalable due to the numerous locks
acquired and released along the way, all of which are taken and
dropped on a page by page basis, e.g.:

1. The acquisition of the mapping tree lock in the swap cache when
   adding a page to the swap cache, and then again when deleting a
   page from the swap cache after it has been swapped out.
2. The acquisition of the lock on the swap device to allocate a swap
   slot for a page to be swapped out.

With the advent of high speed block devices that are several orders
of magnitude faster than the old spinning disks, these bottlenecks
become fairly significant, especially on server class machines with
many threads running.

To reduce these locking costs, this patch series attempts to batch
pages for the following operations needed for swap:

1. Allocate swap slots in large batches, so locks on the swap device
   don't need to be acquired as often.
2. Add anonymous pages to the swap cache for the same swap device in
   batches, so the mapping tree lock is acquired less often.
3. Delete pages from the swap cache also in batches.

A toy illustration of this lock-amortization pattern is appended at
the end of this mail.

We experimented with the effect of these patches.  We set up N threads
to access memory in excess of memory capacity, causing swap.  In
experiments using a single pmem based fast block device on a 2 socket
machine, we saw that for 1 thread there is a ~25% increase in swap
throughput, and for 16 threads the swap throughput increases by ~85%,
compared with the vanilla kernel.  Batching helps even for 1 thread
because of contention with kswapd when doing direct memory reclaim.

Feedback and reviews on this patch series are much appreciated.

Thanks.

Tim

Tim Chen (7):
  mm: Cleanup - Reorganize the shrink_page_list code into smaller
    functions
  mm: Group the processing of anonymous pages to be swapped in
    shrink_page_list
  mm: Add new functions to allocate swap slots in batches
  mm: Shrink page list batch allocates swap slots for page swapping
  mm: Batch addition of pages to swap cache
  mm: Cleanup - Reorganize code to group handling of page
  mm: Batch unmapping of pages that are in swap cache

 include/linux/swap.h |  29 ++-
 mm/swap_state.c      | 253 +++++++++++++-----
 mm/swapfile.c        | 215 +++++++++++++--
 mm/vmscan.c          | 725 ++++++++++++++++++++++++++++++++++++++-------------
 4 files changed, 945 insertions(+), 277 deletions(-)

-- 
2.5.5
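
P.S. For illustration only, below is a minimal, self-contained sketch
of the lock-amortization pattern the series relies on.  It is plain C
with a pthread mutex standing in for the swap device lock; the names
(SWAP_BATCH, alloc_slot_one, alloc_slots_batch) are hypothetical and
this is not the kernel code in these patches.

#include <pthread.h>

#define SWAP_BATCH 64

static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long next_slot = 1;

/* Per-page pattern: one lock round trip for every page swapped out. */
unsigned long alloc_slot_one(void)
{
	unsigned long slot;

	pthread_mutex_lock(&slot_lock);
	slot = next_slot++;
	pthread_mutex_unlock(&slot_lock);
	return slot;
}

/* Batched pattern: one lock round trip covers up to n pages. */
int alloc_slots_batch(unsigned long *slots, int n)
{
	int i;

	pthread_mutex_lock(&slot_lock);
	for (i = 0; i < n; i++)
		slots[i] = next_slot++;
	pthread_mutex_unlock(&slot_lock);
	return i;
}

Under contention from many reclaiming threads (plus kswapd), the
batched variant pays the lock/unlock and cache line bouncing cost once
per SWAP_BATCH pages instead of once per page, which is where the
throughput gain comes from.  The same idea applies to the mapping tree
lock for batched swap cache insertion and deletion.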