Subject: Re: [PATCH -v3 00/10] THP swap: Delay splitting THP during swapping out
From: Tim Chen
To: Minchan Kim, "Chen, Tim C"
Cc: "Huang, Ying", Andrew Morton, "Hansen, Dave", "Kleen, Andi", "Lu, Aaron", linux-mm@kvack.org, linux-kernel@vger.kernel.org, Hugh Dickins, Shaohua Li, Rik van Riel, Andrea Arcangeli, "Kirill A . Shutemov", Vladimir Davydov, Johannes Weiner, Michal Hocko
Date: Mon, 19 Sep 2016 08:59:22 -0700
Message-ID: <1474300762.3916.103.camel@linux.intel.com>
In-Reply-To: <20160919071153.GB4083@bbox>

On Mon, 2016-09-19 at 16:11 +0900, Minchan Kim wrote:
> Hi Tim,
>
> On Tue, Sep 13, 2016 at 11:52:27PM +0000, Chen, Tim C wrote:
> > > > - Avoid CPU time for splitting, collapsing THP across swap out/in.
> > >
> > > Yes, if you want, please give us how bad it is.
> >
> > It could be pretty bad.
> > In an experiment with THP turned on where we entered swap, 50% of
> > the CPU time was spent in the page compaction path.
>
> It's page compaction overhead, especially pageblock_pfn_to_page.
> Why is it related to the overhead of THP splitting for swapout?
> I don't understand.

Today you have to split a large page into 4K pages to swap it out.
Then, after you swap in all the 4K pages, you have to re-compact them
back into a large page.

If the large page can instead be swapped out as one contiguous unit
and swapped back in as a single large page, both the splitting and the
re-compaction back into a large page are avoided.

Tim
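To put rough numbers on the bookkeeping involved, here is a
back-of-the-envelope sketch (plain Python, not kernel code; the 2MB
THP and 4K base-page sizes are the usual x86-64 values, and the
operation counts are illustrative assumptions, not measurements from
the experiment above):

```python
# Rough cost model, not kernel code: per-page operation counts when a
# 2MB THP is split for swapout versus moved as one unit.
THP_SIZE = 2 * 1024 * 1024   # 2MB transparent huge page (x86-64 default)
BASE_PAGE = 4 * 1024         # 4K base page

def split_swap_ops(thp_size=THP_SIZE, base=BASE_PAGE):
    """Split path: one swap-out and one swap-in per 4K subpage, plus a
    compaction pass afterwards to rebuild the huge page."""
    subpages = thp_size // base          # 512 subpages per 2MB THP
    return {"swap_out": subpages, "swap_in": subpages,
            "compaction_passes": 1}

def huge_swap_ops():
    """Delayed-split path: the THP moves as a single unit, so there are
    no per-subpage operations and no re-compaction."""
    return {"swap_out": 1, "swap_in": 1, "compaction_passes": 0}

print(split_swap_ops())   # 512 swap-outs, 512 swap-ins, 1 compaction pass
print(huge_swap_ops())    # 1 swap-out, 1 swap-in, no compaction
```

So each 2MB THP that round-trips through swap on the split path costs
on the order of 512 times the per-page work, plus the compaction scan
(pageblock_pfn_to_page and friends) needed to reassemble it.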