From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Date: Mon, 29 Aug 2016 19:17:58 +0530
To: Aaron Lu, Anshuman Khandual, Linux Memory Management List
Cc: "Kirill A. Shutemov", Dave Hansen, Tim Chen, Huang Ying, Andrew Morton, Vlastimil Babka, Jerome Marchand, Andrea Arcangeli, Mel Gorman, Ebru Akagunduz, linux-kernel@vger.kernel.org, "Aneesh Kumar K.V"
Subject: Re: [PATCH] thp: reduce usage of huge zero page's atomic counter
Message-Id: <57C43D0E.8060802@linux.vnet.ibm.com>
In-Reply-To: <3b8deaf7-2e7b-ff22-be72-31b1a7ebb3eb@intel.com>
References: <57C3F72C.6030405@linux.vnet.ibm.com> <3b8deaf7-2e7b-ff22-be72-31b1a7ebb3eb@intel.com>

On 08/29/2016 02:23 PM, Aaron Lu wrote:
> On 08/29/2016 04:49 PM, Anshuman Khandual wrote:
>> On 08/29/2016 12:01 PM, Aaron Lu wrote:
>>> The global zero page is used to satisfy an anonymous read fault. If
>>> THP(Transparent HugePage) is enabled then the global huge zero page is
>>> used. The global huge zero page uses an atomic counter for reference
>>> counting and is allocated/freed dynamically according to its counter
>>> value.
>>>
>>> CPU time spent on that counter will greatly increase if there are
>>> a lot of processes doing anonymous read faults. This patch proposes a
>>> way to reduce the access to the global counter so that the CPU load
>>> can be reduced accordingly.
>>>
>>> To do this, a new flag of the mm_struct is introduced:
>>> MMF_USED_HUGE_ZERO_PAGE. With this flag, the process only need to touch
>>> the global counter in two cases:
>>> 1 The first time it uses the global huge zero page;
>>> 2 The time when mm_user of its mm_struct reaches zero.
>>>
>>> Note that right now, the huge zero page is eligible to be freed as soon
>>> as its last use goes away. With this patch, the page will not be
>>> eligible to be freed until the exit of the last process from which it
>>> was ever used.
>>>
>>> And with the use of mm_user, the kthread is not eligible to use huge
>>> zero page either. Since no kthread is using huge zero page today, there
>>> is no difference after applying this patch. But if that is not desired,
>>> I can change it to when mm_count reaches zero.
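
Just to make sure I am reading the scheme correctly, is it roughly like
the sketch below? The helper and flag names here are my own guesses from
this changelog, not taken from your actual patch.

/*
 * Rough sketch only, based on the changelog above -- not the actual
 * patch.  Each mm touches the global atomic counter at most twice:
 * on its first use of the huge zero page, and once at mm teardown
 * when mm_users has dropped to zero.
 */
static struct page *mm_get_huge_zero_page(struct mm_struct *mm)
{
	/* Fast path: this mm already holds a reference. */
	if (test_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
		return READ_ONCE(huge_zero_page);

	/* Slow path: take one reference on the global counter. */
	if (!get_huge_zero_page())
		return NULL;

	/*
	 * Record that fact in the mm; if another thread of this mm raced
	 * with us and set the bit first, drop the extra reference we
	 * just took.
	 */
	if (test_and_set_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
		put_huge_zero_page();

	return READ_ONCE(huge_zero_page);
}

/* Called once from the mm teardown path (mm_users == 0). */
static void mm_put_huge_zero_page(struct mm_struct *mm)
{
	if (test_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
		put_huge_zero_page();
}

If that is roughly what the patch does, the fault path pays only a
test_bit() in the common case, which should indeed take the atomic
counter out of the picture for this workload.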
>>>
>>> Case used for test on Haswell EP:
>>> usemem -n 72 --readonly -j 0x200000 100G
>>
>> Is this benchmark publicly available ? Does not seem to be this one
>> https://github.com/gnubert/usemem.git, Does it ?
>
> Sorry, forgot to attach its link.
> It's this one:
> https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git
>
> And the above mentioned usemem is:
> https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/tree/usemem.c

Hey Aaron,

Thanks for pointing that out. I ran a similar test on a POWER8 box,
using a 16MB step (the huge page size there is 16MB) instead of 2MB,
but the perf profile looked different. The perf command line, on a
32 CPU system, was:

perf record ./usemem -n 256 --readonly -j 0x1000000 100G

However, the relative weight of the function mentioned above came out
much lower than the ~54.03% you reported from your experiment:

    0.07%  usemem  [kernel.vmlinux]  [k] get_huge_zero_page

That seems way off the mark. Can you please confirm your exact perf
record command line and how many CPUs the system has?

- Anshuman