From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [v3 0/9] parallelized "struct page" zeroing
From: Pasha Tatashin
To: Michal Hocko
Cc: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
	linux-s390@vger.kernel.org, borntraeger@de.ibm.com,
	heiko.carstens@de.ibm.com, davem@davemloft.net
Date: Tue, 9 May 2017 14:54:50 -0400
References: <1494003796-748672-1-git-send-email-pasha.tatashin@oracle.com>
 <20170509181234.GA4397@dhcp22.suse.cz>
In-Reply-To: <20170509181234.GA4397@dhcp22.suse.cz>

Hi Michal,

> I like the idea of postponing the zeroing from the allocation to the
> init time. To be honest the improvement looks much larger than I would
> expect (Btw. this should be a part of the changelog rather than an
> outside link).

The improvements look larger than expected because this time was never
measured before: Linux does not have early boot time stamps. I added
them for x86 and SPARC to measure the performance, and I am pushing
those changes through separate patchsets.

> The implementation just looks too large compared to what I would
> expect. E.g. do we really need to add a zero argument to a large part
> of the memblock API? Wouldn't it be easier to simply export
> memblock_virt_alloc_internal (or its tiny wrapper
> memblock_virt_alloc_core) and move the zeroing outside to its 2
> callers? A completely untested scratched version is at the end of the
> email.

I am OK with this change, but I do not really see a difference between
memblock_virt_alloc_raw() and memblock_virt_alloc_core(): in both cases
memblock_virt_alloc_internal() does the allocation, and the only
difference is that in my version memblock_virt_alloc_internal() is told
whether to zero the pages, while in yours the two callers zero them
themselves (a rough sketch of the two shapes is at the end of this
message). I do like moving memblock_dbg() inside
memblock_virt_alloc_internal().

> Also it seems that this is not 100% correct either as it only cares
> about VMEMMAP while DEFERRED_STRUCT_PAGE_INIT might be enabled also for
> SPARSEMEM. This would suggest that we would zero out pages twice,
> right?

Thank you, I will check this combination before sending out the next
patch.

> A similar concern would go to the memory hotplug patch which will
> fall back to the slab/page allocator IIRC. On the other hand
> __init_single_page is shared with the hotplug code so again we would
> initialize 2 times.

Correct: when memory is hotplugged, to gain the benefit of this fix,
and also not to regress by double-zeroing "struct page", we should not
zero it out. However, I do not really have the means to test it.

> So I suspect more changes are needed. I will have a closer look tomorrow.

Thank you for reviewing this work. I will wait for your comments before
sending out updated patches.

Pasha
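
P.S. For reference, here is the minimal, untested sketch of the two
shapes I mean above. It is a standalone userspace model, not kernel
code: alloc_range() and the virt_alloc_*() names are mine for
illustration only and do not come from either patch. The point is that
the memset() is driven by the same decision either way; only its
location differs:

	#include <stdbool.h>
	#include <stdlib.h>
	#include <string.h>

	/* Hypothetical stand-in for the real memblock range finder. */
	static void *alloc_range(size_t size)
	{
		return malloc(size);
	}

	/*
	 * Variant 1 (this series): a single internal helper, and every
	 * caller passes a flag saying whether the memory must be zeroed.
	 */
	static void *virt_alloc_internal(size_t size, bool zero)
	{
		void *ptr = alloc_range(size);

		if (ptr && zero)
			memset(ptr, 0, size);
		return ptr;
	}

	/*
	 * Variant 2 (your suggestion): a raw core helper that never
	 * zeroes...
	 */
	static void *virt_alloc_core(size_t size)
	{
		return alloc_range(size);
	}

	/*
	 * ...plus a zeroing wrapper, so the memset moves out to the
	 * two callers that want zeroed memory.
	 */
	static void *virt_alloc(size_t size)
	{
		void *ptr = virt_alloc_core(size);

		if (ptr)
			memset(ptr, 0, size);
		return ptr;
	}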