Date: Fri, 13 Mar 2020 19:48:27 +0200
From: Leon Romanovsky
To: Vlastimil Babka
Cc: Jaewon Kim, adobriyan@gmail.com, akpm@linux-foundation.org, labbott@redhat.com, sumit.semwal@linaro.org, minchan@kernel.org, ngupta@vflare.org, sergey.senozhatsky.work@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, jaewon31.kim@gmail.com, Linux API
Subject: Re: [RFC PATCH 0/3] meminfo: introduce extra meminfo
Message-ID: <20200313174827.GA67638@unreal>
References: <20200311034441.23243-1-jaewon31.kim@samsung.com>

On Fri, Mar 13, 2020 at 04:19:36PM +0100, Vlastimil Babka wrote:
> +CC linux-api, please include in future versions as well
>
> On 3/11/20 4:44 AM, Jaewon Kim wrote:
> > /proc/meminfo or show_free_areas does not show full system-wide memory
> > usage. There seems to be a lot of hidden memory, especially on embedded
> > Android systems, because they usually have some HW IP blocks that have
> > no internal memory of their own and use common DRAM instead.
> >
> > On Android systems, most of this hidden memory appears to be vmalloc
> > pages, ION system heap memory, graphics memory, and memory for
> > DRAM-based compressed swap. Some of it may be shown elsewhere, but it
> > would be useful if /proc/meminfo showed all of this extra memory, and
> > show_mem should also print it in an OOM situation.
> >
> > Fortunately, vmalloc pages are already shown thanks to commit
> > 97105f0ab7b8 ("mm: vmalloc: show number of vmalloc pages in
> > /proc/meminfo"). Swap memory using zsmalloc can be seen through vmstat
> > thanks to commit 91537fee0013 ("mm: add NR_ZSMALLOC to vmstat"), but
> > not in /proc/meminfo.
> >
> > The memory usage of a specific driver can vary widely, so showing it
> > through the upstream meminfo.c is not easy. To print the extra memory
> > usage of a driver, introduce the following APIs. Each driver needs to
> > maintain its count as an atomic_long_t.
> >
> >   int register_extra_meminfo(atomic_long_t *val, int shift,
> >                              const char *name);
> >   int unregister_extra_meminfo(atomic_long_t *val);
> >
> > Currently the ION system heap allocator and zsmalloc pages are
> > registered. Additionally tested on a local graphics driver.
> >
> > e.g.) cat /proc/meminfo | tail -3
> > IonSystemHeap:    242620 kB
> > ZsPages:          203860 kB
> > GraphicDriver:    196576 kB
> >
> > e.g.) show_mem on oom
> > <6>[  420.856428] Mem-Info:
> > <6>[  420.856433] IonSystemHeap:32813kB ZsPages:44114kB GraphicDriver:13091kB
> > <6>[  420.856450] active_anon:957205 inactive_anon:159383 isolated_anon:0
>
> I like the idea and the dynamic nature of this: drivers that are not
> present don't add lots of useless zeroes to the output.
> It also simplifies the decision of "what is important enough to need its
> own meminfo entry".
>
> The suggestion of hunting for per-driver /sys files would only work if
> there were a common naming scheme for such files, so that one can
> find(1) them easily.
> It also doesn't work for the oom/failed-alloc warning output.
Of course, there is a need for a stable name for such output; this is why the
driver core, and not individual driver authors, should be responsible for it.

The use case I had in mind is slightly different from inspecting memory after
an OOM. I'm interested in optimizing the memory footprint of our drivers to
allow better scaling in SR-IOV mode, where one device creates many separate
copies of itself. Those copies can easily take gigabytes of RAM due to the
need to optimize for high-performance networking. Sometimes it is the amount
of memory, and not the HW, that actually limits the scale factor.

So I would imagine this feature being used as an aid for driver developers
and not for runtime decisions.

My 2 cents.

Thanks