Date: Wed, 18 Mar 2020 12:58:15 +0200
From: Leon Romanovsky
To: Jaewon Kim
Cc: Jaewon Kim, Vlastimil Babka, adobriyan@gmail.com, Andrew Morton,
    Laura Abbott, Sumit Semwal, minchan@kernel.org, ngupta@vflare.org,
    sergey.senozhatsky.work@gmail.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Linux API
Subject: Re: [RFC PATCH 0/3] meminfo: introduce extra meminfo
Message-ID: <20200318105815.GV3351@unreal>
References: <20200311034441.23243-1-jaewon31.kim@samsung.com>
 <20200313174827.GA67638@unreal>
 <5E6EFB6C.7050105@samsung.com>
 <20200316083154.GF8510@unreal>
 <20200317143715.GI3351@unreal>
 <5E71E2CB.4030704@samsung.com>
In-Reply-To: <5E71E2CB.4030704@samsung.com>
On Wed, Mar 18, 2020 at 05:58:51PM +0900, Jaewon Kim wrote:
>
>
> On 2020-03-17 23:37, Leon Romanovsky wrote:
> > On Tue, Mar 17, 2020 at 12:04:46PM +0900, Jaewon Kim wrote:
> >> On Mon, Mar 16, 2020 at 5:32 PM, Leon Romanovsky wrote:
> >>> On Mon, Mar 16, 2020 at 01:07:08PM +0900, Jaewon Kim wrote:
> >>>>
> >>>> On 2020-03-14 02:48, Leon Romanovsky wrote:
> >>>>> On Fri, Mar 13, 2020 at 04:19:36PM +0100, Vlastimil Babka wrote:
> >>>>>> +CC linux-api, please include in future versions as well
> >>>>>>
> >>>>>> On 3/11/20 4:44 AM, Jaewon Kim wrote:
> >>>>>>> /proc/meminfo or show_free_areas does not show full system-wide memory
> >>>>>>> usage status. There seems to be a lot of hidden memory, especially on
> >>>>>>> embedded Android systems, because they usually have HW IP blocks which
> >>>>>>> have no internal memory and use common DRAM memory instead.
> >>>>>>>
> >>>>>>> On Android systems, most of this hidden memory appears to be vmalloc
> >>>>>>> pages, ion system heap memory, graphics memory, and memory for DRAM-based
> >>>>>>> compressed swap storage. It may be shown in other nodes, but it seems
> >>>>>>> useful if /proc/meminfo shows all this extra memory information, and
> >>>>>>> show_mem also needs to print it in an OOM situation.
> >>>>>>>
> >>>>>>> Fortunately, vmalloc pages are already shown thanks to commit 97105f0ab7b8
> >>>>>>> ("mm: vmalloc: show number of vmalloc pages in /proc/meminfo"). Swap
> >>>>>>> memory using zsmalloc can be seen through vmstat thanks to commit 91537fee0013
> >>>>>>> ("mm: add NR_ZSMALLOC to vmstat"), but not in /proc/meminfo.
> >>>>>>>
> >>>>>>> Memory usage of a specific driver can vary, so showing the usage
> >>>>>>> through upstream meminfo.c is not easy. To print the extra memory usage
> >>>>>>> of a driver, introduce the following APIs. Each driver needs to keep its
> >>>>>>> count in an atomic_long_t.
> >>>>>>>
> >>>>>>> int register_extra_meminfo(atomic_long_t *val, int shift,
> >>>>>>>                            const char *name);
> >>>>>>> int unregister_extra_meminfo(atomic_long_t *val);
> >>>>>>>
> >>>>>>> Currently the ION system heap allocator and zsmalloc pages are
> >>>>>>> registered. Additionally tested on a local graphics driver.
> >>>>>>>
> >>>>>>> e.g.) cat /proc/meminfo | tail -3
> >>>>>>> IonSystemHeap:    242620 kB
> >>>>>>> ZsPages:          203860 kB
> >>>>>>> GraphicDriver:    196576 kB
> >>>>>>>
> >>>>>>> e.g.) show_mem on oom
> >>>>>>> <6>[  420.856428]  Mem-Info:
> >>>>>>> <6>[  420.856433]  IonSystemHeap:32813kB ZsPages:44114kB GraphicDriver::13091kB
> >>>>>>> <6>[  420.856450]  active_anon:957205 inactive_anon:159383 isolated_anon:0
> >>>>>> I like the idea and the dynamic nature of this, so that drivers not present
> >>>>>> wouldn't add lots of useless zeroes to the output.
> >>>>>> It also makes the decision of "what is important enough to need its own
> >>>>>> meminfo entry" simpler.
> >>>>>>
> >>>>>> The suggestion of hunting for per-driver /sys files would only work if
> >>>>>> there were a common name for such files so one could find(1) them easily.
> >>>>>> It also doesn't work for the oom/failed alloc warning output.
> >>>>> Of course there is a need to have a stable name for such output; this
> >>>>> is why the driver core should be responsible for that and not driver
> >>>>> authors.
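For illustration, a driver adopting the register_extra_meminfo()/unregister_extra_meminfo()
interface proposed in the quoted cover letter might look roughly like the sketch
below. Only the two prototypes come from the RFC; the driver name, the page
counter, the allocation helper, and the assumption that 'shift' converts the
counter into kB are hypothetical.

	/*
	 * Rough sketch only. register_extra_meminfo()/unregister_extra_meminfo()
	 * are the interfaces proposed by this RFC; everything else here
	 * (driver name, counter, shift meaning) is a hypothetical illustration.
	 */
	#include <linux/atomic.h>
	#include <linux/gfp.h>
	#include <linux/mm.h>
	#include <linux/module.h>

	/* Pages this (hypothetical) driver obtained from alloc_pages(). */
	static atomic_long_t foo_drv_pages = ATOMIC_LONG_INIT(0);

	static struct page *foo_drv_get_pages(unsigned int order)
	{
		struct page *page = alloc_pages(GFP_KERNEL, order);

		if (page)
			atomic_long_add(1UL << order, &foo_drv_pages);
		return page;
	}

	static int __init foo_drv_init(void)
	{
		/*
		 * Assumption: 'shift' converts the counter into kB, so a
		 * counter kept in pages would pass PAGE_SHIFT - 10 to match
		 * the "FooDriver:  NNN kB" style shown in the cover letter.
		 */
		return register_extra_meminfo(&foo_drv_pages, PAGE_SHIFT - 10,
					      "FooDriver");
	}

	static void __exit foo_drv_exit(void)
	{
		unregister_extra_meminfo(&foo_drv_pages);
	}

	module_init(foo_drv_init);
	module_exit(foo_drv_exit);
	MODULE_LICENSE("GPL");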
> >>>>>
> >>>>> The use case I had in mind is slightly different from looking at OOM.
> >>>>>
> >>>>> I'm interested in optimizing the memory footprint of our drivers to
> >>>>> allow better scaling in SR-IOV mode, where one device creates many
> >>>>> separate copies of itself. Those copies can easily take gigabytes of
> >>>>> RAM due to the need to optimize for high-performance networking.
> >>>>> Sometimes it is the amount of memory, and not the HW, that actually
> >>>>> limits the scale factor.
> >>>>>
> >>>>> So I would imagine this feature being used as an aid for driver
> >>>>> developers and not for runtime decisions.
> >>>>>
> >>>>> My 2 cents.
> >>>>>
> >>>>> Thanks
> >>>>>
> >>>>>
> >>>> Thank you for your comment.
> >>>> My idea, I think, may help each driver developer see their memory usage.
> >>>> But I'd also like to see overall memory usage through one node.
> >>> It is more than enough :).
> >>>
> >>>> Let me know if you have more comments.
> >>>> I am planning to move my logic to a new node, /proc/meminfo_extra, in v2.
> >>> Can you please help me understand what that file will look like once
> >>> many drivers start to use this interface? Will I see multiple
> >>> lines?
> >>>
> >>> Something like:
> >>> driver1 ....
> >>> driver2 ....
> >>> driver3 ....
> >>> ...
> >>> driver1000 ....
> >>>
> >>> How can we extend it to support subsystem core code?
> >> I do not have a plan to support subsystem core code.
> > Fair enough.
> >
> >> I just want /proc/meminfo_extra to show the size of memory obtained
> >> through the alloc_pages APIs rather than slub size; it is meant to show
> >> huge hidden memory. I think most drivers do not need to register their
> >> size in /proc/meminfo_extra, because drivers usually use the slub APIs
> >> rather than the alloc_pages APIs, and /proc/slabinfo already shows slub
> >> size in detail.
> > The problem with this statement is that the drivers consuming memory
> > are the ones who are interested in this interface. I may not be accurate
> > here, but I think that all RDMA and major NIC drivers will want to export
> > this information.
> >
> > On my machine, it is something like 6 devices.
> >
> >> As candidates for /proc/meminfo_extra, I expect only a few drivers that
> >> use huge memory, like over 100 MB obtained from the alloc_pages APIs.
> >>
> >> As you say, if there is a static node in /sys for each driver, it may
> >> be used for all drivers.
> >> I think the sysfs class way may be better for showing categorized sums.
> >> But /proc/meminfo_extra can be another way to show that huge hidden memory.
> >> I mean, your idea and my idea are not mutually exclusive.
> > It is just better to have one interface.
> Sorry, regarding that one interface:
>
> If we need to create a meminfo_extra-like node in sysfs, then
> I think further discussion with more people is needed.
> If there is no logical problem with creating /proc/meminfo_extra,
> I'd like to prepare a v2 patch and get more comments on that v2
> patch. Please help again with further discussion.

No problem, but can you please put a summary of that discussion in the
cover letter of v2 and add Greg KH as the driver core maintainer?

It will save us from going in circles.

Thanks

>
> Thank you
>
> >> Thank you
> >>> Thanks
> >>>
> >>>> Thank you
> >>>> Jaewon Kim
>
> >
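On the question of what /proc/meminfo_extra would look like with many
registered drivers, one plausible shape is a show handler that walks a list of
registered entries and prints one "Name: NNN kB" line per counter. This is a
sketch only, not taken from the RFC patches; the registry structure, lock, and
field names are assumptions.

	/*
	 * Illustrative sketch, not from the RFC patches: one possible way a
	 * /proc/meminfo_extra seq_file handler could emit one line per
	 * registered counter, e.g. "IonSystemHeap:   242620 kB".
	 */
	#include <linux/atomic.h>
	#include <linux/list.h>
	#include <linux/seq_file.h>
	#include <linux/spinlock.h>

	struct extra_meminfo {
		struct list_head list;
		atomic_long_t *val;	/* counter owned by the driver */
		int shift;		/* assumed: converts *val to kB */
		const char *name;
	};

	static LIST_HEAD(extra_meminfo_list);
	static DEFINE_SPINLOCK(extra_meminfo_lock);

	static int meminfo_extra_show(struct seq_file *m, void *v)
	{
		struct extra_meminfo *e;
		long kb;

		spin_lock(&extra_meminfo_lock);
		list_for_each_entry(e, &extra_meminfo_list, list) {
			kb = atomic_long_read(e->val);
			kb = e->shift >= 0 ? kb << e->shift : kb >> -e->shift;
			seq_printf(m, "%s: %8ld kB\n", e->name, kb);
		}
		spin_unlock(&extra_meminfo_lock);
		return 0;
	}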