From: SeongJae Park <sj38.park@gmail.com>
To: akpm@linux-foundation.org
Cc: acme@kernel.org, alexander.shishkin@linux.intel.com, amit@kernel.org,
 brendan.d.gregg@gmail.com, brendanhiggins@google.com, cai@lca.pw,
 colin.king@canonical.com, corbet@lwn.net, dwmw@amazon.com,
 jolsa@redhat.com, kirill@shutemov.name, mark.rutland@arm.com,
 mgorman@suse.de, minchan@kernel.org, mingo@redhat.com,
 namhyung@kernel.org, peterz@infradead.org, rdunlap@infradead.org,
 rostedt@goodmis.org, vdavydov.dev@gmail.com, linux-mm@kvack.org,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 00/11] Introduce Data Access MONitor (DAMON)
Date: Tue, 4 Feb 2020 06:23:01 +0000
Message-Id: <20200204062312.19913-1-sj38.park@gmail.com>

Introduction
============

Memory management decisions can normally be more efficient if finer data
access information is available.  However, because finer information usually
comes with higher overhead, most systems, including Linux, have made a
tradeoff: forgo some wise decisions and use coarse information and/or
light-weight heuristics.

A number of experimental data access pattern aware memory management
optimizations (refer to 'Appendix D' for more detail) show that the
sacrifices are huge (up to 2.55x slowdown).  However, none of those has been
successfully adopted into the Linux kernel, mainly due to the absence of a
scalable and efficient data access monitoring mechanism.  Refer to
'Appendix C' for the limitations of existing memory monitoring mechanisms.

DAMON is a data access monitoring solution for this problem.
It is 1) accurate enough for DRAM level memory management, 2) light-weight
enough to be applied online, and 3) keeps a predefined upper-bound of
overhead regardless of the size of target workloads (thus scalable).  Refer
to 'Appendix A: Mechanisms of DAMON' if you are interested in how this is
possible.

DAMON is implemented as a standalone kernel module and provides several
simple interfaces.  Owing to that, though it has mainly been designed for
the kernel's memory management mechanisms, it can also be used by a wide
range of user space programs and people.  Refer to 'Appendix B: Expected
Use-cases' for more detailed expected usages of DAMON.

Frequently Asked Questions
==========================

Q: Why not integrate with perf?
A: From the perspective of perf-like profilers, DAMON can be thought of as a
data source in the kernel, like tracepoints, pressure stall information
(psi), or idle page tracking.  Thus, it can be easily integrated with those.
However, this patchset doesn't provide a fancy perf integration because the
current step of DAMON development is focused on its core logic only.  That
said, DAMON already provides two interfaces for user space programs, based
on debugfs and tracepoints, respectively.  Using the tracepoint interface,
you can use DAMON with perf.  This patchset also provides a user space tool
for DAMON based on the debugfs interface.  It can be used to record,
visualize, and analyze the data access patterns of target processes in a
convenient way.

Q: Why a new module, instead of extending perf or other tools?
A: First, DAMON aims to be used by other programs, including the kernel.
Therefore, having a dependency on specific tools like perf is not desirable.
Second, because it needs to be as lightweight as possible so that it can be
used online, any unnecessary overhead such as kernel - user space context
switching cost should be avoided.
These are the two biggest reasons why DAMON is implemented in the kernel
space.  The idle page tracking subsystem would be the kernel feature most
similar to DAMON.  However, its interface is not compatible with DAMON's.
Also, its internal implementation has no common part to be reused by DAMON.

Q: Can 'perf mem' provide the data required for DAMON?
A: On systems supporting 'perf mem', yes.  DAMON uses the PTE Accessed bits
at a low level.  Other H/W or S/W features usable for the purpose could also
be used.  However, as explained for the above question, DAMON needs to be
implemented in the kernel space.

Evaluations
===========

A prototype of DAMON has been evaluated on an Intel Xeon E7-8837 machine
using 20 benchmarks picked from the SPEC CPU 2006, NAS, Tensorflow
Benchmark, SPLASH-2X, and PARSEC 3 benchmark suites.  This section provides
only a summary of the results.  For more detail, please refer to the slides
used for the introduction of DAMON at the Linux Plumbers Conference 2019[1]
or the MIDDLEWARE'19 industrial track paper[2].

Quality
-------

We first traced and visualized the data access pattern of each workload.  We
were able to confirm that the visualized results are reasonably accurate by
manually comparing them with the source code of the workloads.

To see the usefulness of the monitoring, we optimized 9 memory intensive
workloads among them for memory pressure situations using the DAMON outputs.
In detail, we identified frequently accessed memory regions in each workload
based on the DAMON results and protected them with ``mlock()`` system calls.
The optimized versions consistently show speedup (2.55x in the best case,
1.65x on average) under memory pressure.

Overhead
--------

We also measured the overhead of DAMON.  It was not only under the
upper-bound we set, but much lower (0.6 percent of the bound in the best
case, 13.288 percent of the bound on average).
The reduction of the overhead mainly results from the adaptive regions
adjustment.  We also compared the overhead with that of the straightforward
periodic Accessed bit check-based monitoring, which checks the access of
every page frame.  DAMON's overhead was much smaller than that of the
straightforward mechanism: by 94,242.42x in the best case, and by 3,159.61x
on average.

References
==========

Prototypes of DAMON have been introduced by an LPC kernel summit track
talk[1] and two academic papers[2,3].  Please refer to those for more
detailed information, especially the evaluations.

[1] SeongJae Park, Tracing Data Access Pattern with Bounded Overhead and
    Best-effort Accuracy.  In The Linux Kernel Summit, September 2019.
    https://linuxplumbersconf.org/event/4/contributions/548/
[2] SeongJae Park, Yunjae Lee, Heon Y. Yeom, Profiling Dynamic Data Access
    Patterns with Controlled Overhead and Quality.  In 20th ACM/IFIP
    International Middleware Conference Industry, December 2019.
    https://dl.acm.org/doi/10.1145/3366626.3368125
[3] SeongJae Park, Yunjae Lee, Yunhee Kim, Heon Y. Yeom, Profiling Dynamic
    Data Access Patterns with Bounded Overhead and Accuracy.  In IEEE
    International Workshop on Foundations and Applications of Self* Systems
    (FAS* 2019), June 2019.

Sequence Of Patches
===================

The patches are organized in the following sequence.  The first patch
introduces the DAMON module and its small common functions.  The following
three patches (2nd to 4th) implement the core logics of DAMON: region based
sampling, adaptive regions adjustment, and dynamic memory mapping change
adoption, one by one.  The next three patches (5th to 7th) add interfaces to
DAMON: an API for other kernel code, a debugfs interface for super users,
and a tracepoint for tracepoint-supporting tracers such as perf.
To provide a minimal reference for the debugfs interface and for more
convenient use/tests of DAMON, the next patch (8th) implements a user space
tool.  The 9th patch adds a document for administrators of DAMON, and the
10th patch provides DAMON's kunit tests.  Finally, the last patch (11th)
updates the MAINTAINERS file.

The patches are based on v5.5.  You can also clone the complete git tree:

    $ git clone git://github.com/sjp38/linux -b damon/patches/v3

The web is also available:
https://github.com/sjp38/linux/releases/tag/damon/patches/v3

Patch History
=============

Changes from v2
(https://lore.kernel.org/linux-mm/20200128085742.14566-1-sjpark@amazon.com/)
- Move MAINTAINERS changes to the last commit (Brendan Higgins)
- Add descriptions for the kunit tests: why not only entire mappings, and
  what the 4 input sets are trying to test (Brendan Higgins)
- Remove the 'kdamond_need_stop()' test (Brendan Higgins)
- Discuss 'perf mem' and DAMON (Peter Zijlstra)
- Make the cover letter clearly say what it actually does (Peter Zijlstra)
- Answer why a new module (Qian Cai)
- Disable DAMON by default (Randy Dunlap)
- Change the interface: separate recording attributes (attrs, record,
  rules) and allow multiple kdamond instances
- Implement the kernel API interface

Changes from v1
(https://lore.kernel.org/linux-mm/20200120162757.32375-1-sjpark@amazon.com/)
- Rebase on v5.5
- Add a tracepoint for integration with other tracers (Kirill A. Shutemov)
- document: Add more description for the user space tool (Brendan Higgins)
- unittest: Improve readability (Brendan Higgins)
- unittest: Use consistent names and helper functions (Brendan Higgins)
- Update PG_Young to avoid reclaim logic interference (Yunjae Lee)

Changes from RFC
(https://lore.kernel.org/linux-mm/20200110131522.29964-1-sjpark@amazon.com/)
- Specify the previously ambiguous plan of access pattern based mm
  optimizations
- Support loadable module build
- Clean up code

SeongJae Park (11):
  Introduce Data Access MONitor (DAMON)
  mm/damon: Implement region based sampling
  mm/damon: Adaptively adjust regions
  mm/damon: Apply dynamic memory mapping changes
  mm/damon: Implement kernel space API
  mm/damon: Add debugfs interface
  mm/damon: Add a tracepoint for result writing
  mm/damon: Add minimal user-space tools
  Documentation/admin-guide/mm: Add a document for DAMON
  mm/damon: Add kunit tests
  MAINTAINERS: Update for DAMON

 .../admin-guide/mm/data_access_monitor.rst |  414 +++++
 Documentation/admin-guide/mm/index.rst     |    1 +
 MAINTAINERS                                |   11 +
 include/linux/damon.h                      |   71 +
 include/trace/events/damon.h               |   32 +
 mm/Kconfig                                 |   23 +
 mm/Makefile                                |    1 +
 mm/damon-test.h                            |  604 ++++++
 mm/damon.c                                 | 1412 ++++++++++++++++
 tools/damon/.gitignore                     |    1 +
 tools/damon/_dist.py                       |   35 +
 tools/damon/bin2txt.py                     |   64 +
 tools/damon/damo                           |   37 +
 tools/damon/heats.py                       |  358 ++++
 tools/damon/nr_regions.py                  |   88 +
 tools/damon/record.py                      |  219 +++
 tools/damon/report.py                      |   45 +
 tools/damon/wss.py                         |   94 ++
 18 files changed, 3510 insertions(+)
 create mode 100644 Documentation/admin-guide/mm/data_access_monitor.rst
 create mode 100644 include/linux/damon.h
 create mode 100644 include/trace/events/damon.h
 create mode 100644 mm/damon-test.h
 create mode 100644 mm/damon.c
 create mode 100644 tools/damon/.gitignore
 create mode 100644 tools/damon/_dist.py
 create mode 100644 tools/damon/bin2txt.py
 create mode 100755 tools/damon/damo
 create mode 100644 tools/damon/heats.py
 create mode 100644 tools/damon/nr_regions.py
 create mode 100644 tools/damon/record.py
 create mode 100644 tools/damon/report.py
 create mode 100644 tools/damon/wss.py
-- 
2.17.1

----

Appendix A: Mechanisms of DAMON
===============================

Basic Access Check
------------------

DAMON basically reports which pages are how frequently accessed.  The
report is passed to users in binary format via a ``result file``, whose
path users can set.  Note that the frequency is not an absolute number of
accesses, but a relative frequency among the pages of the target workloads.

Users can also control the resolution of the reports by setting two time
intervals, the ``sampling interval`` and the ``aggregation interval``.  In
detail, DAMON checks access to each page per ``sampling interval``,
aggregates the results (counts the number of accesses to each page), and
reports the aggregated results per ``aggregation interval``.  For the
access check of each page, DAMON uses the Accessed bits of PTEs.

This is thus similar to the previously mentioned periodic access check
based mechanisms, whose overhead increases as the size of the target
process grows.

Region Based Sampling
---------------------

To avoid the unbounded increase of the overhead, DAMON groups a number of
adjacent pages that are assumed to have the same access frequency into a
region.  As long as the assumption (pages in a region have the same access
frequency) is kept, only one page in the region needs to be checked.  Thus,
for each ``sampling interval``, DAMON randomly picks one page in each
region and clears its Accessed bit.  After one more ``sampling interval``,
DAMON reads the Accessed bit of the page and increases the access frequency
of the region if the bit has been set in the meantime.  Therefore, the
monitoring overhead is controllable by setting the number of regions.
DAMON allows users to set the minimum and maximum number of regions for the
trade-off.
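As a rough illustration of this check cycle (this is not the kernel
implementation; the ``Region`` class, the function names, and the
simulation of Accessed bits with a plain set are all invented for the
sketch):

```python
import random

# Toy model of DAMON's region based sampling.  A "region" is a range of
# page frame numbers assumed to share one access frequency; only one
# randomly chosen page per region is checked per sampling interval.

class Region:
    def __init__(self, start, end):
        self.start, self.end = start, end  # page frame numbers [start, end)
        self.sampled_page = None
        self.nr_accesses = 0               # aggregated access frequency

def prepare_sampling(regions):
    """Pick one random page per region and clear its (simulated) Accessed bit."""
    for r in regions:
        r.sampled_page = random.randrange(r.start, r.end)

def check_accesses(regions, accessed_pages):
    """One sampling interval later: if the sampled page's Accessed bit was
    set in the meantime, count the whole region as accessed."""
    for r in regions:
        if r.sampled_page in accessed_pages:
            r.nr_accesses += 1

regions = [Region(0, 100), Region(100, 200)]
prepare_sampling(regions)
# Simulate a workload that touched every page of the first region only.
check_accesses(regions, set(range(0, 100)))
print([r.nr_accesses for r in regions])  # → [1, 0]
```

Note that two checks per sampling interval suffice for each region here,
regardless of how many pages the region spans, which is where the bounded
overhead comes from.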
Aside from this assumption, the scheme is almost the same as the
above-mentioned miniature-like static region based sampling.  In other
words, this scheme cannot preserve the quality of the output if the
assumption is not guaranteed.

Adaptive Regions Adjustment
---------------------------

At the beginning of the monitoring, DAMON constructs the initial regions by
evenly splitting the memory mapped address space of the process into the
user-specified minimum number of regions.  In this initial state, the
assumption is normally not kept and thus the quality could be low.  To keep
the assumption as much as possible, DAMON adaptively merges and splits each
region.  For each ``aggregation interval``, it compares the access
frequencies of adjacent regions and merges those if the frequency
difference is small.  Then, after it reports and clears the aggregated
access frequency of each region, it splits each region into two regions if
the total number of regions is smaller than half of the user-specified
maximum number of regions.

In this way, DAMON provides its best-effort quality and minimal overhead
while keeping the bounds users set for their trade-off.

Applying Dynamic Memory Mappings
--------------------------------

Only a small number of parts in the super-huge virtual address space of the
processes are mapped to physical memory and accessed.  Thus, tracking the
unmapped address regions is just wasteful.  However, tracking every memory
mapping change might incur an overhead.  For this reason, DAMON applies the
dynamic memory mapping changes to the tracking regions only once per a
user-specified time interval (``regions update interval``).

Appendix B: Expected Use-cases
==============================

A straightforward use case of DAMON would be program behavior analysis.
With the DAMON output, users can confirm whether the program is running as
intended or not.
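For instance, recorded access patterns can be rendered as a simple text
heatmap for such visual confirmation.  The sketch below is purely
illustrative: the per-snapshot ``(start_addr, end_addr, nr_accesses)``
tuple format and all names are invented for the example, and the bundled
``heats.py`` tool works on DAMON's real record format instead.

```python
# Render a sequence of monitoring snapshots (lists of hypothetical
# (start_addr, end_addr, nr_accesses) tuples) as a crude text heatmap,
# one row per snapshot, hotter regions shown with darker characters.

SHADES = " .:-=+*#%@"

def heatmap_rows(snapshots, addr_lo, addr_hi, cols=40):
    span = (addr_hi - addr_lo) / cols
    for regions in snapshots:
        peak = max(nr for _, _, nr in regions) or 1
        row = []
        for c in range(cols):
            mid = addr_lo + (c + 0.5) * span
            nr = next((nr for s, e, nr in regions if s <= mid < e), 0)
            row.append(SHADES[min(len(SHADES) - 1,
                                  nr * (len(SHADES) - 1) // peak)])
        yield "".join(row)

snapshots = [
    [(0, 4096, 10), (4096, 8192, 0)],   # first half hot
    [(0, 4096, 0), (4096, 8192, 10)],   # hotness moved to second half
]
for row in heatmap_rows(snapshots, 0, 8192):
    print(row)
```

A shift of the hot area between snapshots, as in this toy input, shows up
as the dark band moving across the rows, which is the kind of pattern a
user would eyeball to confirm expected program phases.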
This will be useful for debuggings and tests of design points. The monitored results can also be useful for counting the dynamic working= set size of workloads. For the administration of memory overcommitted system= s or selection of the environments (e.g., containers providing different amoun= t of memory) for your workloads, this will be useful. If you are a programmer, you can optimize your program by managing the me= mory based on the actual data access pattern. For example, you can identify t= he dynamic hotness of your data using DAMON and call ``mlock()`` to keep you= r hot data in DRAM, or call ``madvise()`` with ``MADV_PAGEOUT`` to proactively reclaim cold data. Even though your program is guaranteed to not encount= er memory pressure, you can still improve the performance by applying the DA= MON outputs for call of ``MADV_HUGEPAGE`` and ``MADV_NOHUGEPAGE``. More crea= tive optimizations would be possible. Our evaluations of DAMON includes a straightforward optimization using the ``mlock()``. Please refer to the = below Evaluation section for more detail. As DAMON incurs very low overhead, such optimizations can be applied not = only offline, but also online. Also, there is no reason to limit such optimiz= ations to the user space. Several parts of the kernel's memory management mecha= nisms could be also optimized using DAMON. The reclamation, the THP (de)promoti= on decisions, and the compaction would be such a candidates. DAMON will con= tinue its development to be highly optimized for the online/in-kernel uses. A Future Plan: Data Access Based Optimizations Support ------------------------------------------------------ As described in the above section, DAMON could be helpful for actual acce= ss based memory management optimizations. Nevertheless, users who want to d= o such optimizations should run DAMON, read the traced data (either online or offline), analyze it, plan a new memory management scheme, and apply the = new scheme by themselves. 
This is easier than it was in the past, but could still require some
effort.  In its next development stage, DAMON will reduce some of these
efforts by allowing users to specify access based memory management rules
for their specific processes.

Because this is just a plan, the specific interface is not fixed yet, but
for example, users will be allowed to write their desired memory management
rules to a special file in a DAMON-specific format.  The rules will be
something like 'if a memory region of a size in a given range keeps a given
range of hotness for more than a given duration, apply a specific memory
management action such as madvise() or mlock() to the region'.  For
example, we can imagine rules like below:

    # format is: <min size> <max size> <min freq> <max freq> <duration> <action>

    # if a region of a size keeps a very high access frequency for more
    # than 100ms, lock the region in the main memory (call mlock()).  But,
    # if the region is larger than 500 MiB, skip it.  The exception might
    # be helpful if the system has only, say, 600 MiB of DRAM; a region
    # larger than 600 MiB cannot be locked in the DRAM at all.
    na 500M 90 99 100ms mlock

    # if a region keeps a high access frequency for more than 100ms, put
    # the region on the head of the LRU list (call madvise() with
    # MADV_WILLNEED).
    na na 80 90 100ms madv_willneed

    # if a region keeps a low access frequency for more than 100ms, put
    # the region on the tail of the LRU list (call madvise() with
    # MADV_COLD).
    na na 10 20 100ms madv_cold

    # if a region keeps a very low access frequency for more than 100ms,
    # swap out the region immediately (call madvise() with MADV_PAGEOUT).
    na na 0 10 100ms madv_pageout

    # if a region of a size bigger than 2MB keeps a very high access
    # frequency for more than 100ms, let the region use huge pages (call
    # madvise() with MADV_HUGEPAGE).
    2M na 90 99 100ms madv_hugepage

    # if a region of a size bigger than 2MB keeps no high access frequency
    # for more than 100ms, prevent the region from using huge pages (call
    # madvise() with MADV_NOHUGEPAGE).
    2M na 0 25 100ms madv_nohugepage

Appendix C: Limitations of Other Access Monitoring Techniques
=============================================================

The memory access instrumentation techniques which are applied to many
tools such as Intel PIN are essential for cases requiring correctness, such
as memory access bug detection or cache level optimizations.  However,
those usually incur exceptionally high overhead which is unacceptable here.

Periodic access checks based on access counting features (e.g., PTE
Accessed bits or PG_Idle flags) can reduce the overhead.  They sacrifice
some of the quality, but that is still acceptable for many in this domain.
However, the overhead increases arbitrarily as the size of the target
workload grows.  Miniature-like static region based sampling can set an
upper-bound on the overhead, but it will then decrease the quality of the
output as the size of the workload grows.

DAMON is another solution that overcomes these limitations.  It is 1)
accurate enough for this domain, 2) light-weight enough to be applied
online, and 3) allows users to set an upper-bound on the overhead,
regardless of the size of target workloads.  It is implemented as a simple
and small kernel module to support various users in both the user space and
the kernel space.  Refer to the 'Evaluations' section above for the
detailed performance of DAMON.

For these goals, DAMON utilizes its two core mechanisms, which allow
lightweight overhead and high quality of output, respectively.  To see how
DAMON achieves those, refer to the 'Mechanisms of DAMON' section above.
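To make the scalability difference concrete, here is a back-of-the-envelope
comparison of how many Accessed-bit checks each approach performs per
sampling interval.  The numbers are example values chosen for the sketch,
not figures from the evaluation:

```python
# Straightforward page-granularity monitoring checks every page, so its
# per-interval work grows linearly with the workload size.  Region based
# sampling checks one page per region, so its work is capped by the
# user-set maximum number of regions.

PAGE_SIZE = 4096
MAX_NR_REGIONS = 1000  # user-configurable upper bound (example value)

def page_checks(workload_bytes):
    """Checks per interval for page-granularity monitoring."""
    return workload_bytes // PAGE_SIZE

def damon_checks(workload_bytes, max_regions=MAX_NR_REGIONS):
    """Checks per interval for region based sampling: one sampled page per
    region, and never more regions than pages."""
    return min(page_checks(workload_bytes), max_regions)

for gib in (1, 16, 256):
    size = gib << 30
    print(f"{gib:>3} GiB: per-page={page_checks(size):>9}, "
          f"region-based<={damon_checks(size)}")
```

The per-page count grows from hundreds of thousands to tens of millions of
checks as the workload grows, while the region based count stays at the
configured cap, which matches the upper-bounded overhead claim above.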
Appendix D: Related Works
=========================

There are a number of studies[1,2,3,4,5,6] optimizing memory management
mechanisms based on actual memory access patterns, which show impressive
results.  However, most of those give no deep consideration to the
monitoring of the accesses itself.  Some of those focus on the overhead of
the monitoring, but do not consider the accuracy scalability[6] or have
additional dependencies[7].  Indeed, one recent study[5] about proactive
reclamation has also been proposed[8] to the kernel community, but the
monitoring overhead was considered a main problem.

[1] Subramanya R Dulloor, Amitabha Roy, Zheguang Zhao, Narayanan Sundaram,
    Nadathur Satish, Rajesh Sankaran, Jeff Jackson, and Karsten Schwan.
    2016.  Data tiering in heterogeneous memory systems.  In Proceedings of
    the 11th European Conference on Computer Systems (EuroSys).  ACM, 15.
[2] Youngjin Kwon, Hangchen Yu, Simon Peter, Christopher J Rossbach, and
    Emmett Witchel.  2016.  Coordinated and efficient huge page management
    with ingens.  In 12th USENIX Symposium on Operating Systems Design and
    Implementation (OSDI).  705–721.
[3] Harald Servat, Antonio J Peña, Germán Llort, Estanislao Mercadal,
    Hans-Christian Hoppe, and Jesús Labarta.  2017.  Automating the
    application data placement in hybrid memory systems.  In 2017 IEEE
    International Conference on Cluster Computing (CLUSTER).  IEEE,
    126–136.
[4] Vlad Nitu, Boris Teabe, Alain Tchana, Canturk Isci, and Daniel
    Hagimont.  2018.  Welcome to zombieland: practical and energy-efficient
    memory disaggregation in a datacenter.  In Proceedings of the 13th
    European Conference on Computer Systems (EuroSys).  ACM, 16.
[5] Andres Lagar-Cavilla, Junwhan Ahn, Suleiman Souhlal, Neha Agarwal,
    Radoslaw Burny, Shakeel Butt, Jichuan Chang, Ashwin Chaugule, Nan Deng,
    Junaid Shahid, Greg Thelen, Kamil Adam Yurtsever, Yu Zhao, and
    Parthasarathy Ranganathan.  2019.  Software-Defined Far Memory in
    Warehouse-Scale Computers.  In Proceedings of the 24th International
    Conference on Architectural Support for Programming Languages and
    Operating Systems (ASPLOS).  ACM, New York, NY, USA, 317–330.
    DOI:https://doi.org/10.1145/3297858.3304053
[6] Carl Waldspurger, Trausti Saemundsson, Irfan Ahmad, and Nohhyun Park.
    2017.  Cache Modeling and Optimization using Miniature Simulations.  In
    2017 USENIX Annual Technical Conference (ATC).  USENIX Association,
    Santa Clara, CA, 487–498.
    https://www.usenix.org/conference/atc17/technical-sessions/
[7] Haojie Wang, Jidong Zhai, Xiongchao Tang, Bowen Yu, Xiaosong Ma, and
    Wenguang Chen.  2018.  Spindle: Informed Memory Access Monitoring.  In
    2018 USENIX Annual Technical Conference (ATC).  USENIX Association,
    Boston, MA, 561–574.
    https://www.usenix.org/conference/atc18/presentation/wang-haojie
[8] Jonathan Corbet.  2019.  Proactively reclaiming idle memory.  (2019).
    https://lwn.net/Articles/787611/.