Date: Thu, 7 May 2026 18:22:26 -0400
From: "Michael S. Tsirkin"
To: linux-kernel@vger.kernel.org
Subject: [PATCH v5 00/28] mm/virtio: skip redundant zeroing of host-zeroed pages

When a guest reports free pages to the hypervisor via virtio-balloon's
free page reporting, the host typically zeros those pages when
reclaiming their backing memory (e.g., via MADV_DONTNEED on anonymous
mappings). When the guest later reallocates those pages, the kernel
zeros them again, redundantly.

Further, on architectures with aliasing caches, upstream with
init_on_alloc double-zeros user pages: once via kernel_init_pages() in
post_alloc_hook, and again via clear_user_highpage() at the callsite
(because user_alloc_needs_zeroing() returns true).

This series eliminates that double-zeroing by moving the zeroing into
post_alloc_hook and propagating the "host already zeroed this page"
information through the buddy allocator.

For page reporting, VIRTIO_BALLOON_F_DEVICE_INIT_REPORTED (bit 6) is
used. For the inflate/deflate path,
VIRTIO_BALLOON_F_DEVICE_INIT_ON_INFLATE (bit 7) is used.

Virtio spec:
https://lore.kernel.org/all/cover.1778140241.git.mst@redhat.com

The first 13 patches are independently mergeable cleanups/fixes:
- Patches 1-12: mm rework (init_on_alloc double-zeroing fix).
- Patch 13: page_reporting capacity bugfix.
-------

Performance with THP enabled on a 2GB VM, 1 vCPU, allocating 256MB of
anonymous pages:

    metric          baseline         optimized        delta
    task-clock       232 +-  20 ms     51 +-  26 ms   -78%
    cache-misses    1.20M +- 248K    288K +- 102K     -76%
    instructions    16.3M +- 1.2M   13.8M +- 1.0M     -15%

With hugetlb surplus pages:

    metric          baseline         optimized        delta
    task-clock       219 +-  23 ms     65 +-  34 ms   -70%
    cache-misses    1.17M +- 391K    263K +-  36K     -78%
    instructions    17.9M +- 1.2M   15.1M +- 724K     -16%

Two flags track known-zero pages:

PG_zeroed (aliased to PG_private) marks buddy allocator pages that are
known to contain all zeros -- either because the host zeroed them
during page reporting, or because they were freed via the balloon
deflate path. It lives on free-list pages and is consumed by
post_alloc_hook() on allocation.

HPG_zeroed (stored in hugetlb folio->private bits) serves the same
purpose for hugetlb pool pages, which are kept in a pool and may be
zeroed long after buddy allocation, so PG_zeroed (consumed at
allocation time) cannot track their state.
PG_zeroed lifecycle:

Sets PG_zeroed:
- page_reporting_drain: on reported pages when host zeroes them
- __free_pages_ok / __free_frozen_pages: when FPI_ZEROED is set
  (balloon deflate path)
- buddy merge: on merged page if both buddies were zeroed
- expand(): propagate to split-off buddy sub-pages

Clears PG_zeroed:
- __free_pages_prepare: clears all PAGE_FLAGS_CHECK_AT_PREP flags
  (PG_zeroed included), preventing PG_private aliasing leaks
- rmqueue_buddy / __rmqueue_pcplist: read-then-clear, passes zeroed
  hint to prep_new_page -> post_alloc_hook
- __isolate_free_page: clear (compaction/page_reporting isolation)
- compaction, alloc_contig, split_free_frozen: clear before use
- buddy merge: clear both pages before merge, then conditionally
  re-set on merged head if both were zeroed

HPG_zeroed lifecycle (hugetlb pool pages, stored in folio->private):

Sets HPG_zeroed:
- alloc_surplus_hugetlb_folio: after buddy allocation with __GFP_ZERO,
  mark pool page as known-zero

Clears HPG_zeroed:
- free_huge_folio: page was mapped to userspace, no longer known-zero
  when it returns to the pool
- alloc_hugetlb_folio / alloc_hugetlb_folio_reserve: clear after
  reporting to caller via bool *zeroed output (consumed)

- The optimization is most effective with THP, where entire 2MB pages
  are allocated directly from reported order-9+ buddy pages. Without
  THP, only ~21% of order-0 allocations come from reported pages due
  to low-order fragmentation.

- Persistent hugetlb pool pages are not covered: when freed by
  userspace they return to the hugetlb free pool, not the buddy
  allocator, so they are never reported to the host. Surplus hugetlb
  pages are allocated from buddy and do benefit.

- PG_zeroed is aliased to PG_private. __free_pages_prepare() clears it
  (preventing filesystem PG_private from leaking as false PG_zeroed).
  FPI_ZEROED re-sets it after prepare for balloon deflate pages. Is
  aliasing PG_private acceptable, or should a different bit be used?
- On architectures with aliasing caches, upstream with init_on_alloc
  double-zeros user pages: once via kernel_init_pages() in
  post_alloc_hook, and again via clear_user_highpage() at the callsite
  (because user_alloc_needs_zeroing() returns true). Our patches
  eliminate this by zeroing once via folio_zero_user() in
  post_alloc_hook. Not a critical fix (people who set init_on_alloc
  know they are paying a performance cost), but a nice cleanup anyway.

Test program:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_POPULATE_WRITE
#define MADV_POPULATE_WRITE 23
#endif

#ifndef MAP_HUGETLB
#define MAP_HUGETLB 0x40000
#endif

int main(int argc, char **argv)
{
	unsigned long size;
	int flags = MAP_PRIVATE | MAP_ANONYMOUS;
	void *p;
	int r;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <size_mb> [huge]\n", argv[0]);
		return 1;
	}
	size = atol(argv[1]) * 1024UL * 1024;
	if (argc >= 3 && strcmp(argv[2], "huge") == 0)
		flags |= MAP_HUGETLB;

	p = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	r = madvise(p, size, MADV_POPULATE_WRITE);
	if (r) {
		perror("madvise");
		return 1;
	}
	munmap(p, size);
	return 0;
}

Test script (bench.sh):

#!/bin/bash
# Usage: bench.sh <size_mb> <mode> <iter> [huge]
# mode 0 = baseline, mode 1 = skip zeroing
SZ=${1:-256}; MODE=${2:-0}; ITER=${3:-10}; HUGE=${4:-}
FLUSH=/sys/module/page_reporting/parameters/flush
PERF_DATA=/tmp/perf-$MODE.csv

rmmod virtio_balloon 2>/dev/null
insmod virtio_balloon.ko host_zeroes_pages=$MODE
echo 512 > $FLUSH
[ "$HUGE" = "huge" ] && echo $((SZ/2)) > /proc/sys/vm/nr_overcommit_hugepages
rm -f $PERF_DATA

echo "=== sz=${SZ}MB mode=$MODE iter=$ITER $HUGE ==="
for i in $(seq 1 $ITER); do
	echo 3 > /proc/sys/vm/drop_caches
	echo 512 > $FLUSH
	perf stat -e task-clock,instructions,cache-misses \
		-x, -o $PERF_DATA --append -- ./alloc_once $SZ $HUGE
done
[ "$HUGE" = "huge" ] && echo 0 > /proc/sys/vm/nr_overcommit_hugepages
rmmod virtio_balloon

awk -F, '/^#/||/^$/{next}{v=$1+0;e=$3;gsub(/ /,"",e);s[e]+=v;ss[e]+=v*v;n[e]++}
	END{for(e in s){a=s[e]/n[e];d=sqrt(ss[e]/n[e]-a*a);
	printf " %-16s %10.0f +- %8.0f (n=%d)\n",e,a,d,n[e]}}' $PERF_DATA

Compile and run:

gcc -static -O2 -o alloc_once alloc_once.c
bash bench.sh 256 0 10        # baseline  (regular pages)
bash bench.sh 256 1 10        # optimized (regular pages)
bash bench.sh 256 0 10 huge   # baseline  (hugetlb surplus)
bash bench.sh 256 1 10 huge   # optimized (hugetlb surplus)

Changes since v4:

With the virtio spec posted, update to the latest spec:
- Add VIRTIO_BALLOON_F_DEVICE_INIT_REPORTED (bit 6) for reporting.
- Add VIRTIO_BALLOON_F_DEVICE_INIT_ON_INFLATE (bit 7) for inflate.
- Per-page virtqueue submission, per-page used_len feedback.
- Balloon migration preserves the PageZeroed hint.
- page_reporting capacity bugfix for small virtqueues.
- PG_zeroed propagation in split_large_buddy.
- Disable both features for confidential computing guests.
- Gate host_zeroes_pages on PAGE_POISON/poison_val: when PAGE_POISON
  is negotiated with a non-zero poison_val, the device fills pages
  with poison, not zeros, so host_zeroes_pages must be false.
- Disable ON_INFLATE when PAGE_POISON with non-zero poison_val.
- Bound inflate bitmap reads by used_len from the device.
- Move the ON_INFLATE poison_val check to validate() for proper
  feature negotiation.
- Fix NUMA interleave index for unaligned VMA start (new patch 1).
- Drop vma_alloc_folio_user_addr: with the ilx fix, callers can pass
  the raw fault address to vma_alloc_folio directly.
- Tested with DEBUG_VM, INIT_ON_ALLOC/FREE enabled.

Changes since v3 (address review by Gregory Price and David Hildenbrand):

- Keep user_addr threading internal: public APIs (__alloc_pages,
  __folio_alloc, folio_alloc_mpol) are unchanged. Only internal
  functions (__alloc_frozen_pages_noprof, __alloc_pages_mpol) carry
  user_addr. This eliminates all API churn for external callers.
- Add vma_alloc_folio_user_addr() (2/22) to separate the NUMA policy
  address from the zeroing hint address.
  Fixes NUMA interleave index corruption when passing unaligned fault
  address for higher-order allocations.
- Add per-page zeroed_bitmap to page_reporting_dev_info (17/22). The
  driver's report() callback manages the bitmap. Drain checks it gated
  by the host_zeroes_pages static key. This matches the proposed
  virtio balloon extension at
  https://lore.kernel.org/all/cover.1776874126.git.mst@redhat.com/
- Clear PG_zeroed in __isolate_free_page() to prevent the aliased
  PG_private flag from leaking to compaction/alloc_contig paths.
- Do not exclude PG_zeroed from the PAGE_FLAGS_CHECK_AT_PREP macro.
  Instead, __free_pages_prepare() clears it (preventing filesystem
  PG_private leaking as false PG_zeroed), and FPI_ZEROED sets it after
  prepare. Only the buddy merge assertion is relaxed.
- Initialize alloc_context.user_addr in alloc_pages_bulk_noprof.
- Deflate and hugetlb changes are much smaller now. Still, the
  patchset can be merged gradually, if desired.

Changes since v2 (address review by Gregory Price and David Hildenbrand):

- v2 used the pghint_t / vma_alloc_folio_hints API. v3 switches to
  threading user_addr through the page allocator and using __GFP_ZERO,
  so post_alloc_hook() can use folio_zero_user() for cache-friendly
  zeroing when the user fault address is known.
- Use FPI_ZEROED to set PG_zeroed after __free_pages_prepare() instead
  of runtime masking in __free_one_page (further refined in v4).
- Drop the redundant page_poisoning_enabled() check from the mm core
  free path -- already guarded at feature negotiation time in
  virtio_balloon_validate. The balloon driver keeps its own
  page_poisoning_enabled_static() check as defense in depth.
- Split free_frozen_pages_zeroed and put_page_zeroed into separate
  patches. David Hildenbrand indicated he intends to rework balloon
  pages to be frozen (no refcount), at which point put_page_zeroed
  (21/22) can be dropped and the balloon can call
  free_frozen_pages_zeroed directly.
- Use HPG_zeroed flag (in hugetlb folio->private) for hugetlb pool
  pages instead of PG_zeroed, since pool pages are zeroed long after
  buddy allocation and PG_zeroed is consumed at allocation time.
- syzbot CI found a PF_NO_COMPOUND BUG in the v2 pghint_t approach
  where __ClearPageZeroed was called on compound hugetlb pages in
  free_huge_folio. The v3 HPG_zeroed approach avoids this.
- Remove redundant arch vma_alloc_zeroed_movable_folio overrides on
  x86, s390, m68k, and alpha (12/22). Suggested by David Hildenbrand.
- Updated benchmarking script to compute per-run avg +- stddev via awk
  on CSV output.

Changes v1->v2:

- Replaced __GFP_PREZEROED with PG_zeroed page flag (aliased
  PG_private)
- Added pghint_t type and vma_alloc_folio_hints() API
- Track PG_zeroed across buddy merges and splits
- Added post_alloc_hook integration (single consume/clear point)
- Added hugetlb support (pool pages + memfd)
- Added page_reporting flush parameter for deterministic testing
- Added free_frozen_pages_hint/put_page_hint for balloon deflate path
- Added try_to_claim_block PG_zeroed preservation
- Updated perf numbers with per-iteration flush methodology

Written with assistance from Claude (claude-opus-4-6). Reviewed by
cursor-agent (GPT-5.4-xhigh). Everything manually read, patchset split
and commit logs edited manually.

Michael S. Tsirkin (28):
  mm: mempolicy: fix interleave index for unaligned VMA start
  mm: thread user_addr through page allocator for cache-friendly zeroing
  mm: add folio_zero_user stub for configs without THP/HUGETLBFS
  mm: page_alloc: move prep_compound_page before post_alloc_hook
  mm: use folio_zero_user for user pages in post_alloc_hook
  mm: use __GFP_ZERO in vma_alloc_zeroed_movable_folio
  mm: alloc_anon_folio: pass raw fault address to vma_alloc_folio
  mm: use __GFP_ZERO in alloc_anon_folio
  mm: vma_alloc_anon_folio_pmd: pass raw fault address to vma_alloc_folio
  mm: use __GFP_ZERO in vma_alloc_anon_folio_pmd
  mm: hugetlb: use __GFP_ZERO and skip zeroing for zeroed pages
  mm: memfd: skip zeroing for zeroed hugetlb pool pages
  mm: remove arch vma_alloc_zeroed_movable_folio overrides
  mm: page_reporting: allow driver to set batch capacity
  mm: page_alloc: propagate PageReported flag across buddy splits
  mm: page_reporting: skip redundant zeroing of host-zeroed reported pages
  mm: page_reporting: add per-page zeroed bitmap for host feedback
  mm: page_alloc: clear PG_zeroed on buddy merge if not both zero
  mm: page_alloc: preserve PG_zeroed in page_del_and_expand
  virtio_balloon: submit reported pages as individual buffers
  mm: page_reporting: add flush parameter with page budget
  mm: page_alloc: propagate PG_zeroed in split_large_buddy
  mm: add free_frozen_pages_zeroed
  mm: add put_page_zeroed and folio_put_zeroed
  virtio_balloon: implement VIRTIO_BALLOON_F_DEVICE_INIT_ON_INFLATE
  mm: balloon: use put_page_zeroed for zeroed balloon pages
  virtio_balloon: disable reporting zeroed optimization for confidential guests
  virtio_balloon: implement VIRTIO_BALLOON_F_DEVICE_INIT_REPORTED

 arch/alpha/include/asm/page.h       |   3 -
 arch/m68k/include/asm/page_no.h     |   3 -
 arch/s390/include/asm/page.h        |   3 -
 arch/x86/include/asm/page.h         |   3 -
 drivers/virtio/virtio_balloon.c     | 165 +++++++++++++++++++++-----
 fs/hugetlbfs/inode.c                |  10 +-
 include/linux/gfp.h                 |   3 +-
 include/linux/highmem.h             |   9 +-
 include/linux/hugetlb.h             |  14 ++-
 include/linux/mm.h                  |  15 +++
 include/linux/page-flags.h          |   9 ++
 include/linux/page_reporting.h      |  13 +++
 include/uapi/linux/virtio_balloon.h |   2 +
 mm/balloon.c                        |   7 +-
 mm/compaction.c                     |   7 +-
 mm/huge_memory.c                    |  12 +-
 mm/hugetlb.c                        |  99 +++++++++++----
 mm/internal.h                       |  17 ++-
 mm/memfd.c                          |  14 ++-
 mm/memory.c                         |  17 +--
 mm/mempolicy.c                      |  45 +++++--
 mm/page_alloc.c                     | 174 ++++++++++++++++++++++------
 mm/page_reporting.c                 |  88 +++++++++++---
 mm/page_reporting.h                 |  12 ++
 mm/slub.c                           |   4 +-
 mm/swap.c                           |  18 ++-
 26 files changed, 579 insertions(+), 187 deletions(-)

-- 
MST